Sample records for marginal probability density

  1. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
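
    For reference, the construction the abstract relies on can be stated compactly (a standard formula, with θ the state parameters such as displacement, squeezing, and inverse temperature, and g_B the Bures metric):

    ```latex
    % Quantum Jeffreys prior as the normalized Bures volume element,
    % and a marginal obtained by integrating out the remaining parameters.
    p(\theta) \;\propto\; \sqrt{\det g_B(\theta)}, \qquad
    p(\theta_1) \;=\; \int p(\theta_1,\theta_2,\ldots,\theta_n)\, d\theta_2 \cdots d\theta_n .
    ```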

  2. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Component Analysis (ICA), for choosing the set of filters that yields the most informative marginals, meaning that the product of the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that, compared to Principal Components Analysis, ICA provides superior performance for modeling natural and synthetic textures.
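
    A rough sketch of the idea under stated assumptions: random 7x7 kernels stand in for the paper's steerable filter bank, and scikit-learn's FastICA stands in for the paper's specific ICA implementation. ICA rotates the filter-output space so that the marginals are as independent as possible, which is exactly when their product best approximates the joint density.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve
    from sklearn.decomposition import FastICA

    def filter_bank_responses(texture, kernels):
        """Stack filter-bank responses into an (n_pixels x n_filters) matrix."""
        return np.column_stack(
            [fftconvolve(texture, k, mode="same").ravel() for k in kernels])

    rng = np.random.default_rng(0)
    texture = rng.standard_normal((128, 128))                  # stand-in texture patch
    kernels = [rng.standard_normal((7, 7)) for _ in range(6)]  # stand-in filter bank

    X = filter_bank_responses(texture, kernels)
    ica = FastICA(n_components=6, random_state=0)
    S = ica.fit_transform(X)   # rotated outputs whose marginals are maximally independent,
                               # so the product of marginals best matches their joint pdf
    ```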

  3. Evaluating Oilseed Biofuel Production Feasibility in California’s San Joaquin Valley Using Geophysical and Remote Sensing Techniques

    PubMed Central

    Corwin, Dennis L.; Yemoto, Kevin; Clary, Wes; Banuelos, Gary; Skaggs, Todd H.; Lesch, Scott M.

    2017-01-01

    Though more costly than petroleum-based fuels and a minor component of overall military fuel sources, biofuels are nonetheless strategically valuable to the military because of intentional reliance on multiple, reliable, secure fuel sources. Significant reduction in oilseed biofuel cost occurs when grown on marginally productive saline-sodic soils plentiful in California’s San Joaquin Valley (SJV). The objective is to evaluate the feasibility of oilseed production on marginal soils in the SJV to support a 115 ML yr⁻¹ biofuel conversion facility. The feasibility evaluation involves: (1) development of an Ida Gold mustard oilseed yield model for marginal soils; (2) identification of marginally productive soils; (3) development of a spatial database of edaphic factors influencing oilseed yield; and (4) performance of Monte Carlo simulations showing potential biofuel production on marginally productive SJV soils. The model indicates oilseed yield is related to boron, salinity, leaching fraction, and water content at field capacity. Monte Carlo simulations for the entire SJV fit a shifted gamma probability density function: Q = 68.986 + gamma(6.134, 5.285), where Q is biofuel production in ML yr⁻¹. The shifted gamma cumulative distribution function indicates a 0.15–0.17 probability of meeting the target biofuel-production level of 115 ML yr⁻¹, making adequate biofuel production unlikely. PMID:29036925
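
    The reported probability can be reproduced directly from the quoted distribution. A minimal check with SciPy, assuming gamma(a, b) in the abstract denotes shape a and scale b in ML yr⁻¹:

    ```python
    from scipy.stats import gamma

    # Shifted gamma from the abstract: Q = 68.986 + Gamma(shape=6.134, scale=5.285), ML/yr.
    shift, shape, scale = 68.986, 6.134, 5.285
    target = 115.0                                            # production target, ML/yr

    p_meet = gamma.sf(target - shift, a=shape, scale=scale)   # P(Q >= 115)
    print(f"P(Q >= {target:g} ML/yr) = {p_meet:.2f}")         # ~0.15-0.17, as reported
    ```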

  4. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    NASA Astrophysics Data System (ADS)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is a key component of many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. Conventional bivariate distributions are often unable to model rainfall-runoff variables because they either constrain the range of the dependence or fix the form of the marginal distributions. This paper therefore presents an approach to derive an entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The derived distribution can model the full range of dependence and allows different specified marginals. Modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions, comprising (a) nonparametric detection of modes and the underlying probability density and (b) fitting of appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from small agricultural experimental watersheds in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The univariate analysis shows that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow a mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. The results indicate that (1) the derived joint distribution successfully preserves the dependence between rainfall and runoff, and (2) K-S goodness-of-fit tests confirm that the re-derived marginal distributions recover the underlying univariate probability densities, further assuring that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution captures dependence structures that conventional bivariate joint distributions cannot. [Figure: joint rainfall-runoff entropy-based PDF with corresponding marginal PDFs and histograms for the W12 watershed; table: K-S test results and RMSE for univariate distributions derived from the maximum-entropy-based joint probability distribution.]
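
    A minimal sketch of steps (i)-(ii) with synthetic stand-in data (the study used the Riesel watershed records): fit the gamma marginal for rainfall and assemble the moment constraints that define the maximum-entropy joint density; solving for the Lagrange multipliers, step (iii), is omitted here.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rain = rng.gamma(shape=2.0, scale=15.0, size=200)             # synthetic rainfall (mm)
    runoff = 0.3 * rain * rng.gamma(shape=3.0, scale=1 / 3, size=200)

    # Step (i): parametric marginal for rainfall (gamma, as found in the study).
    a, loc, scale = stats.gamma.fit(rain, floc=0.0)

    # Step (ii): the constraints defining the maximum-entropy joint density
    #   f(x, y) ∝ exp(-λ1·x - λ2·y - λ3·ln x - λ4·ln y - λ5·x·y)
    constraints = {
        "E[x]": rain.mean(), "E[y]": runoff.mean(),
        "E[ln x]": np.log(rain).mean(), "E[ln y]": np.log(runoff).mean(),
        "cov(x, y)": np.cov(rain, runoff)[0, 1],
    }
    print(constraints)
    ```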

  5. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
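
    As a generic numerical counterpart of the paper's analytic derivation, a joint density tabulated on a grid yields the marginal and conditional densities by quadrature. The correlated Gaussian below is only a stand-in; the paper's joint density is non-Gaussian.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid

    slope = np.linspace(-0.5, 0.5, 201)
    elev = np.linspace(-3.0, 3.0, 301)
    S, E = np.meshgrid(slope, elev, indexing="ij")

    rho, s_s, s_e = 0.3, 0.15, 1.0                      # stand-in parameters
    q = (S / s_s) ** 2 - 2 * rho * (S / s_s) * (E / s_e) + (E / s_e) ** 2
    joint = np.exp(-q / (2 * (1 - rho ** 2)))
    joint /= trapezoid(trapezoid(joint, elev, axis=1), slope)  # normalize to unit mass

    marginal_slope = trapezoid(joint, elev, axis=1)     # integrate elevation out
    conditional_elev = joint / marginal_slope[:, None]  # f(elevation | slope)
    ```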

  6. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.

  7. Realistic respiratory motion margins for external beam partial breast irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conroy, Leigh; Quirk, Sarah; Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4

    Purpose: Respiratory margins for partial breast irradiation (PBI) have been largely based on geometric observations, which may overestimate the margin required for dosimetric coverage. In this study, dosimetric population-based respiratory margins and margin formulas for external beam partial breast irradiation are determined. Methods: Volunteer respiratory data and anterior–posterior (AP) dose profiles from clinical treatment plans of 28 3D conformal radiotherapy (3DCRT) PBI patient plans were used to determine population-based respiratory margins. The peak-to-peak amplitudes (A) of realistic respiratory motion data from healthy volunteers were scaled from A = 1 to 10 mm to create respiratory motion probability density functions. Dose profiles were convolved with the respiratory probability density functions to produce blurred dose profiles accounting for respiratory motion. The required margins were found by measuring the distance between the simulated treatment and original dose profiles at the 95% isodose level. Results: The symmetric dosimetric respiratory margins to cover 90%, 95%, and 100% of the simulated treatment population were 1.5, 2, and 4 mm, respectively. With patient set up at end exhale, the required margins were larger in the anterior direction than the posterior. For respiratory amplitudes less than 5 mm, the population-based margins can be expressed as a fraction of the extent of respiratory motion. The derived formulas in the anterior/posterior directions for 90%, 95%, and 100% simulated population coverage were 0.45A/0.25A, 0.50A/0.30A, and 0.70A/0.40A. The differences in formulas for different population coverage criteria demonstrate that respiratory trace shape and baseline drift characteristics affect individual respiratory margins even for the same average peak-to-peak amplitude. Conclusions: A methodology for determining population-based respiratory margins using real respiratory motion patterns and dose profiles in the AP direction was described. It was found that the currently used respiratory margin of 5 mm in partial breast irradiation may be overly conservative for many 3DCRT PBI patients. Amplitude alone was found to be insufficient to determine patient-specific margins: individual respiratory trace shape and baseline drift both contributed to the dosimetric target coverage. With respiratory coaching, individualized respiratory margins smaller than the full extent of motion could reduce planning target volumes while ensuring adequate coverage under respiratory motion.
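
    A toy version of the dosimetric margin step, under stated assumptions: an idealized sigmoid-penumbra profile and a cosine breathing trace replace the clinical dose profiles and volunteer data, and the margin is read off as the shift of the 95% isodose edges after blurring by the motion pdf.

    ```python
    import numpy as np

    x = np.arange(-30.0, 30.0, 0.1)                        # mm along the AP axis
    static = 100.0 / (1 + np.exp((np.abs(x) - 15) / 1.5))  # flat field, sigmoid penumbrae

    A = 6.0                                                # peak-to-peak amplitude (mm)
    t = np.linspace(0.0, 2 * np.pi, 4000)
    trace = 0.5 * A * (1 - np.cos(t))                      # breathing about end-exhale
    counts, edges = np.histogram(trace, bins=60, range=(0, A))
    weights = counts / counts.sum()                        # respiratory motion pdf
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Dose seen by the moving target: average the static profile over the motion pdf.
    blurred = sum(w * np.interp(x + s, x, static) for w, s in zip(weights, centers))

    def isodose_edges(profile, level=95.0):
        inside = np.where(profile >= level)[0]
        return x[inside[0]], x[inside[-1]]

    (s_lo, s_hi), (b_lo, b_hi) = isodose_edges(static), isodose_edges(blurred)
    print(f"required margins ~ {abs(b_lo - s_lo):.1f} mm / {abs(s_hi - b_hi):.1f} mm")
    ```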

  8. Pollinator communities in strawberry crops - variation at multiple spatial scales.

    PubMed

    Ahrenfeldt, E J; Klatt, B K; Arildsen, J; Trandem, N; Andersson, G K S; Tscharntke, T; Smith, H G; Sigsgaard, L

    2015-08-01

    Predicting the potential pollination services of wild bees in crops requires knowledge of their spatial distribution within fields. Field margins can serve as nesting and foraging habitats for wild bees and can be a source of pollinators. Regional differences in pollinator community composition may affect this spill-over of bees. We studied how regional and local differences affect the spatial distribution of wild bee species richness, activity-density and body size in crop fields. We sampled bees both from the field centre and at two different types of semi-natural field margins, grass strips and hedges, in 12 strawberry fields. The fields were distributed over four regions in Northern Europe, representing an almost 1100 km long north-south gradient. Even over this gradient, daytime temperatures during sampling did not differ significantly between regions and therefore probably did not affect bee activity. Bee species richness was higher in field margins than in field centres, independent of field size. However, there was no difference between centre and margin in body size or activity-density. In contrast, bee activity-density increased towards the southern regions, whereas mean body size increased towards the north. In conclusion, our study revealed a general pattern across European regions of bee diversity, but not activity-density, declining towards the field interior, which suggests that the benefits of functional diversity of pollinators may be difficult to achieve through spill-over effects from margins to the crop. We also identified dissimilar regional patterns in bee diversity and activity-density, which should be taken into account in conservation management.

  9. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.

  10. Kinetic energy as functional of the correlation hole

    NASA Astrophysics Data System (ADS)

    Nalewajski, Roman F.

    2003-01-01

    Using the marginal decomposition of the many-body probability distribution, the electronic kinetic energy is expressed as a functional of the electron density and the correlation hole. The analysis covers both the molecule as a whole and its constituent subsystems. The importance of the Fisher information for locality is emphasized.

  11. Diffusion of active chiral particles

    NASA Astrophysics Data System (ADS)

    Sevilla, Francisco J.

    2016-12-01

    The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x and moving along the direction v̂ at time t, and numerically, by the use of Langevin dynamics simulations. The analysis is focused on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion), which is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the position distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical-shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called "anomalous, yet Brownian, diffusion" effect, for which particles follow a non-Gaussian distribution for the positions yet the mean-squared displacement is a linear function of time.
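
    A minimal Langevin sketch in the spirit of the paper's numerical check (illustrative parameters, not the paper's multipole solution): each particle swims at constant speed, rotates about its own uniformly distributed random axis, and undergoes rotational diffusion; the kurtosis of a positional marginal then measures how far the distribution is from Gaussian.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(2)
    N, steps, dt = 4000, 3000, 1e-3
    v0, omega, Drot = 1.0, 10.0, 0.5       # speed, chiral angular speed, rot. diffusion

    n = rng.standard_normal((N, 3)); n /= np.linalg.norm(n, axis=1, keepdims=True)
    e = rng.standard_normal((N, 3)); e /= np.linalg.norm(e, axis=1, keepdims=True)
    r = np.zeros((N, 3))

    for _ in range(steps):
        torque = omega * np.cross(n, e)                    # deterministic chiral rotation
        xi = np.sqrt(2 * Drot * dt) * rng.standard_normal((N, 3))
        xi -= (xi * e).sum(axis=1, keepdims=True) * e      # keep noise tangent to the sphere
        e += torque * dt + xi
        e /= np.linalg.norm(e, axis=1, keepdims=True)      # re-project onto |e| = 1
        r += v0 * e * dt

    msd = (r ** 2).sum(axis=1).mean()          # mean-squared displacement at t = steps*dt
    kurt_x = kurtosis(r[:, 0])                 # 0 for Gaussian, negative for shell-like
    print(msd, kurt_x)
    ```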

  12. Bayesian source tracking via focalization and marginalization in an uncertain Mediterranean Sea environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L

    2010-07-01

    This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.

  13. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.

  14. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning

    NASA Astrophysics Data System (ADS)

    Jiang, Runqing; Barnett, Rob B.; Chow, James C. L.; Chen, Jeff Z. Y.

    2007-03-01

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15° increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has lower complication in the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.
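
    A toy version of the PMDD calculation under stated assumptions (an idealized S-I profile rather than a Pinnacle export, and Gaussian motion pdfs): blur the static profile at several motion standard deviations and track the percentage change of the mean target dose.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    dx = 0.5                                               # mm per sample
    x = np.arange(-60.0, 60.0, dx)
    static = 100.0 / (1 + np.exp((np.abs(x) - 25) / 2.0))  # idealized S-I dose profile
    target = np.abs(x) <= 20                               # target region of the profile

    for sigma_mm in (2.0, 5.0, 8.0):
        blurred = gaussian_filter1d(static, sigma=sigma_mm / dx)
        pmdd = 100.0 * (static[target].mean() - blurred[target].mean()) / static[target].mean()
        print(f"sigma = {sigma_mm:.0f} mm -> PMDD = {pmdd:.2f}%")  # grows with dose gradient
    ```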

  15. The use of spatial dose gradients and probability density function to evaluate the effect of internal organ motion for prostate IMRT treatment planning.

    PubMed

    Jiang, Runqing; Barnett, Rob B; Chow, James C L; Chen, Jeff Z Y

    2007-03-07

    The aim of this study is to investigate the effects of internal organ motion on IMRT treatment planning of prostate patients using a spatial dose gradient and probability density function. Spatial dose distributions were generated from a Pinnacle3 planning system using a co-planar, five-field intensity modulated radiation therapy (IMRT) technique. Five plans were created for each patient using equally spaced beams but shifting the angular displacement of the beam by 15 degree increments. Dose profiles taken through the isocentre in anterior-posterior (A-P), right-left (R-L) and superior-inferior (S-I) directions for IMRT plans were analysed by exporting RTOG file data from Pinnacle. The convolution of the 'static' dose distribution D0(x, y, z) and probability density function (PDF), denoted as P(x, y, z), was used to analyse the combined effect of repositioning error and internal organ motion. Organ motion leads to an enlarged beam penumbra. The amount of percentage mean dose deviation (PMDD) depends on the dose gradient and organ motion probability density function. Organ motion dose sensitivity was defined by the rate of change in PMDD with standard deviation of motion PDF and was found to increase with the maximum dose gradient in anterior, posterior, left and right directions. Due to common inferior and superior field borders of the field segments, the sharpest dose gradient will occur in the inferior or both superior and inferior penumbrae. Thus, prostate motion in the S-I direction produces the highest dose difference. The PMDD is within 2.5% when standard deviation is less than 5 mm, but the PMDD is over 2.5% in the inferior direction when standard deviation is higher than 5 mm in the inferior direction. Verification of prostate organ motion in the inferior directions is essential. The margin of the planning target volume (PTV) significantly impacts on the confidence of tumour control probability (TCP) and level of normal tissue complication probability (NTCP). Smaller margins help to reduce the dose to normal tissues, but may compromise the dose coverage of the PTV. Lower rectal NTCP can be achieved by either a smaller margin or a steeper dose gradient between PTV and rectum. With the same DVH control points, the rectum has lower complication in the seven-beam technique used in this study because of the steeper dose gradient between the target volume and rectum. The relationship between dose gradient and rectal complication can be used to evaluate IMRT treatment planning. The dose gradient analysis is a powerful tool to improve IMRT treatment plans and can be used for QA checking of treatment plans for prostate patients.

  16. Adiabatic elimination of inertia of the stochastic microswimmer driven by α -stable noise

    NASA Astrophysics Data System (ADS)

    Noetel, Joerg; Sokolov, Igor M.; Schimansky-Geier, Lutz

    2017-10-01

    We consider a microswimmer that moves in two dimensions at a constant speed and changes the direction of its motion due to a torque consisting of a constant and a fluctuating component. The latter will be modeled by a symmetric Lévy-stable (α-stable) noise. The purpose is to develop a kinetic approach to eliminate the angular component of the dynamics to find a coarse-grained description in the coordinate space. By defining the joint probability density function of the position and of the orientation of the particle through the Fokker-Planck equation, we derive transport equations for the position-dependent marginal density, the particle's mean velocity, and the velocity's variance. At time scales larger than the relaxation time of the torque τ_ϕ, the two higher moments follow the marginal density and can be adiabatically eliminated. As a result, a closed equation for the marginal density follows. This equation, which gives a coarse-grained description of the microswimmer's positions at time scales t ≫ τ_ϕ, is a diffusion equation with a constant diffusion coefficient depending on the properties of the noise. Hence, the long-time dynamics of a microswimmer can be described as a normal, diffusive, Brownian motion with Gaussian increments.
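
    A quick Langevin check of the coarse-grained result, with illustrative parameters: a 2-D swimmer at constant speed whose heading is driven by a constant torque plus symmetric α-stable increments (which scale as dt^(1/α)); at long times the mean-squared displacement should grow linearly, as the derived diffusion equation predicts.

    ```python
    import numpy as np
    from scipy.stats import levy_stable

    rng = np.random.default_rng(3)
    N, steps, dt = 1000, 2000, 1e-3
    v0, omega0, alpha, scale = 1.0, 5.0, 1.5, 1.0

    phi = rng.uniform(0.0, 2 * np.pi, N)           # headings
    r = np.zeros((N, 2))                           # positions
    for _ in range(steps):
        noise = levy_stable.rvs(alpha, 0.0, size=N, random_state=rng)
        phi += omega0 * dt + scale * dt ** (1 / alpha) * noise
        r[:, 0] += v0 * np.cos(phi) * dt
        r[:, 1] += v0 * np.sin(phi) * dt

    print("MSD at t = %.1f:" % (steps * dt), (r ** 2).sum(axis=1).mean())
    ```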

  17. Adiabatic elimination of inertia of the stochastic microswimmer driven by α-stable noise.

    PubMed

    Noetel, Joerg; Sokolov, Igor M; Schimansky-Geier, Lutz

    2017-10-01

    We consider a microswimmer that moves in two dimensions at a constant speed and changes the direction of its motion due to a torque consisting of a constant and a fluctuating component. The latter will be modeled by a symmetric Lévy-stable (α-stable) noise. The purpose is to develop a kinetic approach to eliminate the angular component of the dynamics to find a coarse-grained description in the coordinate space. By defining the joint probability density function of the position and of the orientation of the particle through the Fokker-Planck equation, we derive transport equations for the position-dependent marginal density, the particle's mean velocity, and the velocity's variance. At time scales larger than the relaxation time of the torque τ_{ϕ}, the two higher moments follow the marginal density and can be adiabatically eliminated. As a result, a closed equation for the marginal density follows. This equation, which gives a coarse-grained description of the microswimmer's positions at time scales t≫τ_{ϕ}, is a diffusion equation with a constant diffusion coefficient depending on the properties of the noise. Hence, the long-time dynamics of a microswimmer can be described as a normal, diffusive, Brownian motion with Gaussian increments.

  18. Positive contraction mappings for classical and quantum Schrödinger systems

    NASA Astrophysics Data System (ADS)

    Georgiou, Tryphon T.; Pavon, Michele

    2015-03-01

    The classical Schrödinger bridge seeks the most likely probability law for a diffusion process, in path space, that matches marginals at two end points in time; the likelihood is quantified by the relative entropy between the sought law and a prior. Jamison proved that the new law is obtained through a multiplicative functional transformation of the prior. This transformation is characterised by an automorphism on the space of endpoints probability measures, which has been studied by Fortet, Beurling, and others. A similar question can be raised for processes evolving in a discrete time and space as well as for processes defined over non-commutative probability spaces. The present paper builds on earlier work by Pavon and Ticozzi and begins by establishing solutions to Schrödinger systems for Markov chains. Our approach is based on the Hilbert metric and shows that the solution to the Schrödinger bridge is provided by the fixed point of a contractive map. We approach, in a similar manner, the steering of a quantum system across a quantum channel. We are able to establish existence of quantum transitions that are multiplicative functional transformations of a given Kraus map for the cases where the marginals are either uniform or pure states. As in the Markov chain case, and for uniform density matrices, the solution of the quantum bridge can be constructed from the fixed point of a certain contractive map. For arbitrary marginal densities, extensive numerical simulations indicate that iteration of a similar map leads to fixed points from which we can construct a quantum bridge. For this general case, however, a proof of convergence remains elusive.
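
    For the discrete (Markov chain) case the fixed-point iteration has a compact form. A minimal sketch, assuming a random prior kernel and marginals: scale the kernel on both sides until the endpoint marginals match; this scaling map is the contraction (in the Hilbert metric) that the paper studies.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 5
    K = rng.random((n, n)); K /= K.sum(axis=1, keepdims=True)  # prior transition kernel
    mu0 = np.full(n, 1.0 / n)                                  # prescribed initial marginal
    mu1 = rng.random(n); mu1 /= mu1.sum()                      # prescribed final marginal

    a, b = np.ones(n), np.ones(n)
    for _ in range(200):                    # Fortet/Sinkhorn-style fixed-point iteration
        b = mu1 / (K.T @ a)
        a = mu0 / (K @ b)

    coupling = a[:, None] * K * b[None, :]  # multiplicative functional transform of K
    assert np.allclose(coupling.sum(axis=1), mu0)
    assert np.allclose(coupling.sum(axis=0), mu1)
    ```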

  19. The dynamics of superclusters - Initial determination of the mass density of the universe at large scales

    NASA Technical Reports Server (NTRS)

    Ford, H. C.; Ciardullo, R.; Harms, R. J.; Bartko, F.

    1981-01-01

    The radial velocities of cluster members of two rich, large superclusters have been measured in order to probe the supercluster mass densities, and simple evolutionary models have been computed to place limits upon the mass density within each supercluster. These superclusters represent true physical associations of size of about 100 Mpc seen presently at an early stage of evolution. One supercluster is weakly bound, the other probably barely bound, but possibly marginally unbound. Gravity has noticeably slowed the Hubble expansion of both superclusters. Galaxy surface-density counts and the density enhancement of Abell clusters within each supercluster were used to derive the ratio of mass densities of the superclusters to the mean field mass density. The results strongly exclude a closed universe.

  20. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

    We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e., the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
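
    A one-dimensional illustration of the Gaussianization step (the paper generalizes to multivariate, rotated Box-Cox maps): SciPy's boxcox picks the transformation parameter by maximum likelihood, which is the same criterion described above.

    ```python
    import numpy as np
    from scipy.stats import boxcox, skew

    rng = np.random.default_rng(5)
    samples = rng.gamma(shape=2.0, scale=1.0, size=20_000)  # stand-in skewed posterior chain

    transformed, lam = boxcox(samples)                      # ML estimate of lambda
    print(f"lambda = {lam:.3f}, skewness before/after = "
          f"{skew(samples):.2f} / {skew(transformed):.2f}") # after ~ 0: nearly Gaussian
    ```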

  1. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
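
    The ZMNL step admits a compact sketch: push a Gaussian history through the standard normal CDF, then through the inverse CDF of the target marginal. The Weibull target below is an assumption for illustration; spectrum shaping and the multi-input cross-spectral machinery from the paper are omitted.

    ```python
    import numpy as np
    from scipy.stats import norm, weibull_min

    rng = np.random.default_rng(6)
    g = rng.standard_normal(100_000)   # Gaussian time history (spectrum shaping omitted)

    u = norm.cdf(g)                    # map to uniform [0, 1]
    x = weibull_min.ppf(u, c=1.5)      # history with the specified non-Gaussian marginal
    # For mild nonlinearity this monotone map leaves the auto-spectrum nearly unchanged,
    # which is the property of the ZMNL functions noted in the abstract.
    ```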

  2. Brownian self-driven particles on the surface of a sphere

    NASA Astrophysics Data System (ADS)

    Apaza, Leonardo; Sandoval, Mario

    2017-08-01

    We present the dynamics of overdamped Brownian self-propelled particles moving on the surface of a sphere. The effect of self-propulsion on the diffusion of these particles is elucidated by determining their angular (azimuthal and polar) mean-square displacement. Short- and long-time analytical expressions for their angular mean-square displacement are offered. Finally, the particles' steady-state marginal angular probability density functions are elucidated.

  3. Individualized statistical learning from medical image databases: application to identification of brain lesions.

    PubMed

    Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos

    2014-04-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated.
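
    A simplified single-subspace version of the scoring idea (not the full iterative, target-specific method): fit PCA to normative feature vectors, model each retained component's marginal as Gaussian, and score a test sample by how improbable its projections are under those marginals.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(7)
    normals = rng.standard_normal((200, 50))      # stand-in normative feature vectors
    test = rng.standard_normal(50) + 3.0          # deviant test sample

    pca = PCA(n_components=10).fit(normals)       # dimensionality limited by sample size
    coords = pca.transform(normals)
    mu, sd = coords.mean(axis=0), coords.std(axis=0)

    z = (pca.transform(test[None, :])[0] - mu) / sd
    abnormality = -norm.logpdf(z).sum()           # low marginal probability -> high score
    print(abnormality)
    ```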

  4. Individualized Statistical Learning from Medical Image Databases: Application to Identification of Brain Lesions

    PubMed Central

    Erus, Guray; Zacharaki, Evangelia I.; Davatzikos, Christos

    2014-01-01

    This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a “target-specific” feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject’s images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an “estimability” criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. PMID:24607564

  5. Bayesian analysis of the flutter margin method in aeroelasticity

    DOE PAGES

    Khalil, Mohammad; Poirel, Dominique; Sarkar, Abhijit

    2016-08-27

    A Bayesian statistical framework is presented for the Zimmerman-Weissenburger flutter margin method which considers the uncertainties in aeroelastic modal parameters. The proposed methodology overcomes the limitations of the previously developed least-squares-based estimation technique, which relies on a Gaussian approximation of the flutter margin probability density function (pdf). Using the measured free-decay responses at subcritical (preflutter) airspeeds, the joint non-Gaussian posterior pdf of the modal parameters is sampled using the Metropolis-Hastings (MH) Markov chain Monte Carlo (MCMC) algorithm. The posterior MCMC samples of the modal parameters are then used to obtain the flutter margin pdfs and finally the flutter speed pdf. The usefulness of the Bayesian flutter margin method is demonstrated using synthetic data generated from a two-degree-of-freedom pitch-plunge aeroelastic model. The robustness of the statistical framework is demonstrated using different sets of measurement data. In conclusion, the probabilistic (Bayesian) approach reduces the number of test points required to provide a flutter speed estimate for a given accuracy and precision.
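
    The workflow can be sketched generically (the quadratic log-posterior and the margin map g below are placeholders, not the Zimmerman-Weissenburger expressions): draw posterior samples of the modal parameters with Metropolis-Hastings, then push each draw through g so that the histogram of transformed draws approximates the flutter-margin pdf.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def log_post(th):                 # placeholder posterior over two modal parameters
        return -0.5 * np.sum((th - np.array([2.0, 0.3])) ** 2 / np.array([0.1, 0.01]))

    def g(th):                        # placeholder flutter-margin map
        return th[0] - 4.0 * th[1] ** 2

    theta, margins = np.array([1.5, 0.2]), []
    for _ in range(20_000):           # random-walk Metropolis-Hastings
        prop = theta + 0.05 * rng.standard_normal(2)
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop
        margins.append(g(theta))

    print("flutter-margin pdf: mean %.3f, sd %.3f" % (np.mean(margins), np.std(margins)))
    ```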

  6. Ophiolitic basement to the Great Valley forearc basin, California, from seismic and gravity data: Implications for crustal growth at the North American continental margin

    USGS Publications Warehouse

    Godfrey, N.J.; Beaudoin, B.C.; Klemperer, S.L.; Levander, A.; Luetgert, J.; Meltzer, A.; Mooney, W.; Tréhu, A.

    1997-01-01

    The nature of the Great Valley basement, whether oceanic or continental, has long been a source of controversy. A velocity model (derived from a 200-km-long east-west reflection-refraction profile collected south of the Mendocino triple junction, northern California, in 1993), further constrained by density and magnetic models, reveals an ophiolite underlying the Great Valley (Great Valley ophiolite), which in turn is underlain by a westward extension of lower-density continental crust (Sierran affinity material). We used an integrated modeling philosophy, first modeling the seismic-refraction data to obtain a final velocity model, and then modeling the long-wavelength features of the gravity data to obtain a final density model that is constrained in the upper crust by our velocity model. The crustal section of Great Valley ophiolite is 7-8 km thick, and the Great Valley ophiolite relict oceanic Moho is at 11-16 km depth. The Great Valley ophiolite does not extend west beneath the Coast Ranges, but only as far as the western margin of the Great Valley, where the 5-7-km-thick Great Valley ophiolite mantle section dips west into the present-day mantle. There are 16-18 km of lower-density Sierran affinity material beneath the Great Valley ophiolite mantle section, such that a second, deeper, "present-day" continental Moho is at about 34 km depth. At mid-crustal depths, the boundary between the eastern extent of the Great Valley ophiolite and the western extent of Sierran affinity material is a near-vertical velocity and density discontinuity about 80 km east of the western margin of the Great Valley. Our model has important implications for crustal growth at the North American continental margin. We suggest that a thick ophiolite sequence was obducted onto continental material, probably during the Jurassic Nevadan orogeny, so that the Great Valley basement is oceanic crust above oceanic mantle vertically stacked above continental crust and continental mantle.

  7. Soil Carbon Change and Net Energy Associated with Biofuel Production on Marginal Lands: A Regional Modeling Perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bandaru, Varaprasad; Izaurralde, Roberto C.; Manowitz, David H.

    2013-12-01

    The use of marginal lands (MLs) for biofuel production has been contemplated as a promising solution for meeting biofuel demands. However, there have been concerns about the spatial location of MLs, their inherent biofuel potential, and possible environmental consequences of cultivating energy crops. Here, we developed a new quantitative approach that integrates high-resolution land cover and land productivity maps and uses conditional probability density functions for analyzing land use patterns as a function of land productivity to classify agricultural lands. We subsequently applied this method to determine available productive croplands (P-CLs) and non-crop marginal lands (NC-MLs) in a nine-county region of southern Michigan. Furthermore, a Spatially Explicit Integrated Modeling Framework (SEIMF) using EPIC (Environmental Policy Integrated Climate) was used to understand the net energy (NE) and soil organic carbon (SOC) implications of cultivating different annual and perennial production systems.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smallwood, D.O.

    In a previous paper, Smallwood and Paez (1991) showed how to generate realizations of partially coherent stationary normal time histories with a specified cross-spectral density matrix. This procedure is generalized for the case of multiple inputs with a specified cross-spectral density function and a specified marginal probability density function (pdf) for each of the inputs. The specified pdfs are not required to be Gaussian. A zero memory nonlinear (ZMNL) function is developed for each input to transform a Gaussian or normal time history into a time history with a specified non-Gaussian distribution. The transformation functions have the property that a transformed time history will have nearly the same auto spectral density as the original time history. A vector of Gaussian time histories is then generated with the specified cross-spectral density matrix. These waveforms are then transformed into the required time history realizations using the ZMNL function.

  9. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  10. Improving experimental phases for strong reflections prior to density modification

    DOE PAGES

    Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.; ...

    2013-09-20

    Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  11. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out.

  12. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
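
    The core of IPF fits in a few lines. A textbook two-way sketch (the article imposes bivariate constraints inferred along wells and works in higher dimensions): rescale an initial table until its row and column sums match the imposed marginals.

    ```python
    import numpy as np

    def ipf(joint, row_marg, col_marg, iters=100):
        """Iterative proportional fitting of a 2-D probability table."""
        p = joint.copy()
        for _ in range(iters):
            p *= (row_marg / p.sum(axis=1))[:, None]   # match the row marginal
            p *= (col_marg / p.sum(axis=0))[None, :]   # match the column marginal
        return p

    init = np.full((3, 3), 1.0 / 9.0)       # initial multivariate probability estimate
    rows = np.array([0.5, 0.3, 0.2])        # facies proportions, e.g. from drill holes
    cols = np.array([0.6, 0.3, 0.1])

    p = ipf(init, rows, cols)
    assert np.allclose(p.sum(axis=1), rows) and np.allclose(p.sum(axis=0), cols)
    ```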

  13. Probabilities of good, marginal, and poor flying conditions for space shuttle ferry flights

    NASA Technical Reports Server (NTRS)

    Whiting, D. M.; Guttman, N. B.

    1977-01-01

    Empirical probabilities are provided for good, marginal, and poor flying weather for ferrying the Space Shuttle Orbiter from Edwards AFB, California, to Kennedy Space Center, Florida, and from Edwards AFB to Marshall Space Flight Center, Alabama. Results are given by month for each overall route plus segments of each route. The criteria for defining a day as good, marginal, or poor and the method of computing the relative frequencies and conditional probabilities for monthly reference periods are described.

  14. Exploring the Subtleties of Inverse Probability Weighting and Marginal Structural Models.

    PubMed

    Breskin, Alexander; Cole, Stephen R; Westreich, Daniel

    2018-05-01

    Since being introduced to epidemiology in 2000, marginal structural models have become a commonly used method for causal inference in a wide range of epidemiologic settings. In this brief report, we aim to explore three subtleties of marginal structural models. First, we distinguish marginal structural models from the inverse probability weighting estimator, and we emphasize that marginal structural models are not only for longitudinal exposures. Second, we explore the meaning of the word "marginal" in "marginal structural model." Finally, we show that the specification of a marginal structural model can have important implications for the interpretation of its parameters. Each of these concepts has important implications for the use and understanding of marginal structural models, and thus providing detailed explanations of them may lead to better practices for the field of epidemiology.
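
    The first distinction (weights are the estimator, the marginal structural model is what they let us fit) can be made concrete with a simulated point exposure; the data-generating process below is hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(9)
    n = 50_000
    L = rng.normal(size=n)                          # confounder
    A = rng.binomial(1, 1 / (1 + np.exp(-L)))       # exposure depends on L
    Y = 1.0 * A + 2.0 * L + rng.normal(size=n)      # true marginal causal effect = 1.0

    ps = LogisticRegression().fit(L[:, None], A).predict_proba(L[:, None])[:, 1]
    w = A / ps + (1 - A) / (1 - ps)                 # inverse probability weights

    # Weighted means estimate E[Y^1] and E[Y^0]; their difference is the slope b1 of
    # the saturated MSM  E[Y^a] = b0 + b1*a.
    b1 = (np.average(Y[A == 1], weights=w[A == 1])
          - np.average(Y[A == 0], weights=w[A == 0]))
    print(f"IPW estimate of the MSM slope b1 ~ {b1:.2f}")   # close to 1.0
    ```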

  15. Stan : A Probabilistic Programming Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
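
    A minimal end-to-end example, assuming the PyStan 2.x interface contemporary with this paper: the model block imperatively increments the log probability of mu and sigma conditioned on the data y, and sampling runs the No-U-Turn sampler.

    ```python
    import numpy as np
    import pystan  # PyStan 2.x interface

    model_code = """
    data {
      int<lower=0> N;
      vector[N] y;
    }
    parameters {
      real mu;
      real<lower=0> sigma;
    }
    model {
      y ~ normal(mu, sigma);   // adds normal log-density terms to the log posterior
    }
    """

    sm = pystan.StanModel(model_code=model_code)
    fit = sm.sampling(data={"N": 50, "y": np.random.randn(50)}, iter=2000, chains=4)
    print(fit)  # posterior summaries for mu and sigma
    ```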

  16. Stan : A Probabilistic Programming Language

    DOE PAGES

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...

    2017-01-01

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  17. An efficient algorithm to compute marginal posterior genotype probabilities for every member of a pedigree with loops

    PubMed Central

    2009-01-01

    Background: Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods: In this paper, we describe a peeling-based, deterministic, exact algorithm to compute genotype probabilities efficiently for every member of a pedigree with loops, without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets called anterior and posterior cutsets and reusing these intermediate results to compute the likelihood. Examples: A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member of a real cattle pedigree. PMID:19958551

  18. Do marginalized neighbourhoods have less healthy retail food environments? An analysis using Bayesian spatial latent factor and hurdle models.

    PubMed

    Luan, Hui; Minaker, Leia M; Law, Jane

    2016-08-22

    Findings on whether marginalized neighbourhoods have less healthy retail food environments (RFE) are mixed across countries, in part because inconsistent approaches have been used to characterize RFE 'healthfulness' and marginalization, and researchers have used non-spatial statistical methods to address this ultimately spatial issue. This study uses in-store features to categorize healthy and less healthy food outlets. Bayesian spatial hierarchical models are applied to explore the association between marginalization dimensions and RFE healthfulness (i.e., relative healthy food access, modelled via a probability distribution) at various geographical scales. Marginalization dimensions are derived from a spatial latent factor model. Zero-inflation occurring at the walkable-distance scale is accounted for with a spatial hurdle model. Neighbourhoods with higher residential instability, material deprivation, and population density are more likely to have access to healthy food outlets within a walkable distance from a binary 'have' or 'not have' access perspective. At the walkable-distance scale, however, materially deprived neighbourhoods are found to have less healthy RFE (lower relative healthy food access). Food intervention programs should be developed to strike a balance between healthy and less healthy food access in the study region, as well as to improve opportunities for residents to buy and consume foods consistent with dietary recommendations.

  19. Bayesian inference based on stationary Fokker-Planck sampling.

    PubMed

    Berrones, Arturo

    2010-06-01

    A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm to arbitrary and unknown conditional densities. Through the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. Using the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need for careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
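
    SFP generalizes the Gibbs sampler to unknown conditionals; a minimal sketch of the ordinary Gibbs step it builds on, for a bivariate standard normal target whose full conditionals are known in closed form (the correlation value is illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        rho = 0.8       # illustrative correlation of the bivariate normal target
        sd = np.sqrt(1.0 - rho**2)
        x = y = 0.0
        samples = np.empty((5000, 2))

        # Gibbs sampling: draw each coordinate from its full conditional in turn.
        # Here x | y ~ N(rho * y, 1 - rho**2) and symmetrically for y | x.
        for i in range(samples.shape[0]):
            x = rng.normal(rho * y, sd)
            y = rng.normal(rho * x, sd)
            samples[i] = (x, y)

        print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])  # ~(0, 0), ~0.8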

  20. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

    Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
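
    A minimal sketch of the MAP step for the single-truncated case, using scipy's non-negative least-squares on a hypothetical linear forward model; this simplification assumes a flat non-negative prior and ignores the prior covariance that the paper handles:

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)

        # Hypothetical linear forward model d = G m + noise, with slip m >= 0.
        G = rng.normal(size=(40, 10))
        m_true = np.abs(rng.normal(size=10))
        d = G @ m_true + 0.05 * rng.normal(size=40)

        # With Gaussian noise and a flat non-negative prior, the MAP solves
        # min ||G m - d||^2 subject to m >= 0: a non-negative least-squares fit.
        m_map, res_norm = nnls(G, d)
        print(np.round(m_map, 2))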

  1. Improving experimental phases for strong reflections prior to density modification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uervirojnangkoorn, Monarin; University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck; Hilgenfeld, Rolf, E-mail: hilgenfeld@biochem.uni-luebeck.de

    A genetic algorithm has been developed to optimize the phases of the strongest reflections in SIR/SAD data. This is shown to facilitate density modification and model building in several test cases. Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated, while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. A computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.

  2. Effect of posterior crown margin placement on gingival health.

    PubMed

    Reitemeier, Bernd; Hänsel, Kristina; Walter, Michael H; Kastner, Christian; Toutenburg, Helge

    2002-02-01

    The clinical impact of posterior crown margin placement on gingival health has not been thoroughly quantified. This study evaluated the effect of posterior crown margin placement with multivariate analysis. Ten general dentists reviewed 240 patients with 480 metal-ceramic crowns in a prospective clinical trial. The alloy was randomly selected from 2 high gold, 1 low gold, and 1 palladium alloy. Variables were the alloy used, oral hygiene index score before treatment, location of crown margins at baseline, and plaque index and sulcus bleeding index scores recorded for restored and control teeth after 1 year. The effect of crown margin placement on sulcular bleeding and plaque accumulation was analyzed with regression models (P<.05). The probability of plaque at 1 year increased with increasing oral hygiene index score before treatment. The lingual surfaces demonstrated the highest probability of plaque. The risk of bleeding at intrasulcular posterior crown margins was approximately twice that at supragingival margins. Poor oral hygiene before treatment and plaque also were associated with sulcular bleeding. Facial sites exhibited a lower probability of sulcular bleeding than lingual surfaces. Type of alloy did not influence sulcular bleeding. In this study, placement of crown margins was one of several parameters that affected gingival health.

  3. The Determinants of Career Decisions of Air Force Pilots.

    DTIC Science & Technology

    1981-05-01

    Hypothesis tests comparing these two models will be presented in Chapter VI. The choice probabilities are written as integrals over the marginal density h(a): Prob[J] = ∫ Prob[k_1 > X_1·B, ..., k_{J−1} > X_{J−1}·B, k_J < X_J·B] h(a) da (Equation 4.6), and Prob[S] = ∫ Prob[k_1 > X_1·B, ..., k_P > X_P·B] h(a) da (Equation 4.5a), where h(a) is the marginal density of a. Substituting Equation 4.3, which gave the probability of leaving in ... be zero. The model derived in this thesis for the individual decision to separate was based upon individual characteristics and macroeconomic ...

  4. Sci—Thur PM: Planning and Delivery — 04: Respiratory margin derivation and verification in partial breast irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quirk, S; Conroy, L; Smith, WL

    Partial breast irradiation (PBI) following breast-conserving surgery is emerging as an effective means to achieve local control and reduce irradiated breast volume. Patients are planned on a static CT image; however, treatment is delivered while the patient is free-breathing. Respiratory motion can degrade plan quality by reducing target coverage and/or dose homogeneity. A variety of methods can be used to determine the required margin for respiratory motion in PBI. We derive geometric and dosimetric respiratory margins in 1D. We also verify the adequacy of the typical 5 mm respiratory margin in 3D by evaluating plan quality for increasing respiratory amplitudes (2–20 mm). Ten PBI plans were used for dosimetric evaluation. A database of volunteer respiratory data, with similar characteristics to breast cancer patients, was used for this study. We derived a geometric 95%-margin of 3 mm from the population respiratory data. We derived a dosimetric 95%-margin of 2 mm by convolving 1D dose profiles with respiratory probability density functions. The 5 mm respiratory margin is possibly too large when 1D coverage is assessed and could lead to unnecessary normal tissue irradiation. Assessing margins only for coverage may be insufficient; 3D dosimetric assessment revealed that degradation in dose homogeneity is the limiting factor, not target coverage. Hotspots increased even for the smallest respiratory amplitudes, while target coverage only degraded at amplitudes greater than 10 mm. The 5 mm respiratory margin is adequate for coverage, but due to plan quality degradation, respiratory management is recommended for patients with respiratory amplitudes greater than 10 mm.
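
    The dosimetric margin derivation described above rests on convolving a static 1D dose profile with a respiratory probability density function; a minimal sketch with an invented profile and motion parameters:

        import numpy as np

        x = np.arange(-40.0, 40.0, 0.5)                 # position in mm
        dose = np.where(np.abs(x) <= 15.0, 1.0, 0.0)    # idealized static profile

        # Respiratory motion pdf, e.g. a Gaussian with 3 mm standard deviation
        # (both the profile and the pdf are invented for illustration).
        sigma = 3.0
        pdf = np.exp(-0.5 * (x / sigma)**2)
        pdf /= pdf.sum()

        # The expected delivered dose is the static profile convolved (blurred)
        # with the motion pdf; coverage loss shows up at the target edge.
        blurred = np.convolve(dose, pdf, mode='same')
        edge = np.abs(np.abs(x) - 15.0).argmin()
        print(dose[edge], blurred[edge])   # 1.0 static vs ~0.5 blurred at the edge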

  5. Inferring probabilistic stellar rotation periods using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh

    2018-02-01

    Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
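
    A minimal sketch of the quasi-periodic covariance kernel commonly used for this purpose; the paper's exact parameterization may differ, and all parameter values here are illustrative:

        import numpy as np

        def quasi_periodic_kernel(t1, t2, amp, l_evol, gamma, period):
            """Quasi-periodic covariance: a squared-exponential envelope (spot
            evolution) times a periodic term (rotation)."""
            tau = t1[:, None] - t2[None, :]
            return (amp**2 * np.exp(-tau**2 / (2.0 * l_evol**2))
                    * np.exp(-gamma * np.sin(np.pi * tau / period)**2))

        # Draw one light-curve realization from the GP prior.
        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 50.0, 200)                    # days
        K = quasi_periodic_kernel(t, t, amp=1.0, l_evol=20.0, gamma=2.0, period=7.0)
        flux = rng.multivariate_normal(np.zeros_like(t), K + 1e-8 * np.eye(t.size))
        print(flux[:3])  # quasi-periodic variability with a ~7-day period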

  6. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

    Bivariate multinomial data, such as the left and right eye retinopathy status data, are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities that are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretation of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper we develop a bivariate normal-type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Brunett, Acacia

    2015-04-26

    The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and a capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact on statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
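
    A minimal sketch of the load-versus-capacity comparison and a frequentist confidence interval on the resulting failure probability; the distributions and parameter values are hypothetical, not those of the study:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 10_000  # number of simulations

        # Hypothetical load and capacity distributions (e.g. peak temperature
        # vs. failure temperature); RISMC compares the two probabilistically.
        load = rng.normal(1200.0, 80.0, size=n)
        capacity = rng.normal(1477.0, 50.0, size=n)

        failures = load >= capacity
        p_hat = failures.mean()

        # Normal-approximation 95% confidence interval on the failure
        # probability; its width shrinks like 1/sqrt(n).
        se = np.sqrt(p_hat * (1.0 - p_hat) / n)
        print(p_hat, (p_hat - 1.96 * se, p_hat + 1.96 * se))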

  8. Physical and thermal properties of mud-dominant sediment from the Joetsu Basin in the eastern margin of the Japan Sea

    NASA Astrophysics Data System (ADS)

    Goto, Shusaku; Yamano, Makoto; Morita, Sumito; Kanamatsu, Toshiya; Hachikubo, Akihiro; Kataoka, Satsuki; Tanahashi, Manabu; Matsumoto, Ryo

    2017-12-01

    Physical properties (bulk density and porosity) and thermal properties (thermal conductivity, heat capacity, specific heat, and thermal diffusivity) of sediment are crucial parameters for basin modeling. We measured these physical and thermal properties for mud-dominant sediment recovered from the Joetsu Basin, in the eastern margin of the Japan Sea. To determine thermal conductivity, heat capacity, and thermal diffusivity, the dual-needle probe method was applied. Grain density and grain thermal properties for the mud-dominant sediment were estimated from the measured physical and thermal properties by applying existing models of physical and thermal properties of sediment. We suggest that the grain density, grain thermal conductivity, and grain thermal diffusivity depend on the sediment mineral composition. Conversely, the grain heat capacity and grain specific heat showed hardly any dependency on the mineral composition. We propose empirical formulae for the relationships between: thermal diffusivity and thermal conductivity, and heat capacity and thermal conductivity for the sediment in the Joetsu Basin. These relationships are different from those for mud-dominant sediment in the eastern flank of the Juan de Fuca Ridge presented in previous work, suggesting a difference in mineral composition, probably mainly in the amount of quartz, between the sediments in that area and the Joetsu Basin. Similar studies in several areas of sediments with various mineral compositions would enhance knowledge of the influence of mineral composition.

  9. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through the model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is highly stable in estimating a conceptual model's marginal likelihood: repeated estimates of a model's marginal likelihood by TIE show significantly less variability than those obtained by the other estimators. In addition, the SG surrogates are efficient for facilitating BMA predictions, especially BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
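
    Two of the evaluated estimators are easy to state in code. A minimal sketch of the arithmetic mean estimator (average likelihood over prior draws) and the harmonic mean estimator (harmonic average over posterior draws) for a conjugate normal toy model, chosen so that the setup is self-contained; none of this is the study's groundwater model:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        y = rng.normal(0.5, 1.0, size=20)  # toy data; model y ~ N(theta, 1)

        def log_lik(theta):
            return stats.norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)

        # Arithmetic mean estimator: average likelihood over prior draws.
        theta_prior = rng.normal(0.0, 1.0, size=50_000)     # prior theta ~ N(0, 1)
        ame = np.exp(log_lik(theta_prior)).mean()

        # Harmonic mean estimator: harmonic average over posterior draws
        # (conjugate posterior: precision n + 1, mean sum(y) / (n + 1)).
        tau = y.size + 1.0
        theta_post = rng.normal(y.sum() / tau, 1.0 / np.sqrt(tau), size=50_000)
        hme = 1.0 / np.mean(np.exp(-log_lik(theta_post)))

        print(ame, hme)  # HME is notoriously unstable, one motivation for TIE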

  10. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  11. Radiobiological Impact of Reduced Margins and Treatment Technique for Prostate Cancer in Terms of Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Ingelise, E-mail: inje@rn.d; Carl, Jesper; Lund, Bente

    2011-07-01

    Dose escalation in prostate radiotherapy is limited by normal tissue toxicities. The aim of this study was to assess the impact of margin size on tumor control and side effects for intensity-modulated radiation therapy (IMRT) and 3D conformal radiotherapy (3DCRT) treatment plans with increased dose. Eighteen patients with localized prostate cancer were enrolled. 3DCRT and IMRT plans were compared for a variety of margin sizes. A marker detectable on daily portal images was presupposed for narrow margins. Prescribed dose was 82 Gy within 41 fractions to the prostate clinical target volume (CTV). Tumor control probability (TCP) calculations based on the Poisson model including the linear quadratic approach were performed. Normal tissue complication probability (NTCP) was calculated for bladder, rectum and femoral heads according to the Lyman-Kutcher-Burman method. All plan types presented essentially identical TCP values and very low NTCP for bladder and femoral heads. Mean doses for these critical structures reached a minimum for IMRT with reduced margins. Two endpoints for rectal complications were analyzed. A marked decrease in NTCP for IMRT plans with narrow margins was seen for mild RTOG grade 2/3 as well as for proctitis/necrosis/stenosis/fistula, for which NTCP <7% was obtained. For equivalent TCP values, sparing of normal tissue was demonstrated with the narrow margin approach. The effect was more pronounced for IMRT than 3DCRT, with respect to NTCP for mild, as well as severe, rectal complications.
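
    A minimal sketch of a Poisson TCP calculation with linear-quadratic cell kill, the model family named in the abstract; the clonogen number and radiosensitivity values below are invented for illustration, not the study's fitted parameters:

        import numpy as np

        def tcp_poisson_lq(n0, alpha, alpha_beta, d_per_fx, n_fx):
            """Poisson TCP with linear-quadratic cell kill:
            TCP = exp(-N0 * SF), SF = exp(-alpha * D * (1 + d / (alpha/beta)))."""
            D = d_per_fx * n_fx
            sf = np.exp(-alpha * D * (1.0 + d_per_fx / alpha_beta))
            return np.exp(-n0 * sf)

        # 82 Gy in 41 fractions of 2 Gy, as prescribed in the study;
        # N0, alpha and alpha/beta are hypothetical.
        print(tcp_poisson_lq(n0=1e6, alpha=0.15, alpha_beta=10.0,
                             d_per_fx=2.0, n_fx=41))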

  12. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.

  13. Effect of lung and target density on small-field dose coverage and PTV definition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, Patrick D., E-mail: higgi010@umn.edu; Ehler, Eric D.; Cho, Lawrence C.

    We have studied the effect of target and lung density on block margin for small stereotactic body radiotherapy (SBRT) targets. A phantom (50 × 50 × 50 cm³) was created in the Pinnacle (V9.2) planning system with a 23-cm diameter lung region of interest insert. Targets of 1.6, 2.0, 3.0, and 4.0 cm diameter were placed in the lung region of interest and centered at a physical depth of 15 cm. Target densities evaluated were 0.1 to 1.0 g/cm³, whereas the surrounding lung density was varied between 0.05 and 0.6 g/cm³. A dose of 100 cGy was delivered to the isocenter via a single 6-MV field, and the ratio of the average dose to points defining the lateral edges of the target to the isocenter dose was recorded for each combination. Field margins were varied from none to 1.5 cm in 0.25-cm steps. Data obtained in the phantom study were used to predict planning treatment volume (PTV) margins that would match the clinical PTV and isodose prescription for a clinical set of 39 SBRT cases. The average internal target volume (ITV) density was 0.73 ± 0.17, average local lung density was 0.33 ± 0.16, and average ITV diameter was 2.16 ± 0.8 cm. The phantom results initially underpredicted PTV margins by 0.35 cm. With this offset included in the model, the ratio of predicted-to-clinical PTVs was 1.05 ± 0.32. For a given target and lung density, it was found that treatment margin was insensitive to target diameter, except for the smallest (1.6-cm diameter) target, for which the treatment margin was more sensitive to density changes than for the larger targets. We have developed a graphical relationship for block margin as a function of target and lung density, which should save time in the planning phase by shortening the design of PTV margins that can satisfy Radiation Therapy Oncology Group mandated treatment volume ratios.

  14. Radiobiological impact of reduced margins and treatment technique for prostate cancer in terms of tumor control probability (TCP) and normal tissue complication probability (NTCP).

    PubMed

    Jensen, Ingelise; Carl, Jesper; Lund, Bente; Larsen, Erik H; Nielsen, Jane

    2011-01-01

    Dose escalation in prostate radiotherapy is limited by normal tissue toxicities. The aim of this study was to assess the impact of margin size on tumor control and side effects for intensity-modulated radiation therapy (IMRT) and 3D conformal radiotherapy (3DCRT) treatment plans with increased dose. Eighteen patients with localized prostate cancer were enrolled. 3DCRT and IMRT plans were compared for a variety of margin sizes. A marker detectable on daily portal images was presupposed for narrow margins. Prescribed dose was 82 Gy within 41 fractions to the prostate clinical target volume (CTV). Tumor control probability (TCP) calculations based on the Poisson model including the linear quadratic approach were performed. Normal tissue complication probability (NTCP) was calculated for bladder, rectum and femoral heads according to the Lyman-Kutcher-Burman method. All plan types presented essentially identical TCP values and very low NTCP for bladder and femoral heads. Mean doses for these critical structures reached a minimum for IMRT with reduced margins. Two endpoints for rectal complications were analyzed. A marked decrease in NTCP for IMRT plans with narrow margins was seen for mild RTOG grade 2/3 as well as for proctitis/necrosis/stenosis/fistula, for which NTCP <7% was obtained. For equivalent TCP values, sparing of normal tissue was demonstrated with the narrow margin approach. The effect was more pronounced for IMRT than 3DCRT, with respect to NTCP for mild, as well as severe, rectal complications. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  15. IGM CONSTRAINTS FROM THE SDSS-III/BOSS DR9 Lyα FOREST TRANSMISSION PROBABILITY DISTRIBUTION FUNCTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Khee-Gan; Hennawi, Joseph F.; Spergel, David N.

    2015-02-01

    The Lyα forest transmission probability distribution function (PDF) is an established probe of intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the transmission PDF from 3393 Baryon Oscillation Spectroscopic Survey (BOSS) quasars from Sloan Digital Sky Survey Data Release 9, and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS transmission PDFs, measured at ⟨z⟩ = [2.3, 2.6, 3.0], are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, γ, and temperature at mean density, T₀, where T(Δ) = T₀ Δ^(γ−1). We find that a significant population of partial Lyman-limit systems (LLSs) with a column-density distribution slope of β_pLLS ∼ −2 is required to explain the data at the low-transmission end of the transmission PDF, while uncertainties in the mean Lyα forest transmission affect the high-transmission end. After modeling the LLSs and marginalizing over mean transmission uncertainties, we find that γ = 1.6 best describes the data over our entire redshift range, although constraints on T₀ are affected by systematic uncertainties. Within our model framework, isothermal or inverted temperature-density relationships (γ ≤ 1) are disfavored at a significance of over 4σ, although this could be somewhat weakened by cosmological and astrophysical uncertainties that we did not model.

  16. Bayes classification of terrain cover using normalized polarimetric data

    NASA Technical Reports Server (NTRS)

    Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.

    1988-01-01

    The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.

  17. Two proposed convergence criteria for Monte Carlo solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two methods are proposed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
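
    A minimal sketch of the two diagnostics discussed, the relative error of the mean (which should fall as 1/√N) and the relative variance of the variance (VOV), on invented history scores:

        import numpy as np

        def relative_error_and_vov(x):
            """Relative error of the mean and relative variance of the
            variance (VOV) for N independent Monte Carlo history scores x."""
            n = x.size
            mean = x.mean()
            rel_err = np.sqrt(x.var(ddof=1) / n) / mean
            d = x - mean
            vov = (d**4).sum() / (d**2).sum()**2 - 1.0 / n
            return rel_err, vov

        rng = np.random.default_rng(5)
        for n in (100, 10_000):
            re_, vov = relative_error_and_vov(rng.exponential(1.0, size=n))
            print(n, re_, vov)  # rel. error falls ~1/sqrt(N), VOV falls ~1/N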

  18. The electron localization as the information content of the conditional pair density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbina, Andres S.; Torres, F. Javier; Universidad San Francisco de Quito

    2016-06-28

    In the present work, the information gained by an electron for “knowing” about the position of another electron with the same spin is calculated using the Kullback-Leibler divergence (D_KL) between the same-spin conditional pair probability density and the marginal probability. D_KL is proposed as an electron localization measurement, based on the observation that regions of space with high information gain can be associated with strongly correlated localized electrons. Taking into consideration the scaling of D_KL with the number of σ-spin electrons of a system (N^σ), the quantity χ = (N^σ − 1) D_KL f_cut is introduced as a general descriptor that allows the quantification of electron localization in space. f_cut is defined such that it goes smoothly to zero for negligible densities. χ is computed for a selection of atomic and molecular systems in order to test its capability to determine the regions in space where electrons are localized. As a general conclusion, χ is able to explain the electronic structure of molecules on chemical grounds with a high degree of success and to produce a clear differentiation of the localization of electrons that can be traced to the fluctuation in the average number of electrons in these regions.

  19. SU-F-BRD-09: Is It Sufficient to Use Only Low Density Tissue-Margin to Compensate Inter-Fractionation Setup Uncertainties in Lung Treatment?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, K; Yue, N; Chen, T

    2014-06-15

    Purpose: In lung radiation treatment, the PTV is formed with a margin around the GTV (or CTV/ITV). Although the GTV is most likely of water-equivalent density, the PTV margin may be formed from the surrounding low-density tissues, which may lead to an unrealistic dosimetric plan. This study evaluates whether the concern about dose calculation inside a PTV with only a low-density margin is justified in lung treatment. Methods: Three SBRT cases were analyzed. The PTV from the original plan (Plan-O) was created with a 5–10 mm margin outside the ITV to incorporate setup errors and all mobility from 10 respiratory phases. Test plans were generated with the GTV shifted to the PTV edge to simulate the extreme situations with maximum setup uncertainties. Two representative positions, at the very posterior-superior (Plan-PS) and anterior-inferior (Plan-AI) edges, were considered. The virtual GTV was assigned a density of 1.0 g.cm−3 and the surrounding lung, including the PTV margin, was defined as 0.25 g.cm−3. An additional plan with a 1 mm tissue margin instead of a full lung margin was created to evaluate whether a composite margin (Plan-Comp) gives a better approximation for dose calculation. All plans were generated on the average CT using the Analytical Anisotropic Algorithm with heterogeneity correction on, and all planning parameters/monitor units remained unchanged. DVH analyses were performed for comparisons. Results: Despite the non-static dose distribution, the high-dose region synchronized with tumor positions. This might be due to scatter conditions, as greater doses were absorbed in the solid tumor than in the surrounding low-density lung tissue. However, the plans still showed missing target coverage in general. A certain level of composite margin might give a better approximation for the dose calculation. Conclusion: Our exploratory results suggest that with the lung margin only, the planning dose of the PTV might overestimate the coverage of the target during treatment. The significance of this overestimation might warrant further investigation.

  20. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach

    PubMed Central

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2018-01-01

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
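
    A minimal sketch of fitting one margin of such a model by maximum likelihood, using scipy's beta-binomial distribution; the study counts are invented, and the paper's full model couples two such margins through a composite likelihood:

        import numpy as np
        from scipy.stats import betabinom
        from scipy.optimize import minimize

        # Invented study data: event counts and sizes for one binary outcome.
        events = np.array([12, 30, 8, 22, 15])
        sizes = np.array([100, 120, 90, 110, 95])

        # Marginal beta-binomial likelihood: y_i ~ BetaBin(n_i, a, b), i.e. the
        # study-specific probabilities are implicitly Beta(a, b) distributed,
        # so no link function or probability transformation is needed.
        def neg_log_lik(log_params):
            a, b = np.exp(log_params)  # optimize on log scale to keep a, b > 0
            return -betabinom.logpmf(events, sizes, a, b).sum()

        res = minimize(neg_log_lik, x0=np.log([2.0, 8.0]), method='Nelder-Mead')
        a_hat, b_hat = np.exp(res.x)
        print(a_hat, b_hat, a_hat / (a_hat + b_hat))  # pooled mean probability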

  1. MO-FG-CAMPUS-TeP2-04: Optimizing for a Specified Target Coverage Probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fredriksson, A

    2016-06-15

    Purpose: The purpose of this work is to develop a method for inverse planning of radiation therapy margins. When using this method, the user specifies a desired target coverage probability and the system optimizes to meet the demand without any explicit specification of margins to handle setup uncertainty. Methods: The method determines which voxels to include in an optimization function promoting target coverage in order to achieve a specified target coverage probability. Voxels are selected in a way that retains the correlation between them: the target is displaced according to the setup errors and the voxels to include are selected as the union of the displaced target regions under the x% best scenarios according to some quality measure. The quality measure could depend on the dose to the considered structure alone or could depend on the dose to multiple structures in order to take into account correlation between structures. Results: A target coverage function was applied to the CTV of a prostate case with prescription 78 Gy and compared to conventional planning using a DVH function on the PTV. Planning was performed to achieve 90% probability of CTV coverage. The plan optimized using the coverage probability function had P(D98 > 77.95 Gy) = 0.97 for the CTV. The PTV plan using a constraint on minimum DVH 78 Gy at 90% had P(D98 > 77.95) = 0.44 for the CTV. To match the coverage probability optimization, the DVH volume parameter had to be increased to 97%, which resulted in 0.5 Gy higher average dose to the rectum. Conclusion: Optimizing a target coverage probability is an easily used method to find a margin that achieves the desired coverage probability. It can lead to reduced OAR doses at the same coverage probability compared to planning with margins and DVH functions.

  2. Living on the edge: Space use of Eurasian red squirrels in marginal high-elevation habitat

    NASA Astrophysics Data System (ADS)

    Romeo, Claudia; Wauters, Lucas A.; Preatoni, Damiano; Tosi, Guido; Martinoli, Adriano

    2010-11-01

    In marginal habitats located at the edge of a species' range, environmental conditions are frequently extreme and individuals may be subject to different selective pressures compared to central populations. These so-called edge or marginal populations tend to have lower densities and reproductive rates than populations located in more suitable habitats, but little is known about local adaptations in spacing behavior. We studied space use and social organization in a population of Eurasian red squirrels (Sciurus vulgaris) in a high-elevation marginal habitat of dwarf mountain pine (Pinus mugo) and compared it with spacing patterns in high-quality Scots pine (Pinus sylvestris) forest at lower elevation. Home ranges and core areas were larger in the marginal habitat. In both habitats, males used larger home ranges than females, but sex differences in core area size were significant only in the edge population. Patterns of core area overlap were similar in both habitats, with intra-sexual territoriality among adult females and higher degrees of inter-sexual overlap, typical for the species throughout its range. However, low densities in the edge population resulted in higher female-by-male overlap in spring-summer, suggesting males increased home ranges and core areas during the mating season to augment access to estrus females. Thus, in the marginal habitat, with low food abundance and low population densities, linked with extreme winter conditions, squirrels, especially males, used large home ranges. Finally, squirrels responded more strongly to variation in food availability (inverse relation between home range size and seed abundance), and even to fluctuations in density (inverse relation between core area size and density of animals of the same sex), in the marginal than in the high-quality habitat, suggesting high behavioral plasticity in response to the ecological constraints of marginal habitats.

  3. Eigenvalue statistics for the sum of two complex Wishart matrices

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2014-09-01

    The sum of independent Wishart matrices, taken from distributions with unequal covariance matrices, plays a crucial role in multivariate statistics, and has applications in the fields of quantitative finance and telecommunication. However, analytical results concerning the corresponding eigenvalue statistics have remained unavailable, even for the sum of two Wishart matrices. This can be attributed to the complicated and rotationally noninvariant nature of the matrix distribution that makes extracting the information about eigenvalues a nontrivial task. Using a generalization of the Harish-Chandra-Itzykson-Zuber integral, we find exact solution to this problem for the complex Wishart case when one of the covariance matrices is proportional to the identity matrix, while the other is arbitrary. We derive exact and compact expressions for the joint probability density and marginal density of eigenvalues. The analytical results are compared with numerical simulations and we find perfect agreement.
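
    The empirical marginal eigenvalue density that such analytical results are compared against can be simulated directly; a sketch using real Wishart matrices (the paper treats the complex case; dimensions, degrees of freedom and covariances here are illustrative):

        import numpy as np
        from scipy.stats import wishart

        rng = np.random.default_rng(6)
        p = 4  # matrix dimension (illustrative)

        # One covariance proportional to the identity, the other arbitrary,
        # matching the solvable case described in the abstract.
        sigma1 = 2.0 * np.eye(p)
        A = rng.normal(size=(p, p))
        sigma2 = A @ A.T + p * np.eye(p)

        # Monte Carlo estimate of the marginal eigenvalue density of the sum.
        eigs = []
        for _ in range(2000):
            w = (wishart.rvs(df=10, scale=sigma1, random_state=rng)
                 + wishart.rvs(df=10, scale=sigma2, random_state=rng))
            eigs.extend(np.linalg.eigvalsh(w))
        hist, edges = np.histogram(eigs, bins=30, density=True)
        print(hist.max(), edges[0], edges[-1])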

  4. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    PubMed

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with the existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.

  5. Probabilistic Design of a Mars Sample Return Earth Entry Vehicle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Mitcheltree, Robert A.

    2002-01-01

    The driving requirement for design of a Mars Sample Return mission is to assure containment of the returned samples. Designing to, and demonstrating compliance with, such a requirement requires physics based tools that establish the relationship between engineer's sizing margins and probabilities of failure. The traditional method of determining margins on ablative thermal protection systems, while conservative, provides little insight into the actual probability of an over-temperature during flight. The objective of this paper is to describe a new methodology for establishing margins on sizing the thermal protection system (TPS). Results of this Monte Carlo approach are compared with traditional methods.

  6. An assessment of PTV margin based on actual accumulated dose for prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Wen, Ning; Kumarasiri, Akila; Nurushev, Teamour; Burmeister, Jay; Xing, Lei; Liu, Dezhi; Glide-Hurst, Carri; Kim, Jinkoo; Zhong, Hualiang; Movsas, Benjamin; Chetty, Indrin J.

    2013-11-01

    The purpose of this work is to present the results of a margin reduction study involving dosimetric and radiobiologic assessment of cumulative dose distributions, computed using an image guided adaptive radiotherapy based framework. Eight prostate cancer patients, treated with 7–9 6-MV intensity modulated radiation therapy (IMRT) fields, were included in this study. The workflow consists of cone beam CT (CBCT) based localization, deformable image registration of the CBCT to simulation CT image datasets (SIM-CT), dose reconstruction and dose accumulation on the SIM-CT, and plan evaluation using radiobiological models. For each patient, three IMRT plans were generated with different margins applied to the CTV. The PTV margin for the original plan was 10 mm and 6 mm at the prostate/anterior rectal wall interface (10/6 mm) and was reduced to: (a) 5/3 mm, and (b) 3 mm uniformly. The average percent reductions in predicted tumor control probability (TCP) in the accumulated (actual) plans in comparison to the original plans over eight patients were 0.4%, 0.7% and 11.0% with 10/6 mm, 5/3 mm and 3 mm uniform margin respectively. The mean increase in predicted normal tissue complication probability (NTCP) for grades 2/3 rectal bleeding for the actual plans in comparison to the static plans with margins of 10/6, 5/3 and 3 mm uniformly was 3.5%, 2.8% and 2.4% respectively. For the actual dose distributions, predicted NTCP for late rectal bleeding was reduced by 3.6% on average when the margin was reduced from 10/6 mm to 5/3 mm, and further reduced by 1.0% on average when the margin was reduced to 3 mm. The average reduction in complication free tumor control probability (P+) in the actual plans in comparison to the original plans with margins of 10/6, 5/3 and 3 mm was 3.7%, 2.4% and 13.6% respectively. The significant reduction of TCP and P+ in the actual plan with the 3 mm margin came from one outlier, suggesting that individualizing patient treatment plans through margin adaptation based on biological models might yield higher quality treatments.

  7. TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siebers, J.

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation in the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  8. Bivariate normal, conditional and rectangular probabilities: A computer program with applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.; Ashwworth, G. R.; Winter, W. R.

    1980-01-01

    Some results for bivariate normal distribution analysis are presented. Computer programs for conditional normal probabilities, marginal probabilities, as well as joint probabilities for rectangular regions are given; routines for computing fractile points and distribution functions are also presented. Some examples from a closed circuit television experiment are included.
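
    The rectangular probabilities described can be reproduced with modern library routines; a minimal sketch using inclusion-exclusion on the bivariate normal CDF (the mean, covariance and box limits are illustrative):

        import numpy as np
        from scipy.stats import multivariate_normal, norm

        mean = np.zeros(2)
        cov = np.array([[1.0, 0.6], [0.6, 1.0]])  # illustrative correlation 0.6
        mvn = multivariate_normal(mean, cov)

        # Rectangular probability P(a1 < X < b1, a2 < Y < b2) by
        # inclusion-exclusion on the joint CDF.
        a, b = np.array([-1.0, -0.5]), np.array([1.0, 1.5])
        rect = (mvn.cdf([b[0], b[1]]) - mvn.cdf([a[0], b[1]])
                - mvn.cdf([b[0], a[1]]) + mvn.cdf([a[0], a[1]]))

        # Marginal and conditional probabilities follow from standard formulas:
        # X ~ N(0, 1) marginally, and Y | X = x ~ N(rho * x, 1 - rho**2).
        marginal = norm.cdf(b[0]) - norm.cdf(a[0])
        cond = norm(0.6 * 1.0, np.sqrt(1 - 0.6**2)).cdf(1.5)  # P(Y < 1.5 | X = 1)
        print(rect, marginal, cond)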

  9. Probable influence of early Carboniferous (Tournaisian-early Visean) geography on the development of Waulsortian and Waulsortian-like mounds

    NASA Astrophysics Data System (ADS)

    King, David T., Jr.

    1990-07-01

    All of the known Tournaisian-early Visean (ca. 360-348 Ma) age carbonate mud mounds (Waulsortian and Waulsortian-like mounds) developed in low paleolatitudes on the southern shelf margin of Laurussia and in the Laurussian interior seaway. The Tournaisian-early Visean geography probably prevented hurricanes, tropical storms, and winter storms from crossing the shelf margin or interior seaway where these mounds developed. Implications of the lack of storm energy on mound development are discussed.

  10. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For a new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this distribution. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between these two DVHs for each cancer type, and the average of relative point-wise differences is about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.
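
    A minimal sketch of the conditional-density step: estimate the joint density of (feature, dose) with a 2D KDE, then normalize a slice of it to obtain p(dose | feature). The training data below are synthetic, and the single distance feature stands in for the paper's two predictive features:

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(7)

        # Synthetic training data: (distance-to-target, dose) pairs pooled over
        # prior plans; dose falls off with distance, plus noise (all invented).
        dist = rng.uniform(0.0, 30.0, size=2000)
        dose = 60.0 * np.exp(-dist / 10.0) + rng.normal(0.0, 2.0, size=2000)

        kde = gaussian_kde(np.vstack([dist, dose]))  # joint density p(dist, dose)

        # Conditional density p(dose | distance) = joint slice, renormalized.
        grid = np.linspace(0.0, 70.0, 200)
        dg = grid[1] - grid[0]
        joint_slice = kde(np.vstack([np.full_like(grid, 10.0), grid]))
        cond = joint_slice / (joint_slice.sum() * dg)

        # Expected dose 10 mm from the target; ~60 * exp(-1) = 22 for this toy data.
        print((grid * cond).sum() * dg)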

  11. Smoothing of Transport Plans with Fixed Marginals and Rigorous Semiclassical Limit of the Hohenberg-Kohn Functional

    NASA Astrophysics Data System (ADS)

    Cotar, Codina; Friesecke, Gero; Klüppelberg, Claudia

    2018-06-01

    We prove rigorously that the exact N-electron Hohenberg-Kohn density functional converges in the strongly interacting limit to the strictly correlated electrons (SCE) functional, and that the absolute value squared of the associated constrained search wavefunction tends weakly in the sense of probability measures to a minimizer of the multi-marginal optimal transport problem with Coulomb cost associated to the SCE functional. This extends our previous work for N = 2 (Cotar et al. in Commun Pure Appl Math 66:548-599, 2013). The correct limit problem has been derived in the physics literature by Seidl (Phys Rev A 60:4387-4395, 1999) and Seidl, Gori-Giorgi and Savin (Phys Rev A 75:042511, 2007); in these papers the lack of a rigorous proof was pointed out. We also give a mathematical counterexample to this type of result, by replacing the constraint of given one-body density (an infinite-dimensional quadratic expression in the wavefunction) by an infinite-dimensional quadratic expression in the wavefunction and its gradient. Connections with the Lavrentiev phenomenon in the calculus of variations are indicated.

  12. Healthy Eating and Leisure-Time Activity: Cross-Sectional Analysis of the Role of Work Environments in the U.S.

    PubMed

    Williams, Jessica A R; Arcaya, Mariana; Subramanian, S V

    2017-11-01

    The aim of this study was to evaluate relationships between work context and two health behaviors, healthy eating and leisure-time physical activity (LTPA), in U.S. adults. Using data from the 2010 National Health Interview Survey (NHIS) and Occupational Information Network (N = 14,863), we estimated a regression model to predict the marginal and joint probabilities of healthy eating and adhering to recommended exercise guidelines. Decision-making freedom was positively related to healthy eating and both behaviors jointly. Higher physical load was associated with a lower marginal probability of LTPA, healthy eating, and both behaviors jointly. Smoke and vapor exposures were negatively related to healthy eating and both behaviors. Chemical exposure was positively related to LTPA and both behaviors. Characteristics associated with marginal probabilities were not always predictive of joint outcomes. On the basis of nationwide occupation-specific evidence, workplace characteristics are important for healthy eating and LTPA.

  13. Statistical characterization of portal images and noise from portal imaging systems.

    PubMed

    González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge

    2013-06-01

    In this paper, we consider the statistical characteristics of so-called portal images, which are acquired prior to radiotherapy treatment, as well as the noise present in portal imaging systems, in order to analyze whether the well-known noise and image features of other image modalities, such as natural images, can be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, like the characteristics of natural images and white noise. Finally, we discuss the implications of the results obtained for several noise reduction methods that operate in the wavelet domain.

  14. [Demography and nesting ecology of green iguana, Iguana iguana (Squamata: Iguanidae), in 2 exploited populations in Depresión Momposina, Colombia].

    PubMed

    Muñoz, Eliana M; Ortega, Angela M; Bock, Brian C; Páez, Vivian P

    2003-03-01

    We studied the demography and nesting ecology of two populations of Iguana iguana that face heavy exploitation and habitat modification in the Momposina Depression, Colombia. Line transect data were analyzed using the Fourier model to provide estimates of social group densities, which were found to differ both within and among populations (1.05-6.0 groups/ha). Mean group size and overall iguana density estimates varied between populations as well (1.5-13.7 iguanas/ha). The density estimates were far lower than those reported from more protected areas in Panama and Venezuela. Iguana densities were consistently higher in sites located along rivers (2.5 iguanas/group) than in sites along the margins of marshes (1.5 iguanas/group), probably due to vegetational differences. There was no correlation between density estimates and estimates of relative abundance (number of iguanas seen/hour/person), due to differing detectabilities of iguana groups among sites. The adult sex ratio (1:2.5 males:females) agreed well with other reports in the literature based upon observation of adult social groups, and probably results from the polygynous mating system in this species rather than a real demographic skew. Nesting in this population occurs from the end of January through March, and hatching occurs between April and May. We monitored 34 nests, which suffered little vertebrate predation, perhaps due to the lack of a complete vertebrate fauna in this densely inhabited area, but nests suffered from inundation, cattle trampling, and infestation by phorid fly larvae. Clutch sizes in these populations were lower than all other published reports except for the iguana population on the highly xeric island of Curaçao, implying that adult females in our area are unusually small. We argue that this is more likely the result of the exploitation of these populations than an adaptive response to environmentally extreme conditions.

  15. TU-AB-BRB-03: Coverage-Based Treatment Planning to Accommodate Organ Deformable Motions and Contouring Uncertainties for Prostate Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, H.

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead, robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques, including the use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined, as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  16. TU-AB-BRB-02: Stochastic Programming Methods for Handling Uncertainty and Motion in IMRT Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unkelbach, J.

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead, robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques, including the use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined, as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  17. TU-AB-BRB-00: New Methods to Ensure Target Coverage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead, robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques, including the use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined, as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust planning as a clinical alternative to margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  18. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and one week after training. Results were analyzed using multi-level modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density; here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
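
    For readers unfamiliar with the linear spline model named in the Results, a minimal sketch follows: a one-knot ("hinge") regression fitted by least squares, in which the slope is allowed to change at the knot. The knot location, covariate scaling, and data below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 4, 200)        # probability score in quartile units
    knot = 1.0                        # assumed knot between the lowest quartiles
    # Synthetic accuracy: slope +0.3 below the knot, -0.2 above it, plus noise.
    y = (0.4 + 0.3 * np.minimum(x, knot)
         - 0.2 * np.clip(x - knot, 0, None)
         + rng.normal(0, 0.05, x.size))

    # Design matrix for a one-knot linear spline: intercept, x, hinge term.
    X = np.column_stack([np.ones_like(x), x, np.clip(x - knot, 0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # beta[1] is the slope below the knot; beta[1] + beta[2] the slope above it.
    print(beta)
    ```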

  19. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  20. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702

  1. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    PubMed

    Loxley, P N

    2017-10-01

    The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
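
    The two-dimensional Gabor function at the core of this model is standard; a minimal sketch with a common parameterization (envelope widths, spatial frequency, orientation, phase) is given below. The exact form and parameter conventions used in the paper may differ.

    ```python
    import numpy as np

    def gabor_2d(x, y, sigma_x, sigma_y, freq, theta, phase=0.0):
        """Standard 2D Gabor: an oriented sinusoid under a Gaussian envelope."""
        xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
        carrier = np.cos(2 * np.pi * freq * xr + phase)
        return envelope * carrier

    # Example: a 64x64 receptive-field profile; sigma_y/sigma_x sets the
    # aspect ratio discussed in the abstract.
    xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    rf = gabor_2d(xs, ys, sigma_x=0.2, sigma_y=0.3, freq=3.0, theta=np.pi / 4)
    ```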

  2. How Can Histograms Be Useful for Introducing Continuous Probability Distributions?

    ERIC Educational Resources Information Center

    Derouet, Charlotte; Parzysz, Bernard

    2016-01-01

    The teaching of probability has changed a great deal since the end of the last century. The development of technologies is indeed part of this evolution. In France, continuous probability distributions began to be studied in 2002 by scientific 12th graders, but this subject was marginal and appeared only as an application of integral calculus.…

  3. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.

    PubMed

    Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A

    2015-01-15

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
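
    A minimal sketch of the stabilized inverse probability weights that both approaches estimate, using logistic regression as the baseline model the abstract compares against; an ensemble learner (SL or EL) would simply replace the fitted probability model. Data and variable names are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def stabilized_weights(A, L):
        """Stabilized weights for a binary treatment A: sw = P(A=a) / P(A=a | L).
        Logistic regression is the baseline; the abstract's point is that an
        ensemble learner can be substituted for it."""
        # Denominator: treatment probability given covariates L.
        denom_model = LogisticRegression(max_iter=1000).fit(L, A)
        p_denom = denom_model.predict_proba(L)[np.arange(len(A)), A]
        # Numerator: marginal treatment probability (no covariates).
        p_a1 = A.mean()
        p_num = np.where(A == 1, p_a1, 1.0 - p_a1)
        return p_num / p_denom

    # Synthetic illustration; the mean of the weights should be close to 1.
    rng = np.random.default_rng(2)
    L = rng.normal(size=(1000, 3))
    A = rng.binomial(1, 1 / (1 + np.exp(-L[:, 0])))
    sw = stabilized_weights(A, L)
    print(sw.mean())
    ```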

  4. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets

    PubMed Central

    Gruber, Susan; Logan, Roger W.; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A.

    2014-01-01

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152

  5. Morphology of central California continental margin, revealed by long-range side-scan sonar (GLORIA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, J.V.; McCulloch, D.S.; Eittreim, S.L.

    1985-02-01

    Leg 2 of the 4-leg USGS EEZ-SCAN 84 program used GLORIA long-range side-scan sonar to survey the region from Pt. Conception to just south of Pt. Arena, from the shelf break to the 200-nmi limit of coverage. The overlapping digital sonographs were slant-range and anamorphically corrected, and a photomosaic of the sonographs was constructed at a scale of 1:375,000 (1 in. = 11.1 km). The underlying bed rock appears to be an important control in shaping the morphology of this margin. Several faults have sea-floor expression and lie subparallel to the margin. The density of canyons and gullies on the slope varies from south to north, probably because of variations in the characteristics of the bed rock. The slope west of San Francisco is the most dissected segment of the central California slope. Monterey Fan is covered by large-scale bed forms (5-15 m amplitude and 1.5-2.0 km wavelength) over much of its surface. Monterey Channel runs southwestward across the fan, but abruptly turns south along a 40-km-long surface fault that coincides with a well-mapped meander loop. The channel loops to the north, then turns southward, crossing the entire Monterey Fan; at its distal reaches, it changes to a broad, braided pattern. Major slumps on the margin have long (>30 km) scarps, some have slump folds, and one has a debris-flow deposit that can be acoustically traced for more than 75 km. Seventeen new seamounts were mapped. Taney Seamounts are large, rimmed calderas with diameters of about 15 km each; these appear to be very large explosive or summit-collapse features.

  6. Maintenance of traditional cultural orientation is associated with lower rates of obesity and sedentary behaviours among African migrant children to Australia.

    PubMed

    Renzaho, A M N; Swinburn, B; Burns, C

    2008-04-01

    Migrants from developing to developed countries rapidly develop more obesity than the host population. While the effects of socio-economic status on obesity are well established, the influence of cultural factors, including acculturation, is not known. To examine the association between acculturation and obesity and its risk factors among African migrant children in Australia. A cross-sectional study using a non-probability sample of 3- to 12-year-old sub-Saharan African migrant children. A bidimensional model of strength of affiliation with African and Australian cultures was used to divide the sample into four cultural orientations: traditional (African), assimilated (Australian), integrated (both) and marginalized (neither). Body mass index (BMI), leisure-time physical activity (PA) and sedentary behaviours (SBs) and energy density of food. In all, 18.4% (95% confidence interval (CI): 14-23%) were overweight and 8.6% (95% CI: 6-12%) were obese. After adjustment for confounders, integrated (beta=1.1; P<0.05) and marginalized (beta=1.4; P<0.01) children had higher BMI than traditional children. However, integrated children had significantly higher time engaged in both PA (beta=46.9, P<0.01) and SBs (beta=43.0, P<0.05) than their traditional counterparts. In comparison with traditional children, assimilated children were more sedentary (beta=57.5, P<0.01) while marginalization was associated with increased consumption of energy-dense foods (beta=42.0, P<0.05). Maintenance of traditional orientation was associated with lower rates of obesity and SBs. Health promotion programs and frameworks need to be rooted in traditional values and habits to maintain and reinforce traditional dietary and PA habits, as well as identify the marginalized clusters and address their needs.

  7. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.
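
    The paper's specific establishment model is not reproduced in the abstract; a commonly used form for a mate-finding component Allee effect, in which the pairing probability rises from zero and saturates with density, can be sketched as follows. The functional form and the parameter value are assumptions, not the authors' fitted model.

    ```python
    import numpy as np

    def mate_finding_probability(density, theta=0.5):
        """Generic mate-finding component Allee effect: the probability that a
        disperser finds a mate rises with density and saturates at 1; theta is
        the density at which the probability equals 0.5."""
        return density / (density + theta)

    densities = np.array([0.05, 0.1, 0.5, 1.0, 5.0])  # breeding units per area
    print(mate_finding_probability(densities))
    ```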

  8. Endocochlear potential generation is associated with intercellular communication in the stria vascularis: structural analysis in the viable dominant spotting mouse mutant.

    PubMed

    Carlisle, L; Steel, K; Forge, A

    1990-11-01

    Deafness in the viable dominant spotting mouse mutant is due to a primary defect of the stria vascularis which results in absence of the positive endocochlear potential in scala media. Endocochlear potentials were measured and the structure of stria vascularis of mutants with potentials close to zero was compared with that in normal littermate controls by use of morphometric methods. The stria vascularis was significantly thinner in mutants. Marginal cells were not significantly different from controls in terms of volume density or intramembrane particle density but the network density of tight junctions was significantly reduced in the mutants. A virtual absence of gap junctions between basal cells and marginal or intermediate cells was observed, but intramembrane particle density and junctional complexes between adjacent basal cells were not different from controls. The volume density of basal cells was significantly greater in mutants. Intermediate cells accounted for a significantly smaller volume density of the stria vascularis in mutants and had a lower density of intramembrane particles than controls. Melanocytes were not identified in the stria vascularis of mutants. These results suggest that communication between marginal, intermediate and basal cells might be important to the normal function of the stria vascularis.

  9. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016

  10. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.

  11. Metocean design parameter estimation for fixed platform based on copula functions

    NASA Astrophysics Data System (ADS)

    Zhai, Jinjin; Yin, Qilin; Dong, Sheng

    2017-08-01

    Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Thirty years of hindcast data on wave height, wind speed, and current velocity in the Bohai Sea are sampled for a case study. Four kinds of distributions, namely, the Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for the marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, the Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those obtained by univariate probability. By accounting for the dependence among variables, the multivariate probability distributions provide design parameters closer to the actual sea state for ocean platform design.
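
    As an illustration of the copula construction (not the paper's fitted model), the sketch below samples a bivariate Clayton copula, one of the four Archimedean families named above, by conditional inversion, and maps the uniform marginals to physical variables. Weibull marginals are used here for simplicity, whereas the paper selects Pearson Type III marginals; all parameter values are invented.

    ```python
    import numpy as np
    from scipy import stats

    def sample_clayton(n, theta, rng):
        """Draw (u, v) from a bivariate Clayton copula (theta > 0) via
        conditional inversion: v is solved from dC(u, v)/du = t."""
        u = rng.uniform(size=n)
        t = rng.uniform(size=n)
        v = (u ** (-theta) * (t ** (-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        return u, v

    rng = np.random.default_rng(3)
    u, v = sample_clayton(10000, theta=2.0, rng=rng)

    # Map the uniform marginals to physical variables via inverse CDFs.
    wave_height = stats.weibull_min(c=1.8, scale=2.5).ppf(u)   # wave height (m)
    wind_speed = stats.weibull_min(c=2.0, scale=10.0).ppf(v)   # wind speed (m/s)
    ```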

  12. Lithospheric thickness jumps at the S-Atlantic continental margins from satellite gravity data and modelled isostatic anomalies

    NASA Astrophysics Data System (ADS)

    Shahraki, Meysam; Schmeling, Harro; Haas, Peter

    2018-01-01

    Isostatic equilibrium is a good approximation for passive continental margins. In these regions, geoid anomalies are proportional to the local dipole moment of density-depth distributions, which can be used to constrain the amount of oceanic to continental lithospheric thickening (lithospheric jumps). We consider a five- or three-layer 1D model for the oceanic and continental lithosphere, respectively, composed of water, a sediment layer (both for the oceanic case), the crust, the mantle lithosphere, and the asthenosphere. The mantle lithosphere is defined by a mantle density which is a function of temperature and composition, due to melt depletion. In addition, a depth-dependent sediment density associated with compaction and ocean floor variation is adopted. We analyzed satellite-derived geoid data and, after filtering, extracted typical averaged profiles across the western and eastern passive margins of the South Atlantic. They show geoid jumps of 8.1 m and 7.0 m for the Argentinian and African sides, respectively. Together with topography data and an averaged crustal density at the conjugate margins, these jumps are interpreted as isostatic geoid anomalies and yield best-fitting crustal and lithospheric thicknesses. In a grid search approach, five parameters are systematically varied, namely the thicknesses of the sediment layer, the oceanic and continental crusts, and the oceanic and continental mantle lithosphere. The set of successful models reveals a clear asymmetry of 15 km between the South African and Argentine lithospheres. Preferred models predict a sediment layer at the Argentine margin of 3-6 km and at the South African margin of 1-2.5 km. Moreover, we derived a linear relationship between oceanic lithosphere thickness, sediment thickness, and lithospheric jumps at the South Atlantic margins. It suggests that the continental lithospheres on the western and eastern sides of the South Atlantic are thicker than the oceanic lithospheres by 45-70 and 60-80 km, respectively.

  13. Margin selection to compensate for loss of target dose coverage due to target motion during external‐beam radiation therapy of the lung

    PubMed Central

    Osei, Ernest; Barnett, Rob

    2015-01-01

    The aim of this study is to provide guidelines for the selection of external‐beam radiation therapy target margins to compensate for target motion in the lung during treatment planning. A convolution model was employed to predict the effect of target motion on the delivered dose distribution. The accuracy of the model was confirmed with radiochromic film measurements in both static and dynamic phantom modes. 502 unique patient breathing traces were recorded and used to simulate the effect of target motion on a dose distribution. A 1D probability density function (PDF) representing the position of the target throughout the breathing cycle was generated from each breathing trace obtained during 4D CT. Changes in the target D95 (the minimum dose received by 95% of the treatment target) due to target motion were analyzed and shown to correlate with the standard deviation of the PDF. Furthermore, the amount of target D95 recovered per millimeter of increased field width was also shown to correlate with the standard deviation of the PDF. The sensitivity of changes in dose coverage with respect to target size was also determined. Margin selection recommendations that can be used to compensate for loss of target D95 were generated based on the simulation results. These results are discussed in the context of clinical plans. We conclude that, for PDF standard deviations less than 0.4 cm with target sizes greater than 5 cm, little or no additional margins are required. Targets which are smaller than 5 cm with PDF standard deviations larger than 0.4 cm are most susceptible to loss of coverage. The largest additional required margin in this study was determined to be 8 mm. PACS numbers: 87.53.Bn, 87.53.Kn, 87.55.D‐, 87.55.Gh
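
    A minimal 1D sketch of the convolution model described above: the dose delivered to a moving target is approximated by convolving the static dose profile with the motion PDF, after which a D95-type metric can be recomputed. Field size, motion width, and target extent below are illustrative, not the study's values.

    ```python
    import numpy as np

    # Static flat-field dose profile on a fine 1D grid.
    x = np.linspace(-10, 10, 2001)            # position (cm), 0.01 cm spacing
    dx = x[1] - x[0]
    field_half_width = 3.0
    static_dose = np.where(np.abs(x) <= field_half_width, 100.0, 0.0)

    # Gaussian motion PDF with an assumed standard deviation of 0.4 cm.
    sigma = 0.4
    pdf = np.exp(-0.5 * (x / sigma) ** 2)
    pdf /= pdf.sum() * dx                     # normalize the PDF

    # Delivered dose = static profile convolved with the motion PDF.
    blurred = np.convolve(static_dose, pdf, mode="same") * dx

    # D95 of a target occupying |x| <= 2.5 cm: the minimum dose received by
    # 95% of the target, i.e., the 5th percentile of dose within the target.
    target = np.abs(x) <= 2.5
    d95_static = np.percentile(static_dose[target], 5)
    d95_moving = np.percentile(blurred[target], 5)
    print(d95_static, d95_moving)  # motion lowers D95; widening the field recovers it
    ```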

  14. Technology-Enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    ERIC Educational Resources Information Center

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like "What is the chance of event A occurring, given that event B was observed?" This generic question arises in discussions of many intriguing scientific questions such as "What is the probability that an adolescent weighs between 120 and 140 pounds given that…

  15. Tectonic evolution of the northern African margin in Tunisia from paleostress data and sedimentary record

    NASA Astrophysics Data System (ADS)

    Bouaziz, Samir; Barrier, Eric; Soussi, Mohamed; Turki, Mohamed M.; Zouari, Hédi

    2002-11-01

    A reconstruction of the tectonic evolution of the northern African margin in Tunisia since the Late Permian, combining paleostress, tectonostratigraphic, and sedimentary approaches, allows the characterization of several major periods corresponding to consistent stress patterns. The extension lasting from the Late Permian to the Middle Triassic is contemporaneous with the rifting related to the breakup of Pangea. During Liassic times, regional extensional tectonics caused the dislocation of the initial continental platform. In northern Tunisia, the evolution of the Liassic NE-SW rifting led during Dogger times to the North African passive continental margin, whereas in southern Tunisia, a N-S extension, associated with E-W trending subsiding basins, lasted from the Jurassic until the Early Cretaceous. After an Upper Aptian-Early Albian transpressional event, NE-SW to ENE-WSW trending extensions prevailed during the Late Cretaceous, in relationship with the general tectonic evolution of the northeastern African plate. The inversions started in the Late Maastrichtian-Paleocene in northern Tunisia, probably as a consequence of Africa-Eurasia convergence. Two major NW-SE trending compressions occurred in the Late Eocene and in the Middle-Late Miocene, alternating with extensional periods in the Eocene, Oligocene, Early-Middle Miocene, and Pliocene. The latter compressional event led to the complete inversion of the basins of the northwestern African plate, giving rise to the Maghrebide chain. Such a study, supported by a high density of paleostress data and including complementary structural and stratigraphic approaches, provides a reliable way of determining the regional tectonic evolution.

  16. The Continent-Ocean Transition in the Mid-Norwegian Margin: Insight From Seismic Data and the Onshore Caledonian Analogue in the Seve Nappe Complex

    NASA Astrophysics Data System (ADS)

    Abdelmalak, Mansour M.; Planke, Sverre; Andersen, Torgeir B.; Faleide, Jan Inge; Corfu, Fernando; Tegner, Christian; Myklebust, Reidun

    2015-04-01

    The continental breakup and initial seafloor spreading in the NE Atlantic was accompanied by widespread intrusive and extrusive magmatism and the formation of conjugate volcanic passive margins. These margins are characterized by the presence of seaward dipping reflectors (SDR), an intense network of mafic sheet intrusions of the continental crust and adjacent sedimentary basins, and a high-velocity lower crustal body. Nevertheless, many issues remain unclear regarding the structure of volcanic passive margins, in particular the transitional crust located beneath the SDR. New and reprocessed seismic reflection data on the Mid-Norwegian margin allow better sub-basalt imaging of the transitional crust located beneath the SDR. Different high-amplitude reflections with abrupt terminations and saucer-shaped geometries are identified and interpreted as sill intrusions. Other near-vertical and inclined reflections are interpreted as dykes or dyke swarms. We have mapped the extent of the dyke reflections along the volcanic margin. The mapping suggests that the dykes represent the main feeder system for the SDR. The identification of saucer-shaped sills implies the presence of sediments in the transitional zone beneath the volcanic sequences. Onshore exposures of Precambrian basement of the eroded volcanic margin in East Greenland show that, locally, the transitional crust is highly intruded by dykes and intrusive complexes, with an increasing intensity of the plumbing and dilatation of the continental crust ocean-ward. Another well-exposed analogue for a continent-ocean transitional crust is located within the Seve Nappe Complex (SNC) of the Scandinavian Caledonides. The best-preserved parts of the SNC in the Pårte, Sarek, Kebnekaise, Abisko, and Indre Troms mountains are composed mainly of meta-sandstones and shales (now hornfelses), truncated typically by mafic dykes. At Sarek and Pårte, the dykes intrude the sedimentary rocks of the Favoritkammen Group, with a dyke density up to 70-80%. This complex was photographed in a regional helicopter survey and sampled in 2014 for the study of the different dyke generations, their geochemistry, and ages. Extending for at least 800 km within the SNC, the mafic igneous rocks most probably belonged to a volcanic system with the size of a large igneous province (LIP). This volcanic margin is suggested to have formed along the Caledonian margin of Baltica or within hyperextended continental slivers outboard of Baltica during the breakup of Rodinia. The intensity of the pre-Caledonian LIP magmatism is comparable to that of the NE Atlantic volcanic margins. The SNC-LIP is considered to represent a potential onshore analogue to the deeper levels of the Mid-Norwegian margin transitional crust, and permits direct observation, sampling, and better understanding of deeper levels of magma-rich margins.

  17. WE-AB-209-08: Novel Beam-Specific Adaptive Margins for Reducing Organ-At-Risk Doses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsang, H; Kamerling, CP; Ziegenhein, P

    2016-06-15

    Purpose: Current practice of using 3D margins in radiotherapy with high-energy photon beams provides larger-than-required target coverage. According to the photon depth-dose curve, target displacements in the beam direction result in minute changes in the dose delivered. We exploit this behavior by generating margins on a per-beam basis which simultaneously account for the relative distance of the target and adjacent organs-at-risk (OARs). Methods: For each beam, we consider only geometrical uncertainties of the target location perpendicular to the beam direction. By weighting voxels based on their proximity to an OAR, we generate adaptive margins that yield similar overall target coverage probability and reduced OAR dose-burden, at the expense of increased target volume. Three IMRT plans, using 3D margins and 2D per-beam margins with and without adaptation, were generated for five prostate patients with a prescription dose Dpres of 78Gy in 2Gy fractions using identical optimisation constraints. Systematic uncertainties of 1.1, 1.1, 1.5mm in the LR, SI, and AP directions, respectively, and 0.9, 1.1, 1.0mm for the random uncertainties, were assumed. A verification tool was employed to simulate the effects of systematic and random errors using a population size of 50,000. The fraction of the population that satisfies or violates a given DVH constraint was used for comparison. Results: We observe similar target coverage across all plans, with at least 97.5% of the population meeting the D98%>95%Dpres constraint. When looking at the probability of the population receiving D5<70Gy for the rectum, we observed median absolute increases of 23.61% (range, 2.15%–27.85%) and 6.97% (range, 0.65%–17.76%) using per-beam margins with and without adaptation, respectively, relative to using 3D margins. Conclusion: We observed sufficient and similar target coverage using per-beam margins. By adapting each per-beam margin away from an OAR, we can further reduce OAR dose, without significantly lowering target coverage probability, by irradiating a larger volume of less critical tissue. This work is supported by Cancer Research UK under Programme C33589/A19908. Research at ICR is also supported by Cancer Research UK under Programme C33589/A19727 and NHS funding to the NIHR Biomedical Research Centre at RMH and ICR.

  18. Robust approaches to quantification of margin and uncertainty for sparse data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin

    Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.

  19. Dosimetric and radiobiological consequences of computed tomography-guided adaptive strategies for intensity modulated radiation therapy of the prostate.

    PubMed

    Battista, Jerry J; Johnson, Carol; Turnbull, David; Kempe, Jeff; Bzdusek, Karl; Van Dyk, Jacob; Bauman, Glenn

    2013-12-01

    To examine a range of scenarios for image-guided adaptive radiation therapy of prostate cancer, including different schedules for megavoltage CT imaging, patient repositioning, and dose replanning. We simulated multifraction dose distributions with deformable registration using 35 sets of megavoltage CT scans of 13 patients. We computed cumulative dose-volume histograms, from which tumor control probabilities and normal tissue complication probabilities (NTCPs) for rectum were calculated. Five-field intensity modulated radiation therapy (IMRT) with 18-MV x-rays was planned to achieve an isocentric dose of 76 Gy to the clinical target volume (CTV). The differences between D95, tumor control probability, V70Gy, and NTCP for rectum, for accumulated versus planned dose distributions, were compared for different target volume sizes, margins, and adaptive strategies. The CTV D95 for IMRT treatment plans, averaged over 13 patients, was 75.2 Gy. Using the largest CTV margins (10/7 mm), the D95 values accumulated over 35 fractions were within 2% of the planned value, regardless of the adaptive strategy used. For tighter margins (5 mm), the average D95 values dropped to approximately 73.0 Gy even with frequent repositioning, and daily replanning was necessary to correct this deficit. When personalized margins were applied to an adaptive CTV derived from the first 6 treatment fractions using the STAPLE (Simultaneous Truth and Performance Level Estimation) algorithm, target coverage could be maintained using a single replan 1 week into therapy. For all approaches, normal tissue parameters (rectum V70Gy and NTCP) remained within acceptable limits. The frequency of adaptive interventions depends on the size of the CTV combined with the target margins used during IMRT optimization. The application of adaptive target margins (<5 mm) to an adaptive CTV determined 1 week into therapy minimizes the need for subsequent dose replanning. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
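
    The correspondence stated above can be checked numerically: for an isotropic 2D joint density whose force-magnitude distribution is exponential, marginalizing one Cartesian component reproduces a modified-Bessel-K density. This is a verification sketch under the isotropy assumption, not the paper's transform machinery.

    ```python
    import numpy as np
    from scipy.special import k0
    from scipy.integrate import quad

    # For an isotropic 2D force distribution with exponential magnitude density
    # P(F) = exp(-F), the joint Cartesian density is p(fx, fy) = P(F) / (2*pi*F),
    # with F = sqrt(fx^2 + fy^2). Marginalizing over fy should reproduce the
    # modified Bessel form p(fx) = K0(|fx|) / pi cited in the abstract.
    def cartesian_marginal(fx):
        integrand = lambda fy: np.exp(-np.hypot(fx, fy)) / (2 * np.pi * np.hypot(fx, fy))
        val, _ = quad(integrand, -np.inf, np.inf)
        return val

    for fx in [0.5, 1.0, 2.0]:
        print(cartesian_marginal(fx), k0(fx) / np.pi)  # the two columns agree
    ```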

  1. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
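
    The idea in the final sentence lends itself to a simple Monte Carlo sketch (a simplification of the idea, not the paper's exact test statistic): estimate the probability that a fresh draw from the specified density has density no larger than that of the observed draw.

    ```python
    import numpy as np
    from scipy import stats

    def density_test_pvalue(x_obs, dist, n_mc=100000, rng=None):
        """Monte Carlo p-value for the 'small density is unlikely' idea: the
        fraction of draws from `dist` whose density is <= the density of the
        observed draw. A tiny p-value suggests x_obs did not come from `dist`."""
        rng = rng or np.random.default_rng()
        samples = dist.rvs(size=n_mc, random_state=rng)
        return np.mean(dist.pdf(samples) <= dist.pdf(x_obs))

    # A draw far in the tail of a standard normal yields a small p-value
    # (the exact tail probability for |X| >= 4 is about 6e-5).
    print(density_test_pvalue(4.0, stats.norm()))
    print(density_test_pvalue(0.5, stats.norm()))  # unremarkable draw: large p
    ```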

  2. INTEGRATION OF RELIABILITY WITH MECHANISTIC THERMALHYDRAULICS: REPORT ON APPROACH AND TEST PROBLEM RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. S. Schroeder; R. W. Youngblood

    The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature) and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
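
    Given probabilistic load and capacity spectra, the failure probability discussed above is simply P(load > capacity); a minimal Monte Carlo sketch with assumed independent normal spectra follows. The distributions are invented for illustration, and the report itself notes that in practice load and capacity will often not be independent.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1_000_000

    # Illustrative 'load' and 'capacity' spectra, e.g., peak clad temperature
    # demanded vs. tolerated (deg F). Parameters are assumptions.
    load = rng.normal(loc=1800.0, scale=120.0, size=n)
    capacity = rng.normal(loc=2200.0, scale=80.0, size=n)

    p_failure = np.mean(load > capacity)  # probability that load exceeds capacity
    print(p_failure)
    ```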

  3. LES/PDF studies of joint statistics of mixture fraction and progress variable in piloted methane jet flames with inhomogeneous inlet flows

    NASA Astrophysics Data System (ADS)

    Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng

    2016-11-01

    The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.
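    A minimal sketch of how joint and marginal PDFs relate in practice (synthetic samples standing in for LES/PDF output; the correlation below is invented for illustration):

```python
# Estimate the joint PDF of mixture fraction Z and progress variable c from
# samples, then recover the marginal PDFs by integrating the joint histogram.
import numpy as np

rng = np.random.default_rng(1)
Z = rng.beta(2.0, 5.0, size=50_000)                      # mixture-fraction samples
c = np.clip(rng.normal(0.6 * Z + 0.2, 0.1), 0.0, 1.0)    # correlated progress variable

joint, z_edges, c_edges = np.histogram2d(Z, c, bins=40, range=[[0, 1], [0, 1]],
                                         density=True)
dz = np.diff(z_edges)[0]
dc = np.diff(c_edges)[0]
pdf_Z = joint.sum(axis=1) * dc   # marginal over c
pdf_c = joint.sum(axis=0) * dz   # marginal over Z
print("marginals integrate to:", pdf_Z.sum() * dz, pdf_c.sum() * dc)
```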

  4. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
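    A hedged sketch of the general modeling approach (simulated data and hypothetical covariates, not the study's variables):

```python
# Logistic model for the probability of obtaining regeneration at a
# specified density, fitted by maximum likelihood.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
site_index = rng.uniform(15, 25, n)          # hypothetical predictor
target_density = rng.uniform(250, 1500, n)   # specified density (trees/ha)
X = sm.add_constant(np.column_stack([site_index, target_density]))

# y = 1 if observed regeneration met the specified density, else 0 (simulated)
logits = 0.2 * site_index - 0.003 * target_density - 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

fit = sm.Logit(y, X).fit(disp=False)
print(fit.params)   # maximum likelihood estimates of the logistic coefficients
```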

  5. Permeability structure and its influence on microbial activity at off-Shimokita basin, Japan

    NASA Astrophysics Data System (ADS)

    Tanikawa, W.; Yamada, Y.; Sanada, Y.; Kubo, Y.; Inagaki, F.

    2016-12-01

    Microbial populations and the limit of microbial life in deep subseafloor sediments are probably controlled by chemical, physical, and geological conditions, such as temperature, pore water chemistry, pH, and water activity; however, the key parameters affecting growth remain unclarified (Hinrichs and Inagaki 2012). IODP Expedition 337 was conducted near a continental margin basin off the Shimokita Peninsula, Japan, to investigate microbial activity in deep marine coalbed sediments down to 2500 mbsf. Inagaki et al. (2015) discovered that microbial abundance decreased markedly with depth (the lowest cell density of <1 cell/cm³ was recorded below 2000 mbsf), and that the coal bed layers had relatively higher cell densities. In this study, permeability was measured on core samples from IODP Expedition 337 and Expedition CK06-06 of the D/V Chikyu shakedown cruise. Permeability was measured at in-situ effective pressure conditions and calculated by the steady-state flow method, keeping the differential pore pressure between 0.1 and 0.8 MPa. Our results show that the permeability of the core samples decreases with depth, from 10⁻¹⁶ m² at the seafloor to 10⁻²⁰ m² at the bottom of the hole. However, permeability is highly scattered within the coal bed unit (1900 to 2000 mbsf). Permeabilities for sandstone and coal are higher than those for siltstone and shale; the scatter of permeabilities within the same unit is therefore due to the high variation in lithology. The highest permeability was observed in coal samples, probably owing to the formation of micro cracks (cleats). Permeability estimated from NMR logging using empirical parameters is around two orders of magnitude higher than the permeability of core samples, even though the relative vertical variation of permeability is quite similar between core and logging data. Higher cell densities are observed in the relatively permeable formations. On the other hand, the correlation between cell density, water activity, and porosity is not clear. On the assumption that the pressure gradient is constant with depth, flow rate is proportional to the permeability of the sediments. Flow rate probably restricts the availability of energy and nutrients for microorganisms; permeability might therefore have influenced microbial activity in the coalbed basin.
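    The steady-state calculation reduces to Darcy's law. A sketch with illustrative values (not the paper's measurements):

```python
# Darcy's law: permeability k = Q * mu * L / (A * dP), in m^2.
def darcy_permeability(q_m3_s, mu_pa_s, length_m, area_m2, dp_pa):
    """Steady-state permeability from a constant-flow measurement."""
    return q_m3_s * mu_pa_s * length_m / (area_m2 * dp_pa)

k = darcy_permeability(q_m3_s=1e-12,   # measured flow rate
                       mu_pa_s=1e-3,   # water viscosity
                       length_m=0.02,  # sample length
                       area_m2=5e-4,   # cross-sectional area
                       dp_pa=0.5e6)    # differential pore pressure (0.5 MPa)
print(f"k = {k:.2e} m^2")              # ~8e-20 m^2 with these inputs
```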

  6. Local linear estimation of concordance probability with application to covariate effects models on association for bivariate failure-time data.

    PubMed

    Ding, Aidong Adam; Hsieh, Jin-Jian; Wang, Weijing

    2015-01-01

    Bivariate survival analysis has wide applications. In the presence of covariates, most of the literature focuses on studying their effects on the marginal distributions. However, covariates can also affect the association between the two variables. In this article we consider the latter issue by proposing a nonstandard local linear estimator for the concordance probability as a function of covariates. Under the Clayton copula, the conditional concordance probability has a simple one-to-one correspondence with the copula parameter for different data structures including those subject to independent or dependent censoring and dependent truncation. The proposed method can be used to study how covariates affect the Clayton association parameter without specifying marginal regression models. Asymptotic properties of the proposed estimators are derived and their finite-sample performances are examined via simulations. Finally, for illustration, we apply the proposed method to analyze a bone marrow transplant data set.
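    The Clayton correspondence follows from standard copula identities (textbook material, not the paper's local linear estimator): Kendall's tau is τ = θ/(θ+2), and the concordance probability is (1+τ)/2, so either quantity determines the other:

```python
# One-to-one map between the Clayton parameter theta and the
# concordance probability, via Kendall's tau = theta / (theta + 2).
def clayton_concordance(theta):
    tau = theta / (theta + 2.0)
    return 0.5 * (1.0 + tau)

def clayton_theta_from_concordance(p):
    tau = 2.0 * p - 1.0
    return 2.0 * tau / (1.0 - tau)

print(clayton_concordance(2.0))               # 0.75
print(clayton_theta_from_concordance(0.75))   # 2.0
```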

  7. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
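    The positivity problem is easy to reproduce. A short demonstration using the truncated Gram-Charlier A series (standard form, with probabilists' Hermite polynomials; not the paper's positive expansion):

```python
# A truncated Gram-Charlier expansion of a skewed density can dip below zero.
import numpy as np

def gram_charlier(x, skew, ex_kurt):
    """Gaussian corrected with probabilists' Hermite polynomials He3, He4."""
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    he3 = x**3 - 3 * x
    he4 = x**4 - 6 * x**2 + 3
    return phi * (1 + skew / 6 * he3 + ex_kurt / 24 * he4)

x = np.linspace(-5, 5, 1001)
f = gram_charlier(x, skew=1.5, ex_kurt=0.0)
print("min of truncated expansion:", f.min())   # negative => not a valid density
```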

  8. Tectonic types of marginal and inner seas; their place in the development of the crust

    NASA Astrophysics Data System (ADS)

    Khain, V. E.; Levin, L. E.

    1980-12-01

    Inner and marginal deep seas are of considerable interest not only for their genesis but also as "micromodels" of oceans. In the latter case it must be noted that some of them differ essentially from oceans in several parameters: they have a shorter period of development, thicker sedimentary cover, less distinct linear magnetic anomalies or an absence of them, and high heat-flow values and seismic activity over their whole area. Consequently, the analogy with the oceans has certain limitations, as the deep structure of such seas is not homogeneous and they probably vary in genesis. Only a few marginal seas are cut off from the principal areas of the oceans by island arcs formed, most probably, along transform faults. An origin of this type is more or less reliably demonstrated for the Bering Sea. Other types of marginal seas are more numerous. Some of them (such as the Gulf of Aden and the Gulf of California) are embryonic apophyses connected with the oceans. Others (the Tasman and Labrador seas) are atrophied small oceans. The group of marginal and inner seas lying on the inner side of mature or young island arcs is even more numerous. Only a few basins of this group resulted from linear spreading imprinted in the system of magnetic anomalies (the Shikoku-Parece Vela basin); the rest are supposed to have been formed by diffuse or polyaxial spreading in recent time, as in Afar. The majority of inner and marginal seas are younger than the recent oceans. They are formed by rifting oriented crosswise to continental margins of the Atlantic type or along the strike of margins of the Andean type. More ancient basins of marginal and inner seas have been involved in Phanerozoic orogens or, more rarely, became parts of platforms (Ciscaspian syneclise).

  9. Small-target leak detection for a closed vessel via infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Yang, Hongjiu

    2017-03-01

    This paper focuses on a leak diagnosis and localization method based on infrared image sequences. Two problems in leak detection are addressed: the high probability of false warnings and the negative effect of marginal information. An experimental model is established for leak diagnosis and localization on infrared image sequences. A differential background prediction based on a kernel regression method is presented to eliminate the negative effect of marginal information on the test vessel. A pipeline filter based on layered voting is designed to reduce the probability of false warnings at the leak point. A synthesized leak diagnosis and localization algorithm is proposed based on infrared image sequences. The effectiveness and potential of the developed techniques are demonstrated through experimental results.

  10. QTL Mapping of Endocochlear Potential Differences between C57BL/6J and BALB/cJ mice.

    PubMed

    Ohlemiller, Kevin K; Kiener, Anna L; Gagnon, Patricia M

    2016-06-01

    We reported earlier that the endocochlear potential (EP) differs between C57BL/6J (B6) and BALB/cJ (BALB) mice, being lower in BALBs by about 10 mV (Ohlemiller et al. Hear Res 220: 10-26, 2006). This difference corresponds to strain differences with respect to the density of marginal cells in cochlear stria vascularis. After about 1 year of age, BALB mice also tend toward EP reduction that correlates with further marginal cell loss. We therefore suggested that early sub-clinical features of the BALB stria vascularis may predispose these mice to a condition modeling Schuknecht's strial presbycusis. We further reported (Ohlemiller et al. J Assoc Res Otolaryngol 12: 45-58, 2011) that the acute effects of a 2-h 110 dB SPL noise exposure differ between B6 and BALB mice, such that the EP remains unchanged in B6 mice, but is reduced by 40-50 mV in BALBs. In about 25 % of BALBs, the EP does not completely recover, so that permanent EP reduction may contribute to noise-induced permanent threshold shifts in BALBs. To identify genes and alleles that may promote natural EP variation as well as noise-related EP reduction in BALB mice, we have mapped related quantitative trait loci (QTLs) using 12 recombinant inbred (RI) strains formed from B6 and BALB (CxB1-CxB12). EP and strial marginal cell density were measured in B6 mice, BALB mice, their F1 hybrids, and RI mice without noise exposure, and 1-3 h after broadband noise (4-45 kHz, 110 dB SPL, 2 h). For unexposed mice, the strain distribution patterns for EP and marginal cell density were used to generate preliminary QTL maps for both EP and marginal cell density. Six QTL regions were at least statistically suggestive, including a significant QTL for marginal cell density on chromosome 12 that overlapped a weak QTL for EP variation. This region, termed Maced (Marginal cell density QTL) supports the notion of marginal cell density as a genetically influenced contributor to natural EP variation. Candidate genes for Maced notably include Foxg1, Foxa1, Akap6, Nkx2-1, and Pax9. Noise exposure produced significant EP reductions in two RI strains as well as significant EP increases in two RI strains. QTL mapping of the EP in noise-exposed RI mice yielded four suggestive regions. Two of these overlapped with QTL regions we previously identified for noise-related EP reduction in CBA/J mice (Ohlemiller et al. Hear Res 260: 47-53, 2010) on chromosomes 5 and 18 (Nirep). The present map may narrow the Nirep interval to a ~10-Mb region of proximal Chr. 18 that includes Zeb1, Arhgap12, Mpp7, and Gjd4. This study marks the first exploration of natural gene variants that modulate the EP. Their orthologs may underlie some human hearing loss that originates in the lateral wall.

  11. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. Parameter uncertainty is fully and systematically accounted for. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to the prediction of Arctic sea-ice extent.
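    A generic sketch of the approach (a Gaussian with linear drift is assumed here purely for illustration; the paper's parametric model is not specified in the abstract):

```python
# Fit a nonstationary density p(x; t) by maximum likelihood, then
# extrapolate it forward in time.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
t = np.arange(200.0)
x = rng.normal(loc=10.0 - 0.02 * t, scale=1.0)   # drifting synthetic series

def neg_log_lik(params):
    a, b, log_s = params
    return -norm.logpdf(x, loc=a + b * t, scale=np.exp(log_s)).sum()

res = minimize(neg_log_lik, x0=[x.mean(), 0.0, 0.0])
a, b, log_s = res.x
t_future = 300.0
print(f"extrapolated mean at t={t_future}: {a + b * t_future:.2f}, "
      f"sigma = {np.exp(log_s):.2f}")
```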

  12. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled using copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both the marginal probability distributions and the copula functions. A case study with 54 years of observed data illustrates the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and from marginal distributions. This study provides, for the first time, a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
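    The copula mechanics can be sketched compactly (a Gumbel copula with fixed θ is assumed here for illustration; NSCOBE fits time-varying copulas and margins):

```python
# Joint exceedance probability of two dependent flood variables from the
# copula identity P(U > u, V > v) = 1 - u - v + C(u, v).
import numpy as np

def gumbel_copula(u, v, theta):
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

def joint_exceedance(u, v, theta):
    return 1 - u - v + gumbel_copula(u, v, theta)

u = v = 0.99                     # marginal non-exceedance of the 100-year level
for theta in (1.0, 2.0, 5.0):    # theta = 1 recovers independence (1e-4 here)
    print(theta, joint_exceedance(u, v, theta))
```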

  13. Contrasting upper-mantle shear wave anisotropy across the transpressive Queen Charlotte margin

    NASA Astrophysics Data System (ADS)

    Cao, Lingmin; Kao, Honn; Wang, Kelin

    2017-10-01

    In order to investigate upper mantle and crustal anisotropy along the transpressive Queen Charlotte margin between the Pacific (PA) and North America (NA) plates, we conducted shear wave splitting analyses using 17 seismic stations in and around the island of Haida Gwaii, Canada. Despite the limited station coverage at present, our reconnaissance study does reveal a systematic pattern of mantle anisotropy in this region. Fast directions derived from teleseismic SKS-phase splitting are mostly margin-parallel (NNW-SSE) near the plate boundary but transition to predominantly E-W-trending farther away. We propose that the former is associated with the absolute motion of PA, and the latter reflects a transition from this direction to that of the absolute motion of NA. The broad width of the zone of transition from the PA to NA direction is probably caused by the very obliquely subducting PA slab that travels primarily in the margin-parallel direction. Anisotropy of Haida Gwaii based on local earthquakes features a fast direction that cannot be explained with regional stresses and is probably associated with local structural fabric within the overriding crust. Our preliminary shear wave splitting measurements and working hypotheses based on them will serve to guide more refined future studies to unravel details of the geometry and kinematics of the subducted PA slab, as well as the viscous coupling between the slab and upper mantle in other transpressive margins.

  14. A New Approach in Generating Meteorological Forecasts for Ensemble Streamflow Forecasting using Multivariate Functions

    NASA Astrophysics Data System (ADS)

    Khajehei, S.; Madadgar, S.; Moradkhani, H.

    2014-12-01

    The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters, and model structure. One approach to reducing the total uncertainty in hydrological applications is to reduce the uncertainty in the meteorological forcing using statistical methods based on conditional probability density functions (PDFs). However, current methods require assuming a Gaussian distribution for the marginal distributions of the observed and modeled meteorology. Here we propose a Bayesian approach based on copula functions to develop the conditional distribution of the precipitation forecast needed to drive a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach for capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distributions of univariate marginal distributions, capable of modeling the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly CPC forecast at 0.25°×0.25° resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with the Ensemble Pre-Processor approach, a common procedure used by National Weather Service River Forecast Centers, in reproducing the observed climatology during a ten-year verification period (2000-2010).

  15. Use of nonlinear programming to optimize performance response to energy density in broiler feed formulation.

    PubMed

    Guevara, V R

    2004-02-01

    A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. The optimum metabolizable energy level and performance were found using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function, income over feed cost, in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability, and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with the conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming for optimizing the performance response to energy density in broiler feed formulation, because an energy level does not need to be set.
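    A hedged sketch of the optimization (the quadratic coefficients below are invented for illustration, not the paper's fitted responses):

```python
# Choose energy density E to maximize income over feed cost, i.e.
# margin(E) = price * gain(E) - feed_cost(E) * feed(E), with gain and feed
# expressed as quadratics in E as in the paper's setup.
from scipy.optimize import minimize_scalar

def margin(E, broiler_price=0.9, base_feed_cost=0.18):
    gain = -1.2 * E**2 + 7.6 * E - 8.0             # kg gained (illustrative)
    feed = -0.3 * E**2 + 1.2 * E + 2.5             # kg consumed (illustrative)
    feed_cost = base_feed_cost * (0.6 + 0.15 * E)  # cost rises with density
    return broiler_price * gain - feed_cost * feed

res = minimize_scalar(lambda E: -margin(E), bounds=(2.8, 3.4), method="bounded")
print(f"optimal energy density ~ {res.x:.2f} Mcal/kg, margin = {-res.fun:.3f}")
```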

  16. Combined risk assessment of nonstationary monthly water quality based on Markov chain and time-varying copula.

    PubMed

    Shi, Wei; Xia, Jun

    2017-02-01

    Water quality risk management is a global research focus closely linked with sustainable water resource development. Ammonium nitrogen (NH3-N) and the permanganate index (CODMn), the focus indicators in the Huai River Basin, are selected to reveal their joint transition laws based on Markov theory. A time-varying moments model, with either time or a land cover index as the explanatory variable, is applied to build time-varying marginal distributions of the water quality time series. A time-varying copula model, which takes into account the non-stationarity in the marginal distributions and/or the time variation in the dependence structure between the water quality series, is constructed to describe a bivariate frequency analysis for the NH3-N and CODMn series at the same monitoring gauge. The large first-order Markov joint transition probability indicates that water quality states Class Vw, Class IV, and Class III will occur easily in the water body at Bengbu Sluice. Both the marginal distribution and copula models are nonstationary, and the explanatory variable time yields better performance than the land cover index in describing the non-stationarities in the marginal distributions. In modelling changes in the dependence structure, the time-varying copula has a better fitting performance than copulas with a constant or time-trend dependence parameter. The largest synchronous encounter risk probability of NH3-N and CODMn simultaneously reaching Class V is 50.61%, while the asynchronous encounter risk probability is largest when NH3-N and CODMn are inferior to Class V and Class IV water quality standards, respectively.
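    A minimal sketch of the Markov component (a hypothetical monthly class sequence, not the Huai River data):

```python
# Estimate a first-order transition probability matrix from observed
# water quality classes.
import numpy as np

classes = ["III", "IV", "Vw"]
seq = ["III", "IV", "IV", "Vw", "Vw", "Vw", "IV", "III", "IV", "Vw", "Vw", "IV"]

idx = {c: i for i, c in enumerate(classes)}
counts = np.zeros((3, 3))
for a, b in zip(seq[:-1], seq[1:]):
    counts[idx[a], idx[b]] += 1

transition = counts / counts.sum(axis=1, keepdims=True)
print(transition)   # row i: P(next class = j | current class = i)
```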

  17. Essays on refining markets and environmental policy

    NASA Astrophysics Data System (ADS)

    Oladunjoye, Olusegun Akintunde

    This thesis comprises three essays. The first two essays examine empirically the relationship between crude oil price and wholesale gasoline prices in the U.S. petroleum refining industry, while the third essay determines the optimal combination of emissions tax and environmental research and development (ER&D) subsidy when firms organize ER&D either competitively or as a research joint venture (RJV). In the first essay, we estimate an error correction model to determine the effects of market structure on the speed of adjustment of wholesale gasoline prices to crude oil price changes. The results indicate that market structure does not have a strong effect on the dynamics of price adjustment in the three regional markets examined. In the second essay, we allow for inventories to affect the relationship between crude oil and wholesale gasoline prices by allowing them to affect the probability of regime change in a Markov-switching model of the refining margin. We find that low gasoline inventory increases the probability of switching from the low margin regime to the high margin regime and also increases the probability of staying in the high margin regime. This is consistent with the predictions of the competitive storage theory. In the third essay, we extend the Industrial Organization R&D theory to the determination of optimal environmental policies. We find that RJV is socially desirable. In comparison to competitive ER&D, we suggest that regulators should encourage RJV with a lower emissions tax and higher subsidy as these will lead to the coordination of ER&D activities and eliminate duplication of efforts while firms internalize their technological spillover externality.

  18. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  19. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  20. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  1. Impact of damming on the Chironomidae of the upper zone of a tropical run-of-the-river reservoir.

    PubMed

    Brandimarte, A L; Anaya, M; Shimizu, G Y

    2016-06-01

    We examined the effects of the damming of the Mogi-Guaçu River (São Paulo State, Brazil) on the Chironomidae fauna. Sampling before, during, and after filling was carried out in the main channel and margins of one site in the upper zone of the reservoir, using a modified Petersen grab (325 cm²). We evaluated the total, subfamily, and tribe densities as well as their relative abundances. Analysis of genera included densities, relative abundance, richness, and dominance. Rosso's ecological value index (EVI) determined the ecological importance of each genus. There was a tendency toward a decrease in total Chironomidae density, an increase in the percentage of Chironomini, and decreases in the densities and percentages of Orthocladiinae and Tanytarsini. These changes in percentage were related to Polypedilum, Lopescladius, and Rheotanytarsus, respectively, the genera with the highest EVI values. Post-filling richness was lower in the margins, and the dominance of genera did not change significantly. Chironomidae in the margins were more sensitive to damming than in the main channel. This difference in sensitivity supports the use of Chironomidae as bioindicators. The damming impact was indicated by the reduction of both genus richness in the margins and the relative abundance of groups typical of faster waters. The results highlight the need for multi-habitat analysis combined with a before-after sampling approach in environmental impact studies concerning the effect of damming on the benthic fauna.

  2. OCT structure, COB location and magmatic type of the S Angolan & SE Brazilian margins from integrated quantitative analysis of deep seismic reflection and gravity anomaly data

    NASA Astrophysics Data System (ADS)

    Cowie, Leanne; Kusznir, Nick; Horn, Brian

    2014-05-01

    Integrated quantitative analysis using deep seismic reflection data and gravity inversion has been applied to the S Angolan and SE Brazilian margins to determine OCT structure, COB location and magmatic type. Knowledge of these margin parameters is of critical importance for understanding rifted continental margin formation processes and for evaluating petroleum systems in deep-water frontier oil and gas exploration. The OCT structure, COB location and magmatic type of the S Angolan and SE Brazilian rifted continental margins are much debated; exhumed and serpentinised mantle have been reported at these margins. Gravity anomaly inversion, incorporating a lithosphere thermal gravity anomaly correction, has been used to determine Moho depth, crustal basement thickness and continental lithosphere thinning. Residual Depth Anomaly (RDA) analysis has been used to investigate OCT bathymetric anomalies with respect to expected oceanic bathymetries, and subsidence analysis has been used to determine the distribution of continental lithosphere thinning. These techniques have been validated for profiles Lusigal 12 and ISE-01 on the Iberian margin. In addition, a joint inversion technique using deep seismic reflection and gravity anomaly data has been applied to the ION-GXT BS1-575 SE Brazil and ION-GXT CS1-2400 S Angola deep seismic reflection lines. The joint inversion method solves for a coincident seismic and gravity Moho in the time domain and calculates the lateral variations in crustal basement densities and velocities along the seismic profiles. Gravity inversion, RDA and subsidence analysis along the ION-GXT BS1-575 profile, which crosses the Sao Paulo Plateau and Florianopolis Ridge of the SE Brazilian margin, predict the COB to be located SE of the Florianopolis Ridge. Integrated quantitative analysis shows no evidence for exhumed mantle on this margin profile. The joint inversion technique predicts oceanic crustal thicknesses of between 7 and 8 km with normal oceanic basement seismic velocities and densities. Beneath the Sao Paulo Plateau and Florianopolis Ridge, joint inversion predicts crustal basement thicknesses between 10 and 15 km, with high values of basement density and seismic velocity under the Sao Paulo Plateau, which are interpreted as indicating a significant magmatic component within the crustal basement. The Sao Paulo Plateau and Florianopolis Ridge are separated by a thin region of crustal basement beneath the salt, interpreted as a regional transtensional structure. Sediment-corrected RDAs and gravity-derived "synthetic" RDAs are of a similar magnitude on oceanic crust, implying negligible mantle dynamic topography. Gravity inversion, RDA and subsidence analysis along the S Angolan ION-GXT CS1-2400 profile suggest that exhumed mantle, corresponding to a magma-poor margin, is absent. The thickness of the earliest oceanic crust, derived from gravity and deep seismic reflection data, is approximately 7 km, consistent with the global average oceanic crustal thickness. The joint inversion predicts a small difference between oceanic and continental crustal basement density and seismic velocity, with the change in basement density and velocity corresponding to the COB independently determined from RDA and subsidence analysis. The difference between the sediment-corrected RDA and that predicted from gravity inversion crustal thickness variation implies that this margin is experiencing approximately 500 m of anomalous uplift, attributed to mantle dynamic uplift.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mascle, J.; Blarez, E.

    The authors present a marine study of the eastern Ivory Coast-Ghana continental margin, which they consider one of the most spectacular extinct transform margins. This margin was created during Early-Lower Cretaceous time and has not been subjected to any major geodynamic reactivation since its formation. Based on this example, they propose four main successive stages in the evolution of a transform margin. Shearing contact is first active between two probably thick continental crusts, and then between progressively thinning continental crusts. This leads to the creation of specific geological structures such as pull-apart grabens, elongated fault lineaments, major fault scarps, shear folds, and marginal ridges. After the final continental breakup, a hot center (the mid-oceanic ridge axis) progressively drifts along the newly created margin. The contact between two lithospheres of different nature should necessarily induce, by thermal exchange, vertical crustal readjustments. Finally, the transform margin remains directly adjacent to a hot but cooling oceanic lithosphere; its subsidence behavior should then progressively become comparable to the thermal subsidence of classic rifted margins.

  4. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
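    The robust location step can be sketched with the standard Weiszfeld iteration (the paper builds its full PDF estimator on top of this; only the L1-median is shown):

```python
# The L1-median minimizes the sum of Euclidean distances to the sample
# points, and is far less sensitive to outliers than the sample mean.
import numpy as np

def l1_median(X, n_iter=100, tol=1e-9):
    y = X.mean(axis=0)                      # initial guess
    for _ in range(n_iter):
        d = np.linalg.norm(X - y, axis=1)
        d = np.where(d < tol, tol, d)       # guard against division by zero
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(25, 1, (10, 2))])  # 5% outliers
print("mean:     ", X.mean(axis=0))   # pulled toward the outliers
print("L1-median:", l1_median(X))     # stays near the bulk of the data
```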

  5. Is the Pauli exclusion principle the origin of electron localisation?

    NASA Astrophysics Data System (ADS)

    Rincón, Luis; Torres, F. Javier; Almeida, Rafael

    2018-03-01

    In this work, we inquire into the origins of electron localisation as obtained from the information content of the same-spin pair density, γσ,σ(r2|r1). To this end, we consider systems of non-interacting and interacting identical fermions contained in two simple 1D potential models: (1) an infinite potential well and (2) the Kronig-Penney periodic potential. The interparticle interaction is treated through the Hartree-Fock approximation as well as the configuration interaction expansion. Moreover, the electron localisation is described through the Kullback-Leibler divergence between γσ,σ(r2|r1) and its associated marginal probability. The results show that, as long as the adopted method properly includes the Pauli principle, the electronic localisation depends only modestly on the interparticle interaction. In view of the latter, one may conclude that the Pauli principle is mainly responsible for electron localisation.
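    The localisation measure itself is generic. A discretized sketch (hypothetical one-dimensional densities, not the paper's pair densities):

```python
# Kullback-Leibler divergence between a conditional density and a marginal:
# a positive value indicates the conditional carries extra (localisation)
# information relative to the marginal.
import numpy as np

def kl_divergence(p, q, dx):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx

x = np.linspace(0, 1, 1000)
dx = x[1] - x[0]
marginal = np.ones_like(x)                          # uniform reference density
conditional = 1 + 0.8 * np.cos(2 * np.pi * 3 * x)   # periodically localized
conditional /= conditional.sum() * dx               # normalize to unit area

print(kl_divergence(conditional, marginal, dx))     # > 0: localisation present
```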

  6. Searching for Abrupt Circulation Shifts in Marine Isotope Stage 2 and 3

    NASA Astrophysics Data System (ADS)

    Henry, L. E.; Lynch-Stieglitz, J.; Schmidt, M. W.

    2008-12-01

    During Marine Isotope Stage 3, DO events were recorded in the Greenland ice cores and North Atlantic Ocean sediment records. Some cold DO stadials have been associated with massive freshwater inputs, termed Heinrich Events. These Heinrich Events are frequently associated with "drop dead" circulation periods in which the production of North Atlantic Deep Water is greatly diminished. DO events are thought to result from a restructuring of the overturning circulation. We explore these proposed changes in Atlantic Ocean circulation by examining changes in seawater density in the Florida Straits. The density is inferred from the δ18O of the benthic foraminifera C. pachyderma and P. ariminensis taken from core sites on the Florida and Greater Bahamas Bank margins. The flow through the Florida Straits is in near-geostrophic balance, which means that the vertical shear in the current is reflected in a strong density gradient across the Straits. During the Younger Dryas and the Last Glacial Maximum, the density gradient was reduced, consistent with weaker flow through the Straits at these times. A weakening of the Florida Current would be expected if the large-scale Atlantic Meridional Overturning Circulation weakened, as has been proposed based on other studies. The Younger Dryas event manifests itself as well-correlated decreases in δ18O in the cores on the Florida margin, while their counterparts from the Bahamas remain relatively stable when adjusted for global ice volume. Here, we will present data extending back 32 kyr, focusing on those cores taken from the Florida Margin which can resolve millennial-scale changes during Marine Isotope Stage 2 and Late Stage 3. We will examine the relationship between circulation changes, as reflected in Florida Margin density, and the three most recent Heinrich events, as well as the most recent DO events.

  7. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
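    A hedged sketch of one of the compared approaches, the standard-normal weights (simulated data, not the empirical cohort used in the paper):

```python
# Stabilized inverse probability weights for a continuous exposure:
# w = f(exposure) / f(exposure | covariates), both densities assumed normal.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(11)
n = 5000
z = rng.normal(size=n)                       # confounder
exposure = 0.5 * z + rng.normal(size=n)      # homoscedastic normal exposure

# denominator: normal density of exposure given covariates
den_fit = sm.OLS(exposure, sm.add_constant(z)).fit()
den_sd = np.sqrt(den_fit.scale)
den = norm.pdf(exposure, loc=den_fit.fittedvalues, scale=den_sd)

# numerator: marginal normal density of exposure (stabilization)
num = norm.pdf(exposure, loc=exposure.mean(), scale=exposure.std())

weights = num / den
print("mean weight ~ 1:", weights.mean())
```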

  8. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  9. Passive microrheology of normal and cancer cells after ML7 treatment by atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyapunova, Elena, E-mail: lyapunova@icmm.ru; Ural Federal University, Kuibyishev Str. 48, Ekaterinburg, 620000; Nikituk, Alexander, E-mail: nas@icmm.ru

    Mechanical properties of living cancer and normal thyroidal cells were investigated by atomic force microscopy (AFM). Cell mechanics was compared before and after treatment with ML7, which is known to reduce myosin activity and induce softening of cell structures. We recorded force curves with an extended dwell time of 6 seconds in contact at maximum forces from 500 pN to 1 nN. Data were analyzed within different frameworks: a Hertz fit was applied in order to evaluate differences in Young's moduli among cell types and conditions, while the fluctuations of the cantilever in contact with cells were analyzed with both conventional algorithms (probability density function and power spectral density) and multifractal detrended fluctuation analysis (MF-DFA). We found that cancer cells were softer than normal cells and that ML7 had a substantial softening effect on normal cells, but only a marginal one on cancer cells. Moreover, we observed that all recorded signals for normal and cancer cells were monofractal, with small differences between their scaling parameters. Finally, the applicability of wavelet-based methods of data analysis for the discrimination of different cell types is discussed.

  10. Quantifying Melt Ponds in the Beaufort MIZ using Linear Support Vector Machines from High Resolution Panchromatic Images

    NASA Astrophysics Data System (ADS)

    Ortiz, M.; Graber, H. C.; Wilkinson, J.; Nyman, L. M.; Lund, B.

    2017-12-01

    Much work has been done on determining changes in summer ice albedo and in morphological properties of melt ponds, such as depth, shape, and distribution, using in-situ measurements and satellite-based sensors. Although these studies represent much pioneering work, they still lack sufficient spatial and temporal coverage. We present a prototype algorithm using Linear Support Vector Machines (LSVMs) designed to quantify the evolution of melt pond fraction from a recently government-declassified high-resolution panchromatic optical dataset. The study area of interest lies within the Beaufort marginal ice zone (MIZ), where several in-situ instruments were deployed by the British Antarctic Survey jointly with the MIZ Program from April to September 2014. The LSVM uses four-dimensional feature data from the intensity image itself and from various textures calculated with a modified first-order histogram technique using the probability density of occurrences. We explore both the temporal evolution of melt ponds and spatial statistics such as pond fraction, pond area, and pond number density, among others. We also introduce a linear regression model that can potentially be used to estimate average pond area by ingesting several melt pond statistics and shape parameters.
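    A minimal sketch of the classification step (synthetic four-dimensional features standing in for the study's intensity and texture measures):

```python
# Linear support vector machine separating melt-pond from ice pixels.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n = 2000
# four-dimensional feature vectors: intensity + three texture measures
ice = rng.normal(loc=[0.8, 0.2, 0.3, 0.1], scale=0.1, size=(n, 4))
pond = rng.normal(loc=[0.4, 0.5, 0.6, 0.3], scale=0.1, size=(n, 4))

X = np.vstack([ice, pond])
y = np.r_[np.zeros(n), np.ones(n)]   # 0 = ice, 1 = melt pond

clf = LinearSVC(C=1.0).fit(X, y)
pond_fraction = clf.predict(X).mean()    # fraction of pixels labeled as pond
print(f"predicted pond fraction = {pond_fraction:.2f}")
```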

  11. Experimental validation of the van Herk margin formula for lung radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ecclestone, Gillian; Heath, Emily; Bissonnette, Jean-Pierre

    2013-11-15

    Purpose: To validate the van Herk margin formula for lung radiation therapy using realistic dose calculation algorithms and respiratory motion modeling. The robustness of the margin formula against variations in lesion size, peak-to-peak motion amplitude, tissue density, treatment technique, and plan conformity was assessed, along with the margin formula assumption of a homogeneous dose distribution with perfect plan conformity. Methods: 3DCRT and IMRT lung treatment plans were generated within the ORBIT treatment planning platform (RaySearch Laboratories, Sweden) on 4DCT datasets of virtual phantoms. Random and systematic respiratory motion induced errors were simulated using deformable registration and dose accumulation tools available within ORBIT for simulated cases of varying lesion sizes, peak-to-peak motion amplitudes, tissue densities, and plan conformities. A detailed comparison between the margin formula dose profile model, the planned dose profiles, and penumbra widths was also conducted to test the assumptions of the margin formula. Finally, a correction to account for imperfect plan conformity was tested as well as a novel application of the margin formula that accounts for the patient-specific motion trajectory. Results: The van Herk margin formula ensured full clinical target volume coverage for all 3DCRT and IMRT plans of all conformities with the exception of small lesions in soft tissue. No dosimetric trends with respect to plan technique or lesion size were observed for the systematic and random error simulations. However, accumulated plans showed that plan conformity decreased with increasing tumor motion amplitude. When comparing dose profiles assumed in the margin formula model to the treatment plans, discrepancies in the low dose regions were observed for the random and systematic error simulations. However, the margin formula respected, in all experiments, the 95% dose coverage required for planning target volume (PTV) margin derivation, as defined by the ICRU; thus, suitable PTV margins were estimated. The penumbra widths calculated in lung tissue for each plan were found to be very similar to the 6.4 mm value assumed by the margin formula model. The plan conformity correction yielded inconsistent results which were largely affected by image and dose grid resolution, while the trajectory-modified PTV plans yielded a dosimetric benefit over the standard internal target volumes approach with up to a 5% decrease in the V20 value. Conclusions: The margin formula was shown to be robust against variations in tumor size and motion, treatment technique, plan conformity, as well as low tissue density. This was validated by maintaining coverage of all of the derived PTVs by the 95% dose level, as required by the formal definition of the PTV. However, the assumption of perfect plan conformity in the margin formula derivation yields conservative margin estimation. Future modifications to the margin formula will require a correction for plan conformity. Plan conformity can also be improved by using the proposed trajectory-modified PTV planning approach. This proves especially beneficial for tumors with a large anterior–posterior component of respiratory motion.
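    For reference, the widely quoted van Herk recipe is M = 2.5Σ + 0.7σ, with the systematic (Σ) and random (σ) standard deviations each combined in quadrature. A sketch with illustrative error values (not those of this validation study):

```python
# van Herk PTV margin recipe: M = 2.5 * Sigma + 0.7 * sigma.
import numpy as np

def van_herk_margin(systematic_sds_mm, random_sds_mm):
    Sigma = np.sqrt(np.sum(np.square(systematic_sds_mm)))   # systematic SDs
    sigma = np.sqrt(np.sum(np.square(random_sds_mm)))       # random SDs
    return 2.5 * Sigma + 0.7 * sigma

# e.g., setup + delineation systematic errors, setup + breathing random errors
print(f"PTV margin = {van_herk_margin([2.0, 1.5], [2.5, 3.0]):.1f} mm")  # ~9.0 mm
```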

  12. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and of how the wave function presented by it leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.

  13. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions

    PubMed Central

    Storkel, Holly L.; Lee, Jaehoon; Cox, Casey

    2016-01-01

    Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276

  14. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions.

    PubMed

    Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey

    2016-11-01

    Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.

  15. Positive margins prediction in breast cancer conservative surgery: Assessment of a preoperative web-based nomogram.

    PubMed

    Alves-Ribeiro, Lídia; Osório, Fernando; Amendoeira, Isabel; Fougo, José Luís

    2016-08-01

    Margin status of the surgical specimen has been shown to be a prognostic and risk factor for local recurrence in breast cancer surgery. It has been studied as a topic of intervention to diminish reoperation rates and reduce the probability of local recurrence in breast conservative surgery (BCS). This study aims to validate the Dutch BreastConservation! nomogram, created by Pleijhus et al., which predicts preoperative probability of positive margins in BCS. Patients with diagnosis of breast cancer stages cT1-2, who underwent BCS at the Breast Center of São João University Hospital (BC-CHSJ) in 2013-2014, were included. Association and correlation were evaluated for clinical, radiological, pathological and surgical variables. Multivariable logistic regression and ROC curves were used to assess nomogram parameters and discrimination. In our series of 253 patients, no associations were found between margin status and other studied variables (such as age or family history of breast cancer), except for weight (p-value = 0.045) and volume (p-value = 0.012) of the surgical specimen. Regarding the nomogram, a statistically significant association was shown between cN1 status and positive margins (p-value = 0.014). No differences were registered between the scores of patients with positive versus negative margins. Discrimination analysis showed an AUC of 0.474 for the basic and 0.508 for the expanded models. We cannot assume its external validation or its applicability to our cohort. Further studies are needed to determine the validity of this nomogram and achieve a broader view of currently available tools. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. US refining margin trend: austerity continues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Should crude oil prices hold near current levels in 1988, US refining margins might improve little, if at all. If crude oil prices rise, margins could blush pink or worse. If they drop, US refiners would still probably not see much margin improvement. In fact, if crude prices fall, they could set off another free fall in products markets and threaten refiner survival. Volatility in refined products markets and low product demand growth are the underlying reasons for caution or pessimism as the new year approaches. Recent directional patterns in refining margins are scrutinized in this issue. This issue also contains the following: (1) the ED refining netback data for the US Gulf and West Coasts, Rotterdam, and Singapore for late November, 1987; and (2) the ED fuel price/tax series for countries of the Eastern Hemisphere, November, 1987 edition. 4 figures, 6 tables.

  17. Predictions from star formation in the multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bousso, Raphael; Leichenauer, Stefan

    2010-03-15

    We compute trivariate probability distributions in the landscape, scanning simultaneously over the cosmological constant, the primordial density contrast, and spatial curvature. We consider two different measures for regulating the divergences of eternal inflation, and three different models for observers. In one model, observers are assumed to arise in proportion to the entropy produced by stars; in the others, they arise at a fixed time (5 or 10×10⁹ years) after star formation. The star formation rate, which underlies all our observer models, depends sensitively on the three scanning parameters. We employ a recently developed model of star formation in the multiverse, a considerable refinement over previous treatments of the astrophysical and cosmological properties of different pocket universes. For each combination of observer model and measure, we display all single and bivariate probability distributions, both with the remaining parameter(s) held fixed and marginalized. Our results depend only weakly on the observer model but more strongly on the measure. Using the causal diamond measure, the observed parameter values (or bounds) lie within the central 2σ of nearly all probability distributions we compute, and always within 3σ. This success is encouraging and rather nontrivial, considering the large size and dimension of the parameter space. The causal patch measure gives similar results as long as curvature is negligible. If curvature dominates, the causal patch leads to a novel runaway: it prefers a negative value of the cosmological constant, with the smallest magnitude available in the landscape.

  18. Probability function of breaking-limited surface elevation. [wind generated waves of ocean

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  19. High-Density Near-Field Optical Disc Recording

    NASA Astrophysics Data System (ADS)

    Shinoda, Masataka; Saito, Kimihiro; Ishimoto, Tsutomu; Kondo, Takao; Nakaoki, Ariyoshi; Ide, Naoki; Furuki, Motohiro; Takeda, Minoru; Akiyama, Yuji; Shimouma, Takashi; Yamamoto, Masanobu

    2005-05-01

    We developed a high-density near-field optical recording disc system using a solid immersion lens. The near-field optical pick-up consists of a solid immersion lens with a numerical aperture of 1.84. The laser wavelength for recording is 405 nm. In order to realize the near-field optical recording disc, we used a phase-change recording medium and a molded polycarbonate substrate. A clear eye pattern of 112 GB capacity with 160 nm track pitch and 50 nm bit length was observed. The equivalent areal density is 80.6 Gbit/in². The bottom bit error rate for a 3-track write was 4.5×10⁻⁵. The readout power margin and the recording power margin were ±30.4% and ±11.2%, respectively.

  20. Ecological and evolutionary processes at expanding range margins.

    PubMed

    Thomas, C D; Bodsworth, E J; Wilson, R J; Simmons, A D; Davies, Z G; Musche, M; Conradt, L

    2001-05-31

    Many animals are regarded as relatively sedentary and specialized in marginal parts of their geographical distributions. They are expected to be slow at colonizing new habitats. Despite this, the cool margins of many species' distributions have expanded rapidly in association with recent climate warming. We examined four insect species that have expanded their geographical ranges in Britain over the past 20 years. Here we report that two butterfly species have increased the variety of habitat types that they can colonize, and that two bush cricket species show increased fractions of longer-winged (dispersive) individuals in recently founded populations. Both ecological and evolutionary processes are probably responsible for these changes. Increased habitat breadth and dispersal tendencies have resulted in about 3- to 15-fold increases in expansion rates, allowing these insects to cross habitat disjunctions that would have represented major or complete barriers to dispersal before the expansions started. The emergence of dispersive phenotypes will increase the speed at which species invade new environments, and probably underlies the responses of many species to both past and future climate change.

  1. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
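
    As a concrete, purely illustrative sketch of the order-statistics idea described above: map a sample through a trial CDF and compare the result with the expected uniform order statistics. The data, the deliberately wrong trial CDF, and the function name below are invented; this is not the authors' implementation.

        import numpy as np

        def scaled_quantile_residuals(sample, trial_cdf):
            # Probability integral transform: if trial_cdf is the true CDF,
            # the sorted transformed values track the uniform order statistics,
            # whose k-th value follows Beta(k, n+1-k) with mean k/(n+1).
            u = np.sort(trial_cdf(np.asarray(sample)))
            n = len(u)
            k = np.arange(1, n + 1)
            mean = k / (n + 1.0)
            std = np.sqrt(mean * (1 - mean) / (n + 2.0))
            return (u - mean) / std   # large residuals flag under-/over-fitting

        rng = np.random.default_rng(0)
        data = rng.normal(2.0, 1.0, size=500)
        # Score a deliberately wrong exponential trial CDF against normal data.
        resid = scaled_quantile_residuals(data, lambda x: 1 - np.exp(-np.clip(x, 0, None) / 2))
        print("max |scaled residual|:", np.abs(resid).max())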

  2. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  3. Mechanistic modelling of Middle Eocene atmospheric carbon dioxide using fossil plant material

    NASA Astrophysics Data System (ADS)

    Grein, Michaela; Roth-Nebelsick, Anita; Wilde, Volker; Konrad, Wilfried; Utescher, Torsten

    2010-05-01

    Various proxies (such as pedogenic carbonates, boron isotopes or phytoplankton) and geochemical models were applied in order to reconstruct palaeoatmospheric carbon dioxide, partially providing conflicting results. Another promising proxy is the frequency of stomata (pores on the leaf surface used for gaseous exchange). In this project, fossil plant material from the Messel Pit (Hesse, Germany) is used to reconstruct atmospheric carbon dioxide concentration in the Middle Eocene by analyzing stomatal density. We applied the novel mechanistic-theoretical approach of Konrad et al. (2008), which provides a quantitative derivation of the stomatal density response (number of stomata per leaf area) to varying atmospheric carbon dioxide concentration. The model couples 1) C3-photosynthesis, 2) the process of diffusion and 3) an optimisation principle providing maximum photosynthesis (via carbon dioxide uptake) and minimum water loss (via stomatal transpiration). These three sub-models also include data on the palaeoenvironment (temperature, water availability, wind velocity, atmospheric humidity, precipitation) and the anatomy of leaf and stoma (depth, length and width of the stomatal pore, thickness of assimilation tissue, leaf length). In order to calculate curves of stomatal density as a function of atmospheric carbon dioxide concentration, various biochemical parameters have to be borrowed from extant representatives. The necessary palaeoclimate data are reconstructed from the whole Messel flora using Leaf Margin Analysis (LMA) and the Coexistence Approach (CA). In order to obtain a significant result, we selected three species from which a large number of well-preserved leaves are available (at least 20 leaves per species). Palaeoclimate calculations for the Middle Eocene Messel Pit indicate a warm and humid climate with mean annual temperature of approximately 22°C, up to 2540 mm mean annual precipitation and the absence of extended periods of drought. Mean relative air humidity was probably rather high, up to 77%. The combined results of the three selected plant taxa indicate values for atmospheric carbon dioxide concentration between 700 and 1100 ppm (probably about 900 ppm). Reference: Konrad, W., Roth-Nebelsick, A., Grein, M. (2008). Modelling of stomatal density response to atmospheric CO2. Journal of Theoretical Biology 253(4): 638-658.

  4. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  5. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
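
    A minimal numerical illustration of the change-of-variables theorem invoked above (a toy example of my own, not the paper's): a density that is uniform in x is no longer uniform after the nonlinear reparameterization y = x², since p_Y(y) = p_X(x(y)) |dx/dy| = 1/(2√y).

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.uniform(0.0, 1.0, size=200_000)   # "uniform admissible region" in x
        y = x**2                                  # nonlinear change of state space

        # Compare a histogram of y with the analytic transformed density 1/(2*sqrt(y)).
        hist, edges = np.histogram(y, bins=50, range=(0, 1), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        ygrid = np.linspace(0.05, 0.95, 10)
        print(np.round(np.interp(ygrid, centers, hist), 2))   # empirical density
        print(np.round(1.0 / (2.0 * np.sqrt(ygrid)), 2))      # analytic density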

  6. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood) that is the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models against the selection of over-fitted ones by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
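
    The importance-sampling identity underlying such evidence estimators fits in a few lines. The sketch below is a deliberately simple stand-in for GMIS: a one-component "mixture" proposal and a conjugate toy model with known evidence, not the estimator itself.

        import numpy as np
        from scipy import stats

        y = 0.7                               # one datum; prior N(0,1), likelihood N(theta,1)
        prior = stats.norm(0.0, 1.0)

        def log_post_unnorm(t):               # log prior + log likelihood
            return prior.logpdf(t) + stats.norm(t, 1.0).logpdf(y)

        # Stand-ins for MCMC posterior samples (here the posterior is N(y/2, 1/2) exactly).
        rng = np.random.default_rng(2)
        samples = rng.normal(y / 2, np.sqrt(0.5), size=5000)

        # Fit a simple Gaussian proposal q to the samples (a one-component "mixture").
        q = stats.norm(samples.mean(), samples.std())

        # Importance sampling: Z = E_q[prior * likelihood / q].
        t = q.rvs(size=20_000, random_state=rng)
        Z_hat = np.exp(log_post_unnorm(t) - q.logpdf(t)).mean()
        print(Z_hat, stats.norm(0, np.sqrt(2)).pdf(y))   # estimate vs exact evidence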

  7. The effects of particle size, shape, density and flow characteristics on particle margination to vascular walls in cardiovascular diseases.

    PubMed

    Ta, Hang T; Truong, Nghia P; Whittaker, Andrew K; Davis, Thomas P; Peter, Karlheinz

    2018-01-01

    Vascular-targeted drug delivery is a promising approach for the treatment of atherosclerosis, due to the vast involvement of endothelium in the initiation and growth of plaque, a characteristic of atherosclerosis. One of the major challenges in carrier design for targeting cardiovascular diseases (CVD) is that carriers must be able to navigate the circulation system and efficiently marginate to the endothelium in order to interact with the target receptors. Areas covered: This review draws on studies that have focused on the role of particle size, shape, and density (along with flow hemodynamics and hemorheology) on the localization of the particles to activated endothelial cell surfaces and vascular walls under different flow conditions, especially those relevant to atherosclerosis. Expert opinion: Generally, the size, shape, and density of a particle affect its adhesion to vascular walls synergistically, and these three factors should be considered simultaneously when designing an optimal carrier for targeting CVD. Available preliminary data should encourage more studies to be conducted to investigate the use of nano-constructs, characterized by a sub-micrometer size, a non-spherical shape, and a high material density to maximize vascular wall margination and minimize capillary entrapment, as carriers for targeting CVD.

  8. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-07

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
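
    A hedged sketch of the pipeline described above, using a kernel density estimate of the joint distribution of a single signed-distance feature and dose; the voxel data are synthetic and all numbers are arbitrary, so this illustrates the structure of the method rather than the authors' implementation.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # Training voxels: signed distance to target r, delivered dose d (synthetic).
        r = rng.uniform(0, 30, 2000)
        d = 60.0 * np.exp(-r / 10.0) + rng.normal(0, 2.0, 2000)
        kde = stats.gaussian_kde(np.vstack([r, d]))   # joint density p(r, d)

        dose_grid = np.linspace(0, 70, 100)

        def dose_cdf_given_r(r0):
            # p(d | r0) is proportional to the joint density along the slice r = r0.
            p = kde(np.vstack([np.full_like(dose_grid, r0), dose_grid]))
            return np.cumsum(p / p.sum())

        # New patient: marginalize the conditional over that patient's r-distribution,
        # then convert the dose CDF into a cumulative dose-volume histogram.
        r_new = rng.uniform(2, 12, 200)   # organ-voxel distances for the new patient
        cdf = np.mean([dose_cdf_given_r(ri) for ri in r_new], axis=0)
        dvh = 1.0 - cdf                   # volume fraction receiving at least each dose
        print(np.round(dvh[::20], 3))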

  9. Corneal inflammatory events with daily silicone hydrogel lens wear.

    PubMed

    Szczotka-Flynn, Loretta; Jiang, Ying; Raghupathy, Sangeetha; Bielefeld, Roger A; Garvey, Matthew T; Jacobs, Michael R; Kern, Jami; Debanne, Sara M

    2014-01-01

    This study aimed to determine the probability and risk factors for developing a corneal inflammatory event (CIE) during daily wear of lotrafilcon A silicone hydrogel contact lenses. Eligible participants (n = 218) were fit with lotrafilcon A lenses for daily wear and followed up for 12 months. Participants were randomized to either a polyhexamethylene biguanide-preserved multipurpose solution or a one-step peroxide disinfection system. The main exposures of interest were bacterial contamination of lenses, cases, lid margins, and ocular surface. Kaplan-Meier (KM) plots were used to estimate the cumulative unadjusted probability of remaining free from a CIE, and multivariate Cox proportional hazards regression was used to model the hazard of experiencing a CIE. The KM unadjusted cumulative probability of remaining free from a CIE for both lens care groups combined was 92.3% (95% confidence interval [CI], 88.1 to 96.5%). There was one participant with microbial keratitis, five participants with asymptomatic infiltrates, and seven participants with contact lens peripheral ulcers, providing KM survival estimates of 92.8% (95% CI, 88.6 to 96.9%) and 98.1% (95% CI, 95.8 to 100.0%) for remaining free from noninfectious and symptomatic CIEs, respectively. The presence of substantial (>100 colony-forming units) coagulase-negative staphylococci bioburden on lid margins was associated with about a five-fold increased risk for the development of a CIE (p = 0.04). The probability of experiencing a CIE during daily wear of lotrafilcon A contact lenses is low, and symptomatic CIEs are rare. Patient factors, such as high levels of bacterial bioburden on lid margins, contribute to the development of noninfectious CIEs during daily wear of silicone hydrogel lenses.
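
    For readers unfamiliar with the Kaplan-Meier estimates quoted above, a hand-rolled version is sketched below; the follow-up times and events are invented, not the study's data.

        import numpy as np

        def kaplan_meier(time, event):
            # time: follow-up (months); event: 1 = CIE observed, 0 = censored.
            order = np.argsort(time)
            time, event = np.asarray(time)[order], np.asarray(event)[order]
            at_risk, s, curve = len(time), 1.0, []
            for t, e in zip(time, event):
                if e:
                    s *= (at_risk - 1) / at_risk   # step down at each observed event
                curve.append((t, s))
                at_risk -= 1   # events and censorings both leave the risk set
            return curve

        # Hypothetical cohort: 10 wearers, CIEs at months 3 and 7, rest censored at 12.
        times = [3, 7, 12, 12, 12, 12, 12, 12, 12, 12]
        events = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
        print(kaplan_meier(times, events)[-1])   # (time, event-free probability)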

  10. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  11. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    …and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum… …[in] particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output… …an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for…

  12. Deep structure of the Mid-Norwegian continental margin (the Vøring and Møre basins) according to 3-D density and magnetic modelling

    NASA Astrophysics Data System (ADS)

    Maystrenko, Yuriy Petrovich; Gernigon, Laurent; Nasuti, Aziz; Olesen, Odleiv

    2018-03-01

    A lithosphere-scale 3-D density/magnetic structural model of the Møre and Vøring segments of the Mid-Norwegian continental margin and the adjacent areas of the Norwegian mainland has been constructed by using both published, publicly available data sets and confidential data, validated by the 3-D density and magnetic modelling. The obtained Moho topography clearly correlates with the major tectonic units of the study area where a deep Moho corresponds to the base of the Precambrian continental crust and the shallower one is located in close proximity to the younger oceanic lithospheric domain. The 3-D density modelling agrees with previous studies which indicate the presence of a high-density/high-velocity lower-crustal layer beneath the Mid-Norwegian continental margin. The broad Jan Mayen Corridor gravity low is partially related to the decreasing density of the sedimentary layers within the Jan Mayen Corridor and also has to be considered in relation to a possible low-density composition- and/or temperature-related zone in the lithospheric mantle. According to the results of the 3-D magnetic modelling, the absence of a strong magnetic anomaly over the Utgard High indicates that the uplifted crystalline rocks are not so magnetic there, supporting a suggestion that the entire crystalline crust has a low magnetization beneath the greater part of the Vøring Basin and the northern part of the Møre Basin. On the contrary, the crystalline crust is much more magnetic beneath the Trøndelag Platform, the southern part of the Møre Basin and within the mainland, reaching a culmination at the Frøya High where the most intensive magnetic anomaly is observed within the study area.

  13. Stochastic mechanics of reciprocal diffusions

    NASA Astrophysics Data System (ADS)

    Levy, Bernard C.; Krener, Arthur J.

    1996-02-01

    The dynamics and kinematics of reciprocal diffusions were examined in a previous paper [J. Math. Phys. 34, 1846 (1993)], where it was shown that reciprocal diffusions admit a chain of conservation laws, which close after the first two laws for two disjoint subclasses of reciprocal diffusions, the Markov and quantum diffusions. For the case of quantum diffusions, the conservation laws are equivalent to Schrödinger's equation. The Markov diffusions were employed by Schrödinger [Sitzungsber. Preuss. Akad. Wiss. Phys. Math Kl. 144 (1931); Ann. Inst. H. Poincaré 2, 269 (1932)], Nelson [Dynamical Theories of Brownian Motion (Princeton University, Princeton, NJ, 1967); Quantum Fluctuations (Princeton University, Princeton, NJ, 1985)], and other researchers to develop stochastic formulations of quantum mechanics, called stochastic mechanics. We propose here an alternative version of stochastic mechanics based on quantum diffusions. A procedure is presented for constructing the quantum diffusion associated to a given wave function. It is shown that quantum diffusions satisfy the uncertainty principle, and have a locality property, whereby given two dynamically uncoupled but statistically correlated particles, the marginal statistics of each particle depend only on the local fields to which the particle is subjected. However, like Wigner's joint probability distribution for the position and momentum of a particle, the finite joint probability densities of quantum diffusions may take negative values.

  14. Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit

    2016-07-01

    A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.

  15. Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandhu, Rimple; Poirel, Dominique; Pettit, Chris

    2016-07-01

    A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid–structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib–Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.

  16. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    NASA Astrophysics Data System (ADS)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests. The area under the ROC curve (AUC) is often used as a measure of the overall performance of the diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while taking the dependence structure into account using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations. NPI uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula, in turn, is a well-known statistical concept for modelling dependence between random variables: it is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely the maximum likelihood estimator (MLE). We investigate the performance of the proposed method via data sets from the literature and discuss results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
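
    A minimal sketch of the parametric-copula MLE step described above, fitting a one-parameter Gaussian copula to pseudo-observations; the "test scores" are synthetic and the Gaussian family is chosen only for concreteness, so this is an illustration of the technique rather than the authors' code.

        import numpy as np
        from scipy import stats, optimize

        rng = np.random.default_rng(3)
        # Synthetic stand-ins for two correlated diagnostic test results.
        z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=400)
        x1, x2 = np.exp(z[:, 0]), z[:, 1] + 5.0

        def pseudo(x):   # pseudo-observations: ranks scaled into (0, 1)
            return stats.rankdata(x) / (len(x) + 1.0)

        a, b = stats.norm.ppf(pseudo(x1)), stats.norm.ppf(pseudo(x2))

        def neg_loglik(rho):   # negative log-density of the bivariate Gaussian copula
            ll = (-0.5 * np.log(1 - rho**2)
                  - (rho**2 * (a**2 + b**2) - 2 * rho * a * b) / (2 * (1 - rho**2)))
            return -np.sum(ll)

        res = optimize.minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded")
        print("copula MLE rho:", round(res.x, 3))   # should land near 0.6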

  17. Fish assemblage, density, and growth in lateral habitats within natural and regulated sections of Washington's Elwha River prior to dam removal

    USGS Publications Warehouse

    Connolly, P.J.; Brenkman, S.J.

    2008-01-01

    We characterized seasonal fish assemblage, relative density, and growth in river margins above and between two Elwha River dams scheduled for removal. Fish assemblage and relative density differed in the lateral habitats of the middle-regulated and upper-unregulated sections of the Elwha River. Rainbow trout was the numerically dominant salmonid in both sections, with bull trout present in low numbers. Sculpin were common in the middle section, but not detected in the upper section. In 2004, mean length and biomass of age-0 rainbow trout were significantly smaller in the middle section than in the upper section by the end of the growing season (September). In 2005, an earlier emergence of rainbow trout in the middle section (July) compared to the upper section (August) corresponded with warmer water temperatures in the middle section. Despite lower growth, the margins of mainstem units in the middle section supported higher mean areal densities and biomass of age-0 rainbow trout than the up-per section. These results suggest that growth performance of age-0 rainbow trout was lower in the middle section than in the upper section, which could have been a density-dependent response, or a result of poor food production in the sediment-starved regulated section, or both. Based on our findings, we believe that seasonal sampling of river margins within reference reaches is a cost effective and repeatable method for detection of biologically important short- and long-term changes in emergence timing, density, and growth of rainbow trout before and after dam removals in the Elwha River.

  18. Evaluating detection probabilities for American marten in the Black Hills, South Dakota

    USGS Publications Warehouse

    Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.

    2007-01-01

    Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km²) and low (≤1 marten/10.2 km²) densities within eight 10.2-km² quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
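
    The practical consequence of p < 1 is easy to make concrete: with per-survey detection probability p and n independent surveys of an occupied quadrat, the probability of missing the species entirely is (1 − p)^n. A small worked example with the low-density estimate above:

        p_low = 0.333   # estimated per-survey detection probability, low-density quadrats
        for n in (1, 2, 3, 4):
            print(f"{n} survey(s): P(miss an occupied quadrat) = {(1 - p_low) ** n:.2f}")
        # A single survey misses an occupied low-density quadrat about 67% of the
        # time, which is why repeated site-survey data are recommended above.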

  19. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation are retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probability that each is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
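
    The reweighting idea is compact enough to sketch: draw once from a mixture importance density that covers all plausible models, evaluate the (expensive) response once, then reweight the same samples under each candidate density. Everything below is a toy stand-in, not the plate-buckling application.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        models = [stats.norm(0.0, 1.0), stats.lognorm(s=0.4)]   # two plausible densities
        model_prob = np.array([0.6, 0.4])   # e.g. from multimodel (information-weight) inference

        def q_pdf(x):   # mixture importance density representative of all models
            return sum(w * m.pdf(x) for w, m in zip(model_prob, models))

        # Draw once from the mixture and evaluate the expensive response g once.
        x = np.concatenate([m.rvs(size=int(5000 * w), random_state=rng)
                            for m, w in zip(models, model_prob)])
        g = x**2

        # Reweight the same samples and responses under each candidate model.
        for m in models:
            w = m.pdf(x) / q_pdf(x)
            print(m.dist.name, "E[g] =", round(np.average(g, weights=w), 3))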

  20. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of nonstationary random processes possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
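
    Such envelope crossing rates typically enter a first-excursion probability through the standard Poisson-crossing approximation P(exceed level a in [0, T]) ≈ 1 − exp(−∫₀ᵀ ν_a(t) dt). A minimal sketch with an assumed (invented) nonstationary crossing rate:

        import numpy as np

        def first_excursion_prob(nu, T, n=2000):
            t = np.linspace(0.0, T, n)
            dt = t[1] - t[0]
            return 1.0 - np.exp(-np.sum(nu(t)) * dt)   # Poisson-crossing approximation

        nu = lambda t: 0.02 * np.exp(-0.1 * t)   # assumed decaying up-crossing rate (1/s)
        print(first_excursion_prob(nu, T=30.0))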

  1. Joint Bayesian Estimation of Quasar Continua and the Lyα Forest Flux Probability Distribution Function

    NASA Astrophysics Data System (ADS)

    Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan

    2017-08-01

    We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ -1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃ 7 % and ≃ 10 % at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ -1 of the IGM temperature-density relation with a precision of +/- 8.6 % at z = 3 and +/- 6.1 % at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.

  2. Modelling Spatial Dependence Structures Between Climate Variables by Combining Mixture Models with Copula Models

    NASA Astrophysics Data System (ADS)

    Khan, F.; Pilz, J.; Spöck, G.

    2017-12-01

    Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. This further affects the hydrological conditions and consequently will provide misleading results if these structures are not taken into account properly. In this study we modeled the spatial dependence structure between climate variables including maximum, minimum temperature and precipitation in the Monsoon dominated region of Pakistan. For temperature, six, and for precipitation four meteorological stations have been considered. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. For temperature, multivariate mixture normal distributions and for precipitation gamma distributions have been used as marginals under the copula models. A comparison was made between C-Vine, D-Vine and Student t-copula by observational and simulated spatial dependence structure to choose an appropriate model for the climate data. The results show that all copula models performed well, however, there are subtle differences in their performances. The copula models captured the patterns of spatial dependence structures between climate variables at multiple meteorological sites, however, the t-copula showed poor performance in reproducing the dependence structure with respect to magnitude. It was observed that important statistics of observed data have been closely approximated except of maximum values for temperature and minimum values for minimum temperature. Probability density functions of simulated data closely follow the probability density functions of observational data for all variables. C and D-Vines are better tools when it comes to modelling the dependence between variables, however, Student t-copulas compete closely for precipitation. Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.

  3. On the uncertainty in single molecule fluorescent lifetime and energy emission measurements

    NASA Technical Reports Server (NTRS)

    Brown, Emery N.; Zhang, Zhenhua; Mccollom, Alex D.

    1995-01-01

    Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large; however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67% of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML and Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML and Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.

  4. On the Uncertainty in Single Molecule Fluorescent Lifetime and Energy Emission Measurements

    NASA Technical Reports Server (NTRS)

    Brown, Emery N.; Zhang, Zhenhua; McCollom, Alex D.

    1996-01-01

    Time-correlated single photon counting has recently been combined with mode-locked picosecond pulsed excitation to measure the fluorescent lifetimes and energy emissions of single molecules in a flow stream. Maximum likelihood (ML) and least squares methods agree and are optimal when the number of detected photons is large; however, in single molecule fluorescence experiments the number of detected photons can be less than 20, 67 percent of those can be noise, and the detection time is restricted to 10 nanoseconds. Under the assumption that the photon signal and background noise are two independent inhomogeneous Poisson processes, we derive the exact joint arrival time probability density of the photons collected in a single counting experiment performed in the presence of background noise. The model obviates the need to bin experimental data for analysis, and makes it possible to analyze formally the effect of background noise on the photon detection experiment using both ML and Bayesian methods. For both methods we derive the joint and marginal probability densities of the fluorescent lifetime and fluorescent emission. The ML and Bayesian methods are compared in an analysis of simulated single molecule fluorescence experiments of Rhodamine 110 using different combinations of expected background noise and expected fluorescence emission. While both the ML and Bayesian procedures perform well for analyzing fluorescence emissions, the Bayesian methods provide more realistic measures of uncertainty in the fluorescent lifetimes. The Bayesian methods would be especially useful for measuring uncertainty in fluorescent lifetime estimates in current single molecule flow stream experiments where the expected fluorescence emission is low. Both the ML and Bayesian algorithms can be automated for applications in molecular biology.
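
    A hedged sketch of the maximum-likelihood step for this kind of model: photon arrival times in a 10 ns window drawn from a truncated exponential decay plus a uniform background, with the lifetime and signal fraction fit jointly. All parameter values are invented for illustration; this is not the authors' algorithm.

        import numpy as np
        from scipy import optimize

        T = 10.0   # detection window, ns
        rng = np.random.default_rng(5)
        tau_true, frac_signal, n = 2.0, 0.4, 20   # few detected photons, as in the text
        is_sig = rng.random(n) < frac_signal
        # Wrapping an exponential mod T yields exactly the truncated-exponential law.
        t = np.where(is_sig, rng.exponential(tau_true, n) % T, rng.uniform(0, T, n))

        def neg_loglik(params):
            tau, f = params
            sig = np.exp(-t / tau) / (tau * (1.0 - np.exp(-T / tau)))   # truncated exp pdf
            return -np.sum(np.log(f * sig + (1.0 - f) / T))             # + uniform background

        res = optimize.minimize(neg_loglik, x0=[3.0, 0.5],
                                bounds=[(0.1, 50.0), (0.01, 0.99)])
        print("ML estimates (lifetime, signal fraction):", res.x)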

  5. Seismic potential for large and great interplate earthquakes along the Chilean and Southern Peruvian Margins of South America: A quantitative reappraisal

    NASA Astrophysics Data System (ADS)

    Nishenko, Stuart P.

    1985-04-01

    The seismic potential of the Chilean and southern Peruvian margins of South America is reevaluated to delineate those areas or segments of the margin that may be expected to experience large or great interplate earthquakes within the next 20 years (1984-2004). Long-term estimates of seismic potential (or the conditional probability of recurrence within a specified period of time) are based on (1) statistical analysis of historic repeat time data using Weibull distributions and (2) deterministic estimates of recurrence times based on the time-predictable model of earthquake recurrence. Both methods emphasize the periodic nature of large and great earthquake recurrence, and are compared with estimates of probability based on the assumption of Poisson-type behavior. The estimates of seismic potential presented in this study are long-term forecasts only, as the temporal resolution (or standard deviation) of both methods is taken to range from ±15% to ±25% of the average or estimated repeat time. At present, the Valparaiso region of central Chile (32°-35°S) has a high potential or probability of recurrence in the next 20 years. Coseismic uplift data associated with previous shocks in 1822 and 1906 suggest that this area may have already started to rerupture in 1971-1973. Average repeat times also suggest this area is due for a great shock within the next 20 years. Flanking segments of the Chilean margin, Coquimbo-Illapel (30°-32°S) and Talca-Concepcion (35°-38°S), presently have poorly constrained but possibly quite high potentials for a series of large or great shocks within the next 20 years. In contrast, the rupture zone of the great 1960 earthquake (37°-46°S) has the lowest potential along the margin and is not expected to rerupture in a great earthquake within the next 100 years. In the north, the seismic potentials of the Mollendo-Arica (17°-18°S) and Arica-Antofagasta (18°-24°S) segments (which last ruptured during great earthquakes in 1868 and 1877) are also high, but poorly constrained.
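
    The Weibull-based conditional probability used in such forecasts reduces to a one-liner, P(T ≤ t + w | T > t) = 1 − S(t + w)/S(t), where S is the Weibull survival function. The shape and scale values below are hypothetical, not Nishenko's estimates.

        import numpy as np

        def conditional_prob(t_elapsed, window, shape, scale):
            S = lambda t: np.exp(-(t / scale) ** shape)   # Weibull survival function
            return 1.0 - S(t_elapsed + window) / S(t_elapsed)

        # Hypothetical segment: ~95-yr characteristic repeat time, 78 yr since the
        # last great earthquake, 20-yr forecast window (values are illustrative only).
        print(conditional_prob(t_elapsed=78.0, window=20.0, shape=3.0, scale=95.0))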

  6. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and the probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low a density) for the two potentials, but with a smaller value of the constant A than that predicted by the DFT theory.
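
    A small numerical check of what a Gaussian force distribution implies for the force magnitude, assuming (my reading, not stated explicitly above) that W(F) denotes the distribution of |F| in three dimensions, W(F) = 4πF²P(F). The value of A is arbitrary here, whereas in the paper it depends on density, temperature and the pair potential's Fourier transform.

        import numpy as np

        A = 0.5                                       # arbitrary for this check
        F = np.linspace(0.0, 10.0, 2001)
        P = (A / np.pi) ** 1.5 * np.exp(-A * F**2)    # normalized Gaussian P(F) in 3-D
        W = 4.0 * np.pi * F**2 * P                    # density of the force magnitude |F|
        print((W * (F[1] - F[0])).sum())              # ≈ 1: W integrates to unity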

  7. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  8. How the fluctuations of water levels affect populations of invasive bivalve Corbicula fluminea (Müller, 1774) in a Neotropical reservoir?

    PubMed

    Paschoal, L R P; Andrade, D P; Darrigran, G

    2015-01-01

    Corbicula fluminea is an invasive bivalve responsible for several environmental and financial problems around the globe. Despite the invasive potential of this species, it suffers certain restrictions in lentic environments due to natural phenomena that significantly affect its population structure (e.g. water column fluctuation and sunlight exposure). The present study addresses how temporal decline of the water level in a Neotropical reservoir and exposure to sunlight affect the population structure of C. fluminea. Samplings were carried out twice in the reservoir of Furnas Hydroelectric Power Station (HPS) (Minas Gerais, Brazil), in 2011 and 2012. Population density, spatial distribution and mean shell length of C. fluminea were estimated for each year after sampling in 51 quadrats (0.0625 m²) placed on three transects at different distances along the reservoir margins (0, 10 and 20 m from a fixed point). We observed a predominance of C. fluminea in both years, with a simultaneous gradual decrease in density and richness of native species in the sampling area. Significant differences in density of C. fluminea were registered at different distances from the margin, and are related to the temporal variability of physical conditions of the sediment and water in these environments. We also registered a trend toward an increase in the density and aggregation of C. fluminea as we moved away from the margin, due to the greater stability of these areas (>10 m). The mean shell length of C. fluminea showed significant differences between the distinct distances from the margin and between the years, as well as for the interaction of these factors (Distances vs. Years). These results were associated with the reproductive and invasive capacity of this species. This study reveals that these temporal events (especially water column fluctuation) may cause alterations in density, spatial distribution and mean shell length of C. fluminea and the composition of the native malacofauna in Neotropical lentic environments.

  9. Appearance of De Geer moraines in southern and western Finland - Implications for reconstructing glacier retreat dynamics

    NASA Astrophysics Data System (ADS)

    Ojala, Antti E. K.

    2016-02-01

    LiDAR digital elevation models (DEMs) from southern and western Finland were investigated to map and discriminate features of De Geer moraines, sparser and more scattered end moraines, and larger end moraine features (i.e., ice-marginal complexes). The results indicate that the occurrence and distribution of De Geer moraines and scattered end moraine ridges in Finland are more widespread than previously suggested. This is probably attributable to the ease of detecting and mapping these features with high-resolution DEMs, indicating the efficiency of LiDAR applications in geological and geomorphological studies. The variable appearance and distribution of moraine ridges in Finland support previous interpretations that no single model is likely to be appropriate for the genesis of De Geer moraines at all localities and for all types of end moraines. De Geer moraine appearances and interdistances probably result from a combination of the general rapidity of ice-margin recession during deglaciation, the proglacial water depth in which they were formed, and local glacier dynamics related to climate and terrain topography. The correlation between the varved clay-based rate of deglaciation and interdistances of distinct and regularly spaced De Geer moraine ridges indicates that the rate of deglaciation is probably involved in the De Geer ridge-forming process, but more thorough comparisons are needed to understand the extent to which De Geer interdistances represent an annual rate of ice-margin decay and the rapidity of regional deglaciation.

  10. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the method of using the geometric mean. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
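
    The thermodynamic (path-sampling) identity behind the method, log Z = ∫₀¹ E_β[log L] dβ with the power posterior p_β ∝ prior × L^β, can be checked on a conjugate toy model where the power posteriors are available in closed form. This is an illustration of the identity, not the groundwater application.

        import numpy as np
        from scipy import stats
        from scipy.integrate import trapezoid

        y, rng = 0.7, np.random.default_rng(6)   # datum; prior N(0,1), likelihood N(theta,1)
        betas = np.linspace(0.0, 1.0, 21)
        means = []
        for b in betas:
            # The power posterior is Gaussian here: precision 1+b, mean b*y/(1+b).
            s = rng.normal(b * y / (1 + b), np.sqrt(1.0 / (1 + b)), size=20_000)
            means.append(stats.norm(s, 1.0).logpdf(y).mean())   # E_beta[log likelihood]

        print("thermodynamic:", trapezoid(means, betas))
        print("exact log Z: ", stats.norm(0, np.sqrt(2)).logpdf(y))   # y ~ N(0, 2)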

  11. Multi-hazard Assessment and Scenario Toolbox (MhAST): A Framework for Analyzing Compounding Effects of Multiple Hazards

    NASA Astrophysics Data System (ADS)

    Sadegh, M.; Moftakhari, H.; AghaKouchak, A.

    2017-12-01

    Many natural hazards are driven by multiple forcing variables, and concurrent/consecutive extreme events significantly increase the risk of infrastructure/system failure. It is common practice to use univariate analysis based upon a perceived ruling driver to estimate design quantiles and/or return periods of extreme events. A multivariate analysis, however, permits modeling the simultaneous occurrence of multiple forcing variables. In this presentation, we introduce the Multi-hazard Assessment and Scenario Toolbox (MhAST), which comprehensively analyzes marginal and joint probability distributions of natural hazards. MhAST also offers a wide range of scenarios of return periods and design levels and their likelihoods. The contribution of this study is four-fold: 1. comprehensive analysis of the marginal and joint probability of multiple drivers through 17 continuous distributions and 26 copulas; 2. multiple scenario analysis of concurrent extremes based upon the most likely joint occurrence, one ruling variable, and weighted random sampling of joint occurrences with similar exceedance probabilities; 3. weighted average scenario analysis based on an expected event; and 4. uncertainty analysis of the most likely joint occurrence scenario using a Bayesian framework.
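
    The compounding effect itself comes down to a joint exceedance probability, P(X > x, Y > y) = 1 − u − v + C(u, v), where u and v are the marginal non-exceedance probabilities and C is the copula. The stand-in below uses a Gaussian copula with an assumed dependence parameter; MhAST fits and compares many copula families rather than fixing one.

        import numpy as np
        from scipy import stats

        rho = 0.5      # assumed dependence between the two hazard drivers
        u = v = 0.99   # marginal non-exceedance probabilities at the design levels
        a, b = stats.norm.ppf([u, v])
        C = stats.multivariate_normal([0, 0], [[1, rho], [rho, 1]]).cdf([a, b])
        print("independent P(X>x, Y>y):", (1 - u) * (1 - v))
        print("copula      P(X>x, Y>y):", 1 - u - v + C)   # dependence inflates the risk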

  12. System Guidelines for EMC Safety-Critical Circuits: Design, Selection, and Margin Demonstration

    NASA Technical Reports Server (NTRS)

    Lawton, R. M.

    1996-01-01

    Demonstration of required safety margins on critical electrical/electronic circuits in large complex systems has become an implementation and cost problem. These margins are the difference between the activation level of the circuit and the electrical noise on the circuit in the actual operating environment. This document discusses the origin of the requirement and gives a detailed process flow for the identification of the system electromagnetic compatibility (EMC) critical circuit list. The process flow discusses the roles of engineering disciplines such as systems engineering, safety, and EMC. Design and analysis guidelines are provided to assist the designer in assuring the system design has a high probability of meeting the margin requirements. Examples of approaches used on actual programs (Skylab and Space Shuttle Solid Rocket Booster) are provided to show how variations of the approach can be used successfully.

  13. Faunal responses to oxygen gradients on the Pakistan margin: A comparison of foraminiferans, macrofauna and megafauna

    NASA Astrophysics Data System (ADS)

    Gooday, A. J.; Levin, L. A.; Aranda da Silva, A.; Bett, B. J.; Cowie, G. L.; Dissard, D.; Gage, J. D.; Hughes, D. J.; Jeffreys, R.; Lamont, P. A.; Larkin, K. E.; Murty, S. J.; Schumacher, S.; Whitcraft, C.; Woulds, C.

    2009-03-01

    The Pakistan Margin is characterised by a strong mid-water oxygen minimum zone (OMZ) that intercepts the seabed at bathyal depths (150-1300 m). We investigated whether faunal abundance and diversity trends were similar among protists (foraminiferans and gromiids), metazoan macrofauna and megafauna along a transect (140-1850 m water depth) across the OMZ during the 2003 intermonsoon (March-May) and late/post-monsoon (August-October) seasons. All groups exhibited some drop in abundance in the OMZ core (250-500 m water depth; O₂: 0.10-0.13 mL L⁻¹ = 4.46-5.80 μM) but to differing degrees. Densities of foraminiferans >63 μm were slightly depressed at 300 m, peaked at 738 m, and were much lower at deeper stations. Foraminiferans >300 μm were the overwhelmingly dominant macrofaunal organisms in the OMZ core. Macrofaunal metazoans reached maximum densities at 140 m depth, with additional peaks at 850, 940 and 1850 m where foraminiferans were less abundant. The polychaete Linopherus sp. was responsible for a macrofaunal biomass peak at 950 m. Apart from large swimming animals (fish and natant decapods), metazoan megafauna were absent between 300 and 900 m (O₂ <0.14-0.15 mL L⁻¹ = 6.25-6.69 μM) but were represented by a huge, ophiuroid-dominated abundance peak at 1000 m (O₂ ~0.15-0.18 mL L⁻¹ = 6.69-8.03 μM). Gromiid protists were confined largely to depths below 1150 m (O₂ >0.2 mL L⁻¹ = 8.92 μM). The progressively deeper abundance peaks for foraminiferans (>63 μm), Linopherus sp. and ophiuroids probably represent lower OMZ boundary edge effects and suggest a link between body size and tolerance of hypoxia. Macro- and megafaunal organisms collected between 800 and 1100 m were dominated by a succession of different taxa, indicating that the lower part of the OMZ is also a region of rapid faunal change. Species diversity was depressed in all groups in the OMZ core, but this was much more pronounced for macrofauna and megafauna than for foraminiferans. Oxygen levels strongly influenced the taxonomic composition of all faunal groups. Calcareous foraminiferans dominated the seasonally and permanently hypoxic sites (136-300 m); agglutinated foraminiferans were relatively more abundant at deeper stations where oxygen concentrations were >0.13 mL L⁻¹ (= 5.80 μM). Polychaetes were the main macrofaunal taxon within the OMZ; calcareous macrofauna and megafauna (molluscs and echinoderms) were rare or absent where oxygen levels were lowest. The rarity of larger animals between 300 and 700 m on the Pakistan Margin, compared with the abundant macrofauna in the OMZ core off Oman, is the most notable contrast between the two sides of the Arabian Sea. This difference probably reflects the slightly higher oxygen levels and better food quality on the western side.

  14. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer-implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  15. Oligocene to Holocene sediment drifts and bottom currents on the slope of Gabon continental margin (west Africa). Consequences for sedimentation and southeast Atlantic upwelling

    NASA Astrophysics Data System (ADS)

    Séranne, Michel; Nzé Abeigne, César-Rostand

    1999-10-01

    Seismic reflection profiles on the slope of the south Gabon continental margin display furrows 2 km wide and some 200 m deep that develop normal to the margin in 500-1500 m water depth. Furrows are characterised by an aggradation/progradation pattern which leads to margin-parallel, northwestward migration of their axes through time. These structures, previously interpreted as turbidity current channels, display the distinctive seismic image and internal organisation of sediment drifts, constructed by the activity of bottom currents. Sediment drifts were initiated above a major Oligocene unconformity, and they developed within an Oligocene to Present megasequence of general progradation of the margin, whilst they are markedly absent from the underlying Late Cretaceous-Eocene aggradation megasequence. The presence of upslope-migrating sediment waves and the northwest migration of the sediment drifts indicate deposition by bottom currents flowing upslope, under the influence of the Coriolis force. Such landwards-directed bottom currents on the slope probably represent coastal upwelling, which has been active along the west Africa margin throughout the Neogene.

  16. Marginal integrity of resin composite restorations restored with PPD initiator-containing resin composite cured by QTH, monowave and polywave LED units.

    PubMed

    Bortolotto, Tissiana; Betancourt, Francisco; Krejci, Ivo

    2016-12-01

    This study evaluated the influence of curing devices on the marginal adaptation of cavities restored with a self-etching adhesive and a hybrid composite containing CQ and PPD initiators. Twenty-four class V cavities (3 groups, n=8) with margins located on enamel and dentin were restored with Clearfil S3 Bond and Clearfil APX PLT, and light-cured with a monowave LED, a multiwave LED or a halogen light-curing unit (LCU). Marginal adaptation was evaluated with SEM before/after thermo-mechanical loading (TML). On enamel, a significantly lower percentage of continuous margins (74.5±12.6) was found in the group cured by the multiwave LED when compared to the monowave LED (87.6±9.5) and the halogen LCU (94.4±9.1). The presence of enamel and composite fractures was significantly higher in the group light-cured with the multiwave LED, probably due to increased material friability resulting from an improved degree of cure. The clinician should be aware that, owing to the distinct activation of the two initiators, marginal quality may be influenced in the long term.

  17. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
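
    To make the canonical form concrete, the sketch below builds a toy two-parameter version of such a distribution, p proportional to exp(-beta*E), and recovers marginals by integrating out the other parameter. The grids, the quadratic error function and the value of beta are invented for illustration and are not taken from the record above.

      # Hedged sketch: canonical (maximum-entropy) conditional distribution on a
      # two-parameter grid; marginals follow by integrating out the other parameter.
      # The error function E and the sensitivity factor beta are illustrative stand-ins.
      import numpy as np

      beta = 4.0                                 # sensitivity factor (assumed value)
      c = np.linspace(0.98, 1.08, 201)           # sound speed ratio grid (hypothetical)
      s = np.linspace(140.0, 160.0, 201)         # source level grid, dB (hypothetical)
      C, S = np.meshgrid(c, s, indexing="ij")

      E = (C - 1.03)**2 / 1e-3 + (S - 151.0)**2 / 4.0   # toy error function
      p = np.exp(-beta * E)
      p /= np.trapz(np.trapz(p, s, axis=1), c)   # normalise the joint density

      p_c = np.trapz(p, s, axis=1)               # marginal for the sound speed ratio
      p_s = np.trapz(p, c, axis=0)               # marginal for the source level
      print(c[np.argmax(p_c)], s[np.argmax(p_s)])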

  18. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent, very thin slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  19. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent, very thin slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
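
    The slicing scheme this record describes can be sketched in a few lines: here a skew-normal cumulative distribution stands in for the generalized skew-symmetric interface profile (the authors' exact parameterization is not reproduced), and the continuous profile is cut into thin constant-density slabs of the kind fed to Parratt's recursion. All numerical values are hypothetical.

      # Hedged sketch: an asymmetric interface modelled by a skew-normal CDF,
      # discretized into thin constant-density slices with sharp interfaces.
      import numpy as np
      from scipy.stats import skewnorm

      z = np.linspace(-50.0, 50.0, 1001)         # depth grid (illustrative units)
      rho_top, rho_bottom = 0.0, 9.5             # densities of the two layers (toy)
      a, loc, scale = 4.0, 0.0, 8.0              # skewness and width (assumed values)

      profile = rho_top + (rho_bottom - rho_top) * skewnorm.cdf(z, a, loc, scale)

      dz = 1.0                                   # slice thickness
      edges = np.arange(z[0], z[-1], dz)
      slab_rho = [profile[(z >= lo) & (z < lo + dz)].mean() for lo in edges]
      # slab_rho and dz would feed a reflectivity code layer by layer.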

  20. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non-Boolean structure and non-positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken, and a variant interpretation of wave functions based on photo-detection physics is proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well-tested concepts and technology.

  1. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non-Boolean structure and non-positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken, and a variant interpretation of wave functions based on photo-detection physics is proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well-tested concepts and technology.

  2. Developing Probabilistic Safety Performance Margins for Unknown and Underappreciated Risks

    NASA Technical Reports Server (NTRS)

    Benjamin, Allan; Dezfuli, Homayoon; Everett, Chris

    2015-01-01

    Probabilistic safety requirements currently formulated or proposed for space systems, nuclear reactor systems, nuclear weapon systems, and other types of systems that have a low-probability potential for high-consequence accidents depend on showing that the probability of such accidents is below a specified safety threshold or goal. Verification of compliance depends heavily upon synthetic modeling techniques such as PRA. To determine whether or not a system meets its probabilistic requirements, it is necessary to consider whether there are significant risks that are not fully considered in the PRA either because they are not known at the time or because their importance is not fully understood. The ultimate objective is to establish a reasonable margin to account for the difference between known risks and actual risks in attempting to validate compliance with a probabilistic safety threshold or goal. In this paper, we examine data accumulated over the past 60 years from the space program, from nuclear reactor experience, from aircraft systems, and from human reliability experience to formulate guidelines for estimating probabilistic margins to account for risks that are initially unknown or underappreciated. The formulation includes a review of the safety literature to identify the principal causes of such risks.

  3. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
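
    A short sketch may help fix ideas. For two known multivariate normal classes, the Fisher-style direction below reduces the measurements to one dimension and evaluates the resulting probability of misclassification; this is an illustrative special case with toy numbers, not the LFSPMC search procedure itself.

      # Hedged sketch: project two Gaussian classes onto one axis and evaluate
      # the one-dimensional probability of misclassification (equal priors assumed).
      import numpy as np
      from scipy.stats import norm

      mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])   # class means (toy)
      S1 = np.array([[1.0, 0.3], [0.3, 1.0]])                 # class covariances
      S2 = np.array([[1.5, -0.2], [-0.2, 0.8]])

      b = np.linalg.solve(S1 + S2, mu2 - mu1)                 # candidate direction
      m1, m2 = b @ mu1, b @ mu2                               # transformed means
      v1, v2 = b @ S1 @ b, b @ S2 @ b                         # transformed variances

      # heuristic threshold between the two transformed densities
      t = (m2 * np.sqrt(v1) + m1 * np.sqrt(v2)) / (np.sqrt(v1) + np.sqrt(v2))
      p_err = 0.5 * (1 - norm.cdf((t - m1) / np.sqrt(v1))) \
            + 0.5 * norm.cdf((t - m2) / np.sqrt(v2))
      print(f"1-D probability of misclassification: {p_err:.4f}")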

  4. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, at marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of weak-lensing three-point statistics to break this degeneracy is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
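
    The core move, Gaussianizing a skewed posterior so that a Fisher-style Gaussian summary becomes accurate, can be sketched on a one-dimensional toy sample; the gamma draws and the 95% interval below are purely illustrative and unrelated to the survey forecasts of the record.

      # Hedged sketch: Box-Cox-transform a skewed sample, summarize it as a
      # Gaussian in the transformed space, and map credible bounds back.
      import numpy as np
      from scipy.stats import boxcox
      from scipy.special import inv_boxcox

      rng = np.random.default_rng(1)
      samples = rng.gamma(shape=2.0, scale=0.5, size=20000)  # skewed "posterior"

      y, lam = boxcox(samples)              # fitted Box-Cox parameter lambda
      mu, sigma = y.mean(), y.std()         # Gaussian summary in transformed space

      lo = inv_boxcox(mu - 1.96 * sigma, lam)
      hi = inv_boxcox(mu + 1.96 * sigma, lam)
      print(f"lambda={lam:.3f}, approximate 95% interval: [{lo:.3f}, {hi:.3f}]")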

  5. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  6. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
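
    The distinction between classic and generalized MaxEnt can be made concrete with the standard constrained-die example; the constraint value, its assumed Gaussian uncertainty and the ensemble size below are invented for illustration.

      # Hedged sketch: classic MaxEnt point probabilities for a die with a mean
      # constraint, then a density over those probabilities obtained by propagating
      # Gaussian uncertainty on the constraint value through repeated solutions.
      import numpy as np
      from scipy.optimize import brentq

      faces = np.arange(1, 7)

      def maxent_probs(mean_target):
          # p_i proportional to exp(lam * i); solve for lam matching the mean
          def mean_gap(lam):
              p = np.exp(lam * faces)
              p /= p.sum()
              return p @ faces - mean_target
          lam = brentq(mean_gap, -10.0, 10.0)
          p = np.exp(lam * faces)
          return p / p.sum()

      p_point = maxent_probs(4.5)        # classic: treat the observed mean as exact

      rng = np.random.default_rng(4)     # generalized: mean ~ Gaussian(4.5, 0.1)
      ensemble = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.1, 500)])
      print("point estimate:", p_point.round(3))
      print("p_6 mean/std over ensemble:", ensemble[:, 5].mean(), ensemble[:, 5].std())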

  7. Generalized Tumor Dose for Treatment Planning Decision Support

    NASA Astrophysics Data System (ADS)

    Zuniga, Areli A.

    Modern radiation therapy techniques allow for improved target conformity and normal tissue sparing. These highly conformal treatment plans have allowed dose escalation techniques, increasing the probability of tumor control. At the same time, this conformity has introduced inhomogeneous dose distributions, making characterization of the delivered dose more difficult. The concept of equivalent uniform dose (EUD) characterizes a heterogeneous dose distribution within irradiated structures as a single value and has been used in biologically based treatment planning (BBTP); however, there are no substantial validation studies on clinical outcome data supporting EUD's use, and it has therefore not been widely adopted as decision-making support. These highly conformal treatment plans have also introduced the need for safety margins around the target volume. These margins are designed to minimize geometrical misses and to compensate for dosimetric and treatment delivery uncertainties; their purpose is to reduce the chance of tumor recurrence. This dissertation introduces a new EUD formulation designed especially for tumor volumes, called generalized Tumor Dose (gTD). It also investigates, as a second objective, margin extensions for potential improvements in local control (LC) while maintaining or minimizing toxicity. The suitability of gTD to rank LC was assessed by means of retrospective studies in head and neck (HN) squamous cell carcinoma (SCC) and non-small cell lung cancer (NSCLC) cohorts. The formulation was optimized based on two datasets (one of each type), and model validation was then assessed on independent cohorts. The second objective was investigated by ranking the probability of LC of the primary disease with different margin sizes added; to do so, an already published EUD formula was applied retrospectively to HN and NSCLC datasets. Finally, recommendations on the viability of implementing this new formulation in a routine treatment planning process, as well as on the revision of safety margins to improve local tumor control while maximizing normal tissue sparing in SCC of the HN and NSCLC, are discussed.

  8. Model selection and Bayesian inference for high-resolution seabed reflection inversion.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2009-02-01

    This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
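
    A compressed sketch of the machinery, a Metropolis-Hastings chain whose histogram gives marginal posteriors plus the Bayesian information criterion used for model selection, is given below on a deliberately trivial one-layer toy problem; none of the numbers relate to the Malta Plateau data.

      # Hedged sketch: Metropolis-Hastings sampling of a two-parameter toy model
      # (a constant "sound speed" plus noise sigma), with BIC evaluated at the end.
      import numpy as np

      rng = np.random.default_rng(0)
      data = 1500.0 + 5.0 * rng.standard_normal(40)    # fake observations

      def log_like(theta):
          v, noise = theta
          if noise <= 0:
              return -np.inf
          r = data - v
          return -0.5 * np.sum(r**2 / noise**2 + np.log(2 * np.pi * noise**2))

      theta, chain = np.array([1480.0, 10.0]), []
      for _ in range(20000):
          prop = theta + rng.normal(0, [2.0, 0.5])     # random-walk proposal
          if np.log(rng.random()) < log_like(prop) - log_like(theta):
              theta = prop                             # accept
          chain.append(theta.copy())
      chain = np.array(chain[5000:])                   # discard burn-in

      k, n = 2, data.size
      bic = k * np.log(n) - 2 * log_like(chain.mean(axis=0))
      print("marginal means:", chain.mean(axis=0), "stds:", chain.std(axis=0), "BIC:", bic)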

  9. Investigating the origins of high multilevel resistive switching in forming free Ti/TiO2-x-based memory devices through experiments and simulations

    NASA Astrophysics Data System (ADS)

    Bousoulas, P.; Giannopoulos, I.; Asenov, P.; Karageorgiou, I.; Tsoukalas, D.

    2017-03-01

    Although multilevel capability is probably the most important property of resistive random access memory (RRAM) technology, it is vulnerable to reliability issues due to the stochastic nature of conducting filament (CF) creation. As a result, the various resistance states cannot be clearly distinguished, which leads to memory capacity failure. In this work, due to the gradual resistance switching pattern of TiO2-x-based RRAM devices, we demonstrate at least six resistance states with distinct memory margin and promising temporal variability. It is shown that the formation of small CFs with high density of oxygen vacancies enhances the uniformity of the switching characteristics in spite of the random nature of the switching effect. Insight into the origin of the gradual resistance modulation mechanisms is gained by the application of a trap-assisted-tunneling model together with numerical simulations of the filament formation physical processes.

  10. Ballistic Performance Model of Crater Formation in Monolithic, Porous Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Miller, J. E.; Christiansen, E. L.; Deighton, K. D.

    2014-01-01

    Porous monolithic ablative systems insulate atmospheric reentry vehicles from the plasmas generated by atmospheric braking from orbital and exo-orbital velocities. Because these materials must sustain a temperature gradient of up to several thousand kelvin over their thickness, it is important that they are near their pristine state prior to reentry. These materials may also sit on surfaces exposed to space environment threats like orbital debris and meteoroids, leaving a probability that the impacted surfaces will perform below their prescribed values. Owing to the typically small size of impact craters in these materials, the local flow fields over these craters and the ablative process afford some margin in thermal protection designs for these locally reduced performance values. In this work, tests to develop ballistic performance models for thermal protection materials typical of those being used on Orion are discussed. A density profile as a function of depth for a typical monolithic ablator and substructure system is shown in Figure 1a.

  11. Statistical characterization of fluctuations of a laser beam transmitted through a random air-water interface: new results from a laboratory experiment

    NASA Astrophysics Data System (ADS)

    Majumdar, Arun K.; Land, Phillip; Siegenthaler, John

    2014-10-01

    New results for characterizing the intensity fluctuation statistics of a laser beam transmitted through a random air-water interface, relevant to underwater communications, are presented. A laboratory water-tank experiment is described to investigate the beam-wandering effects of the transmitted beam. Preliminary results from the experiment provide histograms of the probability density functions of intensity fluctuations for different wind speeds, measured by a CMOS camera for the transmitted beam. Angular displacements of the centroids of the fluctuating laser beam generate the beam wander effects. This research develops a probabilistic model for optical propagation at the random air-water interface for a transmission case under different wind speed conditions. Preliminary results for bit-error-rate (BER) estimates as a function of fade margin for on-off keying (OOK) optical communication through the air-water interface are presented for a communication system in which a random air-water interface is part of the communication channel.

  12. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
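
    The Legendre-expansion step is easy to demonstrate numerically: expand an axially symmetric density in cos(theta), then read the switching probability off as the weight that has crossed the equator. The coefficients below are made up and are not solutions of the magnetization dynamics in the record.

      # Hedged sketch: a density p(x) on x = cos(theta) expanded in Legendre
      # polynomials; the switched fraction is the integral over x < 0.
      import numpy as np
      from numpy.polynomial import legendre as L

      coef = np.array([0.5, 0.6, 0.3, 0.1])   # illustrative Legendre coefficients
      x = np.linspace(-1.0, 1.0, 2001)
      p = L.legval(x, coef)

      total = np.trapz(p, x)                  # equals 2*coef[0] by orthogonality
      switched = np.trapz(p[x < 0.0], x[x < 0.0])
      print(f"switching probability ~ {switched / total:.4f}")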

  13. Sediment movement and dispersal patterns on the Grand Banks continental shelf and slope were tied to the dynamics of the Laurentide ice-sheet margin

    NASA Astrophysics Data System (ADS)

    Rashid, H.; MacKillop, K.; Piper, D.; Vermooten, M.; Higgins, J.; Marche, B.; Langer, K.; Brockway, B.; Spicer, H. E.; Webb, M. D.; Fournier, E.

    2015-12-01

    The expansion and contraction of the late Pleistocene Laurentide ice sheet (LIS) was the crucial determining factor for the geomorphic features and shelf and slope sediment mobility on the eastern Canadian continental margin, with abundant mass-transport deposits (MTDs) seaward of ice margins on the upper slope. Here, we report for the first time sediment failures and mass-transport deposits from the central Grand Banks slope in the Salar and Carson petroleum basins. High-resolution seismic profiles and multibeam bathymetry show numerous sediment failure scarps in 500-1600 m water depth. There is no evidence for an ice margin on the upper slope younger than MIS 6. Centimeter-scale X-ray fluorescence (XRF), grain size, and oxygen isotope data from piston cores constrain sediment processes over the past 46 ka. Geotechnical measurements including Atterberg limit tests, vane shear measurements, and triaxial and multi-stage isotropic consolidation tests allowed us to assess the instability of the continental margin. Cores with continuous undisturbed stratigraphy in contourite silty muds show a normal downcore increase in bulk density and undrained peak shear strength. Heinrich (H) layers are identifiable by a marked increase in bulk density, high Ca (ppm), an increase in iceberg-rafted debris, and lighter δ18O in the polar planktonic foraminifer Neogloboquadrina pachyderma (sinistral); together with a few 14C dates, these provide a robust chronology. There is no evidence for significant supply of sediment from the Grand Banks at the last glacial maximum. Mass-transport deposits are marked by variable undrained shear strength and little variation in bulk density or Ca (ppm) values. The MTDs are older than 46 ka on the central Grand Banks slope, whereas younger MTDs are present in southern Flemish Pass. Factor-of-safety calculations suggest the slope is statically stable up to gradients of 10°, but, based on the Atterberg tests, intervals of silty mud may fail during earthquake-induced cyclic loading. By analogy with the Holocene, contourites deposited in MIS 5e may be particularly silty and form a "weak layer" susceptible to failure.

  14. SELF-ORGANIZATION OF RECONNECTING PLASMAS TO MARGINAL COLLISIONALITY IN THE SOLAR CORONA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imada, S.; Zweibel, E. G.

    We explore the suggestions by Uzdensky and Cassak et al. that coronal loops heated by magnetic reconnection should self-organize to a state of marginal collisionality. We discuss their model of coronal loop dynamics with a one-dimensional hydrodynamic calculation. We assume that many current sheets are present, with a distribution of thicknesses, but that only current sheets thinner than the ion skin depth can rapidly reconnect. This assumption naturally causes a density-dependent heating rate which is actively regulated by the plasma. We report nine numerical simulation results of coronal loop hydrodynamics in which the absolute values of the heating rates are different but their density dependences are the same. We find two regimes of behavior, depending on the amplitude of the heating rate. In the case that the amplitude of heating is below a threshold value, the loop is in stable equilibrium. Typically, the upper and less dense part of a coronal loop is collisionlessly heated and conductively cooled. When the amplitude of heating is above the threshold, the conductive flux to the lower atmosphere required to balance collisionless heating drives an evaporative flow which quenches fast reconnection, ultimately cooling and draining the loop until the cycle begins again. The key elements of this cycle are gravity and the density dependence of the heating function. Some additional factors are present, including pressure-driven flows from the loop top, which carry a large enthalpy flux and play an important role in reducing the density. We find that on average the density of the system is close to the marginally collisionless value.

  15. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  16. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    ERIC Educational Resources Information Center

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  17. Localization and quantification of carbonic anhydrase activity in the symbiotic Scyphozoan Cassiopea xamachana.

    PubMed

    Estes, Anne M; Kempf, Stephen C; Henry, Raymond P

    2003-06-01

    The relationship between density and location of zooxanthellae and levels of carbonic anhydrase (CA) activity was examined in Cassiopea xamachana. In freshly collected symbiotic animals, high densities of zooxanthellae corresponded with high levels of CA activity in host bell and oral arm tissues. Bleaching resulted in a significant loss of zooxanthellae and CA activity. Recolonization resulted in full restoration of zooxanthellar densities but only partial restoration of CA activity. High levels of CA activity were also seen in structures with inherently higher zooxanthellar densities, such as oral arm tissues. Similarly, the oral epidermal layer of bell tissue had significantly higher zooxanthellar densities and levels of CA activity than did aboral bell tissues. Fluorescent labeling, using 5-dimethylaminonapthalene-1-sulfonamide (DNSA) also reflected this tight-knit relationship between the presence and density of zooxanthellae, as DNSA-CA fluorescence intensity was greatest in host oral epithelial cells directly overlying zooxanthellae. However, the presence and density of zooxanthellae did not always correspond with enzyme activity levels. A transect of bell tissue from the margin to the manubrium revealed a gradient of CA activity, with the highest values at the bell margin and the lowest at the manubrium, despite an even distribution of zooxanthellae. Thus, abiotic factors may also influence the distribution of CA and the levels of CA activity.

  18. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
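
    The two-stage scheme these patent records describe, fitting a density to residuals from normal operation and then testing incoming data sequentially, can be sketched with a normal fit and a Wald-style sequential probability ratio test; the fault offset, thresholds and data below are invented, and the patents' actual procedures are not reproduced here.

      # Hedged sketch: fit a normal density to training residuals, then run a
      # sequential probability ratio test against a shifted-mean fault hypothesis.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(7)
      training = rng.normal(0.0, 1.0, 5000)        # residuals from normal operation
      mu0, sig = norm.fit(training)                # fitted nominal density
      mu1 = mu0 + 2.0 * sig                        # hypothesized fault offset (assumed)

      A, B = np.log(99.0), np.log(1.0 / 99.0)      # thresholds for ~1% error rates
      llr = 0.0
      for x in rng.normal(1.5, 1.0, 200):          # incoming (faulty) surveillance data
          llr += norm.logpdf(x, mu1, sig) - norm.logpdf(x, mu0, sig)
          if llr >= A:
              print("fault alarm")
              break
          if llr <= B:
              llr = 0.0                            # accept normal operation, restart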

  19. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  20. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  1. Simple gain probability functions for large reflector antennas of JPL/NASA

    NASA Technical Reports Server (NTRS)

    Jamnejad, V.

    2003-01-01

    Simple models are developed for the patterns of the large reflector antennas of the JPL/NASA Deep Space Network, as well as for their gain probability density and cumulative gain probability functions. These are needed for the study and evaluation of interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone Station of the Deep Space Network.

  2. Poster - 36: Effect of Planning Target Volume Coverage on the Dose Delivered in Lung Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekker, Chris; Wierzbicki, Marcin

    2016-08-15

    Purpose: In lung radiotherapy, breathing motion may be encompassed by contouring the internal target volume (ITV). Remaining uncertainties are included in a geometrical expansion to the planning target volume (PTV). In IMRT, the treatment is then optimized until a desired PTV fraction is covered by the appropriate dose. The resulting beams often carry high fluence in the PTV margin to overcome low lung density and to generate steep dose gradients. During treatment, the high density tumour can enter the PTV margin, potentially increasing target dose. Thus, planning lung IMRT with a reduced PTV dose may still achieve the desired ITV dose during treatment. Methods: A retrospective analysis was carried out with 25 IMRT plans prescribed to 63 Gy in 30 fractions. The plans were re-normalized to cover various fractions of the PTV by different isodose lines. For each case, the isocentre was moved using 125 shifts derived from all 3D combinations of 0 mm, (PTV margin - 1 mm), and PTV margin. After each shift, the dose was recomputed to approximate the delivered dose. Results and Conclusion: Our plans typically cover 95% of the PTV by 95% of the dose. Reducing the PTV covered to 94% did not significantly reduce the delivered ITV doses for (PTV margin - 1 mm) shifts. Target doses were reduced significantly for all other shifts and planning goals studied. Thus, a reduced planning goal will likely deliver the desired target dose as long as the ITV rarely enters the last mm of the PTV margin.

  3. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
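
    The equilibrium-phase step can be sketched as a linear fit whose intercept gives P(e). The conversion from P(e) to a population estimate below uses the simple equal-catchability relation N = C/P(e), which is an assumption made for illustration and not necessarily the authors' exact estimator; all numbers are hypothetical.

      # Hedged sketch: P_e as the intercept of a linear fit over the equilibrium
      # phase, converted to a population estimate under assumed equal catchability.
      import numpy as np

      r = np.array([5.0, 10.0, 15.0, 20.0, 25.0])        # distances, m (toy)
      p = np.array([0.021, 0.020, 0.019, 0.021, 0.020])  # averaged capture probabilities

      slope, intercept = np.polyfit(r, p, 1)             # P_e = intercept
      total_captured = 2400                              # hypothetical total captures
      print(f"P_e = {intercept:.4f}, N ~ {total_captured / intercept:.0f}")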

  4. Marginal analysis in assessing factors contributing time to physician in the Emergency Department using operations data.

    PubMed

    Pathan, Sameer A; Bhutta, Zain A; Moinudheen, Jibin; Jenkins, Dominic; Silva, Ashwin D; Sharma, Yogdutt; Saleh, Warda A; Khudabakhsh, Zeenat; Irfan, Furqan B; Thomas, Stephen H

    2016-01-01

    Background: Standard Emergency Department (ED) operations goals include minimization of the time interval (tMD) between a patient's initial ED presentation and initial physician evaluation. This study assessed factors known (or suspected) to influence tMD with a two-step goal. The first step was generation of a multivariate model identifying parameters associated with prolongation of tMD at a single study center. The second step was the use of the study center-specific multivariate tMD model as a basis for predictive marginal probability analysis; the marginal model allowed for prediction of the degree of ED operations benefit that would be effected by specific ED operations improvements. Methods: The study was conducted using one month (May 2015) of data obtained from an ED administrative database (EDAD) in an urban academic tertiary ED with an annual census of approximately 500,000; during the study month, the ED saw 39,593 cases. The EDAD data were used to generate a multivariate linear regression model assessing the effects of various demographic and operational covariates on the dependent variable tMD. Predictive marginal probability analysis was used to calculate the relative contributions of key covariates and to demonstrate the likely tMD impact of modifying those covariates with operational improvements. Analyses were conducted with Stata 14MP, with significance defined at p < 0.05 and confidence intervals (CIs) reported at the 95% level. Results: In an acceptable linear regression model that accounted for just over half of the overall variance in tMD (adjusted r2 = 0.51), important contributors to tMD included shift census (p = 0.008), shift time of day (p = 0.002), and the number of physicians on duty (p = 0.004). These strong associations remained even after adjusting for each other and other covariates. Marginal predictive probability analysis was used to predict the overall tMD impact (improvement from 50 to 43 minutes, p < 0.001) of consistent staffing with 22 physicians. Conclusions: The analysis identified expected variables contributing to tMD, with regression demonstrating the significance and effect magnitude of alterations in covariates including patient census, shift time of day, and number of physicians. Marginal analysis provided an operationally useful demonstration of the need to adjust physician coverage numbers, prompting changes at the study ED. The methods used in this analysis may prove useful in other EDs wishing to analyze operations information with the goal of predicting which interventions may have the most benefit.
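
    The predictive-margins logic, fitting a regression and then averaging its predictions with one covariate pinned to a counterfactual value, can be sketched with synthetic data; the variable names (census, doctors, t_md) and all coefficients below are hypothetical, not the study's.

      # Hedged sketch: OLS model of time-to-physician, then a predictive margin
      # computed by setting physician coverage to a counterfactual staffing level.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 2000
      census = rng.poisson(60, n)                 # patients on shift (synthetic)
      doctors = rng.integers(16, 25, n)           # physicians on duty (synthetic)
      t_md = 20 + 0.6 * census - 0.9 * doctors + rng.normal(0, 8, n)

      X = sm.add_constant(np.column_stack([census, doctors]).astype(float))
      fit = sm.OLS(t_md, X).fit()

      X_cf = X.copy()
      X_cf[:, 2] = 22.0                           # counterfactual: always 22 physicians
      print(f"observed mean {t_md.mean():.1f} min -> "
            f"predicted margin {fit.predict(X_cf).mean():.1f} min")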

  5. Impact of adhesive and photoactivation method on sealant integrity and polymer network formation.

    PubMed

    Borges, Boniek Castillo Dutra; Pereira, Fabrício Lopes da Rocha; Alonso, Roberta Caroline Bruschi; Braz, Rodivan; Montes, Marcos Antônio Japiassú Resende; Pinheiro, Isauremi Vieira de Assunção; Santos, Alex José Souza dos

    2012-01-01

    We evaluated the influence of photoactivation method and hydrophobic resin (HR) application on the marginal and internal adaptation, hardness (KHN), and crosslink density (CLD) of a resin-based fissure sealant. Model fissures were created in bovine enamel fragments (n = 10) and sealed using one of the following protocols: no adhesive system + photoactivation of the sealant using continuous light (CL), no adhesive system + photoactivation of the sealant using the soft-start method (SS), HR + CL, or HR + SS. Marginal and internal gaps and KHN were assessed after storage in water for 24 h. The CLD was indirectly assessed by repeating the KHN measurement after 24 h of immersion in 100% ethanol. There was no difference among the samples with regard to marginal or internal adaptation. The KHN and CLD were similar for samples cured using either photoactivation method. Use of a hydrophobic resin prior to placement of fissure sealants and curing the sealant using the soft-start method may not provide any positive influence on integrity or crosslink density.

  6. The Cascadia Subduction Zone: two contrasting models of lithospheric structure

    USGS Publications Warehouse

    Romanyuk, T.V.; Blakely, R.; Mooney, W.D.

    1998-01-01

    The Pacific margin of North America is one of the most complicated regions in the world in terms of its structure and present-day geodynamic regime. The aim of this work is to develop a better understanding of the lithospheric structure of the Pacific Northwest, in particular the Cascadia subduction zone of southwest Canada and northwest USA. The goal is to compare and contrast the lithospheric density structure along two profiles across the subduction zone and to interpret the differences in terms of active processes. The subduction of the Juan de Fuca plate beneath North America changes markedly along the length of the subduction zone, notably in the angle of subduction, distribution of earthquakes and volcanism, geologic and seismic structure of the upper plate, and regional horizontal stress. To investigate these characteristics, we conducted detailed density modeling of the crust and mantle along two transects across the Cascadia subduction zone. One crosses Vancouver Island and the Canadian margin; the other crosses the margin of central Oregon.

  7. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  8. To kill a kangaroo: understanding the decision to pursue high-risk/high-gain resources.

    PubMed

    Jones, James Holland; Bird, Rebecca Bliege; Bird, Douglas W

    2013-09-22

    In this paper, we attempt to understand hunter-gatherer foraging decisions about prey that vary in both the mean and variance of energy return using an expected utility framework. We show that for skewed distributions of energetic returns, the standard linear variance discounting (LVD) model for risk-sensitive foraging can produce quite misleading results. In addition to creating difficulties for the LVD model, the skewed distributions characteristic of hunting returns create challenges for estimating probability distribution functions required for expected utility. We present a solution using a two-component finite mixture model for foraging returns. We then use detailed foraging returns data based on focal follows of individual hunters in Western Australia hunting for high-risk/high-gain (hill kangaroo) and relatively low-risk/low-gain (sand monitor) prey. Using probability densities for the two resources estimated from the mixture models, combined with theoretically sensible utility curves characterized by diminishing marginal utility for the highest returns, we find that the expected utility of the sand monitors greatly exceeds that of kangaroos despite the fact that the mean energy return for kangaroos is nearly twice as large as that for sand monitors. We conclude that the decision to hunt hill kangaroos does not arise simply as part of an energetic utility-maximization strategy and that additional social, political or symbolic benefits must accrue to hunters of this highly variable prey.
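
    The two steps of the argument, fitting a two-component mixture to skewed returns and then comparing expected utilities under a concave (diminishing-marginal) utility, can be sketched with invented return distributions; the kcal figures and the square-root utility are illustrative stand-ins for the field estimates and fitted utility curves.

      # Hedged sketch: two-component Gaussian mixtures over foraging returns and
      # Monte Carlo expected utility under a concave utility u(x) = sqrt(x).
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(5)
      # toy returns (kcal): steady small successes vs rare large windfalls
      monitor = np.concatenate([rng.normal(3000, 800, 900),
                                rng.normal(8000, 1500, 100)])
      kangaroo = np.concatenate([rng.normal(50, 20, 850),
                                 rng.normal(50000, 8000, 150)])

      def expected_utility(returns, n_draw=100000):
          gm = GaussianMixture(n_components=2).fit(returns.reshape(-1, 1))
          draws = gm.sample(n_draw)[0].ravel().clip(min=0.0)
          return np.sqrt(draws).mean()

      print("sand monitor :", expected_utility(monitor))
      print("hill kangaroo:", expected_utility(kangaroo))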

  9. The influence of coordinated defects on inhomogeneous broadening in cubic lattices

    NASA Astrophysics Data System (ADS)

    Matheson, P. L.; Sullivan, Francis P.; Evenson, William E.

    2016-12-01

    The joint probability distribution function (JPDF) of electric field gradient (EFG) tensor components in cubic materials is dominated by coordinated pairings of defects in shells near probe nuclei. The contributions from these inner-shell combinations and their surrounding structures contain the essential physics that determines the PAC-relevant quantities derived from them. The JPDF can be used to predict the nature of inhomogeneous broadening (IHB) in perturbed angular correlation (PAC) experiments by modeling the G2 spectrum and finding expectation values for Vzz and η. The ease with which this can be done depends upon the representation of the JPDF. Expanding on an earlier work by Czjzek et al. (Hyperfine Interact. 14, 189-194, 1983), Evenson et al. (Hyperfine Interact. 237, 119, 2016) provide a set of coordinates constructed from the EFG tensor invariants they named W1 and W2. Using this parameterization, the JPDF in cubic structures was constructed using a point charge model in which a single trapped defect (TD) is the nearest neighbor to a probe nucleus. Individual defects on nearby lattice sites pair with the TD to provide a locus of points in the W1-W2 plane around which an amorphous-like distribution of probability density grows. Interestingly, however, marginal, separable PDFs appear adequate to model IHB-relevant cases. We present cases from simulations in cubic materials illustrating the importance of these near-shell coordinations.

  10. Triton Southern Hemisphere

    NASA Image and Video Library

    1998-06-08

    This polar projection from NASA's Voyager 2 of Triton's southern hemisphere provides a view of the southern polar cap and bright equatorial fringe. The margin of the cap is scalloped and ranges in latitude from +10 degrees to -30 degrees. The bright fringe is closely associated with the cap's margin; from it, diffuse bright rays extend north-northeast for hundreds of kilometers. The bright fringe probably consists of very fresh nitrogen frost or snow, and the rays consist of bright-fringe materials that were redistributed by north-moving, Coriolis-deflected winds. http://photojournal.jpl.nasa.gov/catalog/PIA00423

  11. Is Einsteinian no-signalling violated in Bell tests?

    NASA Astrophysics Data System (ADS)

    Kupczynski, Marian

    2017-11-01

    Relativistic invariance is a physical law verified in several domains of physics. The impossibility of faster-than-light influences is not questioned by quantum theory. In quantum electrodynamics, in quantum field theory and in the standard model, relativistic invariance is incorporated by construction. Quantum mechanics predicts strong long-range correlations between the outcomes of spin projection measurements performed in distant laboratories. In spite of these strong correlations, marginal probability distributions should not depend on what was measured in the other laboratory, a property called no-signalling for short. In several experiments performed to test various Bell-type inequalities, some unexplained dependence of the empirical marginal probability distributions on distant settings was observed. In this paper we demonstrate how a particular identification and selection procedure for paired distant outcomes is the most probable cause of this apparent violation of the no-signalling principle. This unexpected setting dependence therefore does not prove the existence of superluminal influences, and the Einsteinian no-signalling principle has to be tested differently in dedicated experiments. We propose a detailed protocol telling how such experiments should be designed in order to be conclusive. We also explain how magical quantum correlations may be explained in a locally causal way.

  12. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are the statistical properties of the phase difference of oscillations in speckle fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of the numerical experiments.

  13. Correlation of breast tissue histology and optical signatures to improve margin assessment techniques

    NASA Astrophysics Data System (ADS)

    Kennedy, Stephanie; Caldwell, Matthew; Bydlon, Torre; Mulvey, Christine; Mueller, Jenna; Wilke, Lee; Barry, William; Ramanujam, Nimmi; Geradts, Joseph

    2016-06-01

    Optical spectroscopy is sensitive to morphological composition and has potential applications in intraoperative margin assessment. Here, we evaluate ex vivo breast tissue and corresponding quantified hematoxylin & eosin images to correlate optical scattering signatures to tissue composition stratified by patient characteristics. Adipose sites (213) were characterized by their cell area and density. All other benign and malignant sites (181) were quantified using a grid method to determine composition. The relationships between the mean reduced scattering coefficient <μs'> and % adipose, % collagen, % glands, adipocyte cell area, and adipocyte density were investigated. These relationships were further stratified by age, menopausal status, body mass index (BMI), and breast density. We identified a positive correlation between <μs'> and % collagen and a negative correlation between <μs'> and age and BMI. Increased collagen corresponded to increased <μs'> variability. In postmenopausal women, <μs'> was similar regardless of fibroglandular content. Contributions from collagen and glands to <μs'> were independent and equivalent in benign sites; glands showed a stronger positive correlation than collagen to <μs'> in malignant sites. Our data suggest that scattering could differentiate highly scattering malignant from benign tissues in postmenopausal women. The relationship between scattering and tissue composition will support improved scattering models and technologies to enhance intraoperative optical margin assessment.

  14. Numerical experiments of volcanic dominated rifts and passive margins

    NASA Astrophysics Data System (ADS)

    Korchinski, Megan; Teyssier, Christian; Rey, Patrice; Whitney, Donna; Mondy, Luke

    2017-04-01

    Continental rifting is driven by plate tectonic forces (passive rifting), thermal thinning of the lithosphere over a hotspot (active rifting), or a combination of the two. Successful rifts develop into passive margins where pre-drift stretching is accompanied by normal faulting, clastic sedimentation, and various degrees of magmatism. The structure of volcanic passive margins (VPM) differs substantially from margins that are dominated by sedimentation. VPMs are typically narrow, with a lower continental crust that is intruded by magma and can flow as a low-viscosity layer. To investigate the role of the deep crust in the early development of VPMs, we have developed a suite of 2D thermomechanical numerical experiments (Underworld code) in which the density and viscosity of the deep crust and the density of the rift basin fill are systematically varied. Our experiments show that, for a given rifting velocity, the viscosity of the deep crust and the density of the rift basin fill exert primary controls on early VPM development. The viscosity of the deep crust controls the degree to which the shallow crust undergoes localised faulting or distributed thinning. A weak deep crust localises rifting and is efficiently exhumed to the near-surface, whereas a strong deep crust distributes shallow crust extension and remains buried. A high density rift basin fill results in gravitational loading and increased subsidence rate in cases in which the viscosity of the deep crust is sufficiently low to allow that layer to be displaced by the sinking basin fill. At the limit, a low viscosity deep crust overlain by a thick basalt-dominated fill generates a gravitational instability, with a drip of cool basalt that sinks and ponds at the Moho. Experiment results indicate that the deep crust plays a critical role in the dynamic development of volcanic dominated rifts and passive margins. During rifting, the deep continental crust is heated and readily responds to solicitations of the shallow crust (rooting of normal faults, exhumation of the deep crust in normal fault footwalls). Gravitational instabilities caused by high density rift infill similar to those observed in our numerical experiments may be present in the Mesoproterozoic (~1100 Ma) North American Midcontinent Rift System (MRS). The MRS is a failed rift that is filled with >20 km of dominantly basaltic volcanic deposits, and therefore represents an end member VPM (high density basin fill) where the initial structure of a pre-drift VPM is preserved. Magmatism occurred in two pulses over <15 Ma involving deep mantle melting first (>150 km), then shallow melting (40-70 km). Post-rift subsidence accumulated up to 10 km of clastic sediments in the center of the basin. Evidence of cool, dense rocks sinking into a low-viscosity deep crust as predicted in our numerical experiments may be present in the western arm of the MRS, where crustal density analyses suggest the presence of dense bodies (eclogite) at the base of the crust.

  15. Cyclic steps and superimposed antidune deposits: important elements of coarse-grained deepwater channel-levée complexes

    NASA Astrophysics Data System (ADS)

    Lang, Joerg; Brandes, Christian; Winsemann, Jutta

    2017-04-01

    The facies distribution and architecture of submarine fans can be strongly impacted by erosion and deposition by supercritical density flows. We present field examples from the Sandino Forearc Basin (southern Central America), where cyclic-step and antidune deposits represent important sedimentary facies of coarse-grained channel-levée complexes. These bedforms occur in all sub-environments of the depositional systems and relate to the different stages of avulsion, bypass, levée construction and channel backfilling. Large-scale scours (18 to 29 m deep, 18 to 25 m wide, 60 to >120 m long) with an amalgamated infill, comprising massive, normally coarse-tail graded or spaced subhorizontally stratified conglomerates and pebbly sandstones, are interpreted as deposits of the hydraulic-jump zone of cyclic steps. These cyclic steps probably formed during avulsion, when high-density flows were routed into the evolving channel. The large-scale scour fills can be distinguished from small-scale channel fills based on the preservation of a steep upper margin and a coarse-grained infill comprising mainly amalgamated hydraulic-jump deposits. Channel fills include repetitive successions deposited by cyclic steps with superimposed antidunes. The hydraulic-jump zone of cyclic-step deposits comprises regularly spaced scours (0.2 to 2.6 m deep, 0.8 to 23 m wide), which are infilled by intraclast-rich conglomerates or pebbly sandstones and display normal coarse-tail grading or backsets. Laterally and vertically these deposits are associated with subhorizontally stratified, low-angle cross-stratified or sinusoidal stratified pebbly sandstones and sandstones (wavelength 0.5 to 18 m), interpreted as representing antidune deposits formed on the stoss-side of the cyclic steps during flow re-acceleration. The field examples indicate that so-called crudely or spaced stratified deposits may commonly represent antidune deposits with varying stratification styles controlled by the aggradation rate, grain-size distribution and amalgamation. The deposits of small-scale cyclic steps with superimposed antidunes form fining upwards successions with decreasing antidune wavelengths. Such cyclic step-antidune successions are the characteristic basal infill of channels, probably related to supercritical high-density turbidity flows triggered by retrogressive slope failures.

  16. Probabilistic pipe fracture evaluations for leak-rate-detection applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, S.; Ghadiali, N.; Paul, D.

    1995-04-01

    Regulatory Guide 1.45, "Reactor Coolant Pressure Boundary Leakage Detection Systems," was published by the U.S. Nuclear Regulatory Commission (NRC) in May 1973, and provides guidance on leak detection methods and system requirements for Light Water Reactors. Additionally, leak detection limits are specified in plant Technical Specifications and are different for Boiling Water Reactors (BWRs) and Pressurized Water Reactors (PWRs). These leak detection limits are also used in leak-before-break evaluations performed in accordance with Draft Standard Review Plan, Section 3.6.3, "Leak Before Break Evaluation Procedures," where a margin of 10 on the leak detection limit is used in determining the crack size considered in subsequent fracture analyses. This study was requested by the NRC to: (1) evaluate the conditional failure probability for BWR and PWR piping for pipes that were leaking at the allowable leak detection limit, and (2) evaluate the margin of 10 to determine if it was unnecessarily large. A probabilistic approach was undertaken to conduct fracture evaluations of circumferentially cracked pipes for leak-rate-detection applications. Sixteen nuclear piping systems in BWR and PWR plants were analyzed to evaluate conditional failure probability and effects of crack-morphology variability on the current margins used in leak rate detection for leak-before-break.

  17. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
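
    As an illustration of the scoring machinery named above, the sketch below evaluates the PIT and CRPS for a toy Gaussian-mixture redshift PDF; the mixture parameters and the "true" redshift are illustrative placeholders, not DCMDN outputs.

```python
# Hedged sketch: PIT and CRPS for a Gaussian-mixture predictive PDF.
import numpy as np
from scipy.stats import norm

weights = np.array([0.6, 0.4])        # mixture weights (sum to 1), assumed
means   = np.array([0.8, 1.4])        # component means in redshift space
sigmas  = np.array([0.1, 0.2])        # component standard deviations
z_true  = 0.9                         # hypothetical spectroscopic redshift

def mixture_cdf(z):
    """CDF of the Gaussian mixture at z (scalar or array)."""
    z = np.atleast_1d(z)[:, None]
    return np.sum(weights * norm.cdf(z, means, sigmas), axis=1)

# PIT: predictive CDF evaluated at the true value; uniform PITs over a
# test set indicate calibrated PDFs.
pit = mixture_cdf(z_true)[0]

# CRPS via numerical quadrature of the integral of (F(x) - 1{x >= z})^2.
grid = np.linspace(0.0, 3.0, 4001)
crps = np.trapz((mixture_cdf(grid) - (grid >= z_true)) ** 2, grid)
print(f"PIT = {pit:.3f}, CRPS = {crps:.4f}")
```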

  18. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.

  19. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  20. Role of local to regional-scale collisions in the closure history of the Southern Neotethys, exemplified by tectonic development of the Kyrenia Range active margin/collisional lineament, N Cyprus

    NASA Astrophysics Data System (ADS)

    Robertson, Alastair; Kinnaird, Tim; McCay, Gillian; Palamakumbura, Romesh; Chen, Guohui

    2016-04-01

    Active margin processes including subduction, accretion, arc magmatism and back-arc extension play a key role in the diachronous, and still incomplete closure of the S Neotethys. The S Neotethys rifted along the present-day Africa-Eurasia continental margin during the Late Triassic and, after sea-floor spreading, began to close related to northward subduction during the Late Cretaceous. The northern, active continental margin of the S Neotethys was bordered by several of the originally rifted continental fragments (e.g. Taurides). The present-day convergent lineament ranges from subaqueous (e.g. Mediterranean Ridge), to subaerial (e.g. SE Turkey). The active margin development is partially obscured by microcontinent-continent collision and post-collisional strike-slip deformation (e.g. Tauride-Arabian suture). However, the Kyrenia Range, N Cyprus provides an outstanding record of convergent margin to early stage collisional processes. It owes its existence to strong localised uplift during the Pleistocene, which probably resulted from the collision of a continental promontory of N Africa (Eratosthenes Seamount) with the long-lived S Neotethyan active margin to the north. A multi-stage convergence history is revealed, mainly from a combination of field structural, sedimentological and igneous geochemical studies. Initial Late Cretaceous convergence resulted in greenschist facies burial metamorphism that is likely to have been related to the collision, then rapid exhumation, of a continental fragment (stage 1). During the latest Cretaceous-Palaeogene, the Kyrenia lineament was characterised by subduction-influenced magmatism and syn-tectonic sediment deposition. Early to Mid-Eocene, S-directed thrusting and folding (stage 2) is likely to have been influenced by the suturing of the Izmir-Ankara-Erzincan ocean to the north ('N Neotethys'). Convergence continued during the Neogene, dominated by deep-water terrigenous gravity-flow accumulation in a foredeep setting. Further S-directed compression took place during Late Miocene-earliest Pliocene (stage 3) in an oblique left-lateral stress regime, probably influenced by the collision of the Tauride and Arabian continents to the east. Strong uplift of the active margin lineament then took place during the Pleistocene, related to incipient continental collision (stage 4). The uplift is documented by a downward-younging flight of marine and continental terrace deposits on both flanks of the Kyrenia Range. The geological record of the S Neotethyan active continental margin, based on regional to global plate kinematic reconstructions, appears to have been dominated by on-going convergence (with possible temporal changes), punctuated by the effects of relatively local to regional-scale collisional events. Similar processes are likely to have affected other S Neotethyan segments and other convergent margins.

  1. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we first propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability and direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify PDBSCAN into an improved version (PDBSCANi), using a better cluster assignment strategy to ensure that every object is assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulty clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
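
    The probability at the core of these definitions can be approximated as in FDBSCAN's sampling baseline; the sketch below shows that Monte Carlo estimate (PDBSCAN's exact computation is not detailed in this abstract, and the Gaussian object models are assumptions).

```python
# Hedged sketch: P(distance between two uncertain objects <= eps),
# estimated by sampling as in FDBSCAN's baseline approach.
import numpy as np

rng = np.random.default_rng(0)

def p_distance_leq(samples_a, samples_b, eps):
    """Estimate P(||A - B|| <= eps) from samples of two uncertain objects."""
    d = np.linalg.norm(samples_a - samples_b, axis=1)
    return np.mean(d <= eps)

# Two uncertain 2-D objects, each modeled as a Gaussian cloud (assumed).
a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(10_000, 2))
b = rng.normal(loc=[0.5, 0.0], scale=0.3, size=(10_000, 2))
print(p_distance_leq(a, b, eps=0.8))   # core-object tests build on this
```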

  2. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  3. 75 FR 39477 - New Standards for Domestic Mailing Services

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-09

    ... 2009 and FY 2010 in order to be eligible for participation. 2011 Saturation and High Density Incentive... letters and flats mailed at saturation and high density prices. This program would encourage mailers to increase the volume within two of our highest margin products and would be open to all mailers meeting the...

  4. Tectonics of some Amazonian greenstone belts

    NASA Technical Reports Server (NTRS)

    Gibbs, A. K.

    1986-01-01

    Greenstone belts exposed amid gneisses, granitoid rocks, and less abundant granulites along the northern and eastern margins of the Amazonian Craton yield Trans-Amazonian metamorphic ages of 2.0-2.1 Ga. Early Proterozoic belts in the northern region probably originated as ensimatic island arc complexes. The Archean Carajas belt in the southeastern craton probably formed in an extensional basin on older continental basement. That basement contains older Archean belts with pillow basalts and komatiites. Belts of ultramafic rocks warrant investigation as possible ophiolites. A discussion follows.

  5. Robust Statistics and Regularization for Feature Extraction and UXO Discrimination

    DTIC Science & Technology

    2011-07-01

    July 11, 2011. …real data we find that this technique has an improved probability of finding all ordnance in a test data set, relative to previously… many sites. Tests on larger data sets should still be carried out. In previous work we considered a bootstrapping approach to selecting the operating… Marginalizing over x we obtain the probability that the ith order statistic in the test data belongs to the T class (Eq. 55): P(T | x_(i)) = ∫_{-∞}^{∞} P(T | x) p(x…

  6. Identification of Stochastically Perturbed Autonomous Systems from Temporal Sequences of Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing

    2018-03-01

    The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
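
    A minimal sketch of the forward problem this method inverts: a noise-perturbed 1-D map evolved under a control input, with the state PDF estimated by histogram at each step. The logistic map, control value, and noise level are illustrative assumptions, not the paper's system.

```python
# Hedged sketch: generate a sequence of observed PDFs from a controlled,
# stochastically perturbed iterated map (illustrative forward model).
import numpy as np

rng = np.random.default_rng(1)
n_points, n_steps = 50_000, 5
x = rng.uniform(0, 1, n_points)            # initial ensemble of states
u = 0.05                                   # external control input (assumed)
pdfs = []
for _ in range(n_steps):
    x = 3.7 * x * (1 - x) + u + rng.normal(0, 0.01, n_points)  # additive noise
    x = np.clip(x, 0, 1)
    hist, _ = np.histogram(x, bins=100, range=(0, 1), density=True)
    pdfs.append(hist)                      # one observed PDF per time step
```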

  7. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
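
    A minimal sketch of the two-stage recipe, under assumed targets: spectral coloring of white Gaussian noise by FFT, followed by a pointwise CDF mapping to a gamma amplitude distribution. The simple mapping shown here ignores the spectral distortion that the pointwise transform introduces, which the paper's method corrects for.

```python
# Hedged sketch: colored Gaussian field -> non-Gaussian field via inverse CDF.
import numpy as np
from scipy.stats import norm, gamma

n = 256
rng = np.random.default_rng(2)
white = rng.standard_normal((n, n))

# Target PSD ~ 1/(k^2 + k0^2): apply its square-root filter in Fourier space.
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
filt = 1.0 / np.sqrt(kx**2 + ky**2 + 0.01**2)
colored = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))
colored = (colored - colored.mean()) / colored.std()   # unit-variance Gaussian

# Pointwise transform: Gaussian CDF -> inverse CDF of the target (gamma here).
u = norm.cdf(colored)
field = gamma.ppf(u, a=2.0)     # non-Gaussian field, approx. target PSD
```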

  8. Large impacts in the Baltic shield with special attention to the Uppland structure

    NASA Technical Reports Server (NTRS)

    Henkel, H.; Lilljequist, R.

    1992-01-01

    Within the Baltic Shield several very large structures have been identified and are suspected to be of meteorite impact origin. Some of these deeply eroded circular features are presented with special attention to the Uppland structure, where several indications point toward an impact origin in the mid-Proterozoic. The structures exceed 100 km in diameter and the topographic expression is inferior or absent. An arcuate arrangement of lithologies occurs around the margin of the structures and the central regions show conform magnetic and positive gravity anomalies. The Uppland structure is approximately 320 km in diameter as expressed by morphological, geological, and geophysical concentric patterns. The central part is topographically remarkably flat and is characterized by an unusual irregular fracture pattern. A subcircular central tonalite with a density of 2.81 Mg m⁻³ gives a positive gravity anomaly of 35 mGal, and the gravimetric profile is very similar to that of Manicouagan and Vredefort. The tonalite constitutes a huge antiform, 80 km in diameter, probably representing a 12-km structural uplift of infracrustal rocks. The flanks of the tonalite are characterized by recrystallized pseudotachylitic breccia dykes and breccia zones. Around the central parts amphibolite-grade metamorphic rocks appear as large fragments within a fine-grained granite interpreted as a thermally annealed melt rock. Several occurrences of breccia dykes and breccia-bearing melts have been identified about 100 km from the gravimetric center of the structure. Impact-related ore deposits are located around the margin of the structure and are interpreted as preexisting downfaulted iron formations, and deposits formed from remobilization of these preimpact occurrences. The so-called ball ores are interpreted to have formed by fluid injection similar to the formation of breccia dykes. The extensive hydrothermal alteration along the outer margin of the structure has created extremely soda- and K-enriched rocks ('leptites') from preexisting gneiss granites and supracrustal sedimentary gneisses.

  9. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    …approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our… The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a…
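
    A minimal sketch of the kernel density method the report applies, using synthetic stand-in intensity samples rather than the project's measurements.

```python
# Hedged sketch: Gaussian kernel density estimate of a sampled quantity.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=2000)  # stand-in data

kde = gaussian_kde(samples)              # bandwidth set by Scott's rule
grid = np.linspace(0, 6, 500)
pdf_estimate = kde(grid)                 # smooth pdf estimate on the grid
```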

  10. Crustal architecture of the oblique-slip conjugate margins of George V Land and southeast Australia

    USGS Publications Warehouse

    Stagg, H.M.J.; Reading, A.M.

    2007-01-01

    A conceptual, lithospheric-scale cross-section of the conjugate, oblique-slip margins of George V Land, East Antarctica, and southeast Australia (Otway Basin) has been constructed based on the integration of seismic and sample data. This cross-section is characterised by asymmetry in width and thickness, and depth-dependent crustal extension at breakup in the latest Maastrichtian. The broad Antarctic margin (~360 km apparent rift width) developed on thick crust (~42 km) of the Antarctic craton, whereas the narrow Otway margin (~220 km) developed on the thinner crust (~31 km) of the Ross–Delamerian Orogen. The shallow basement (velocities ~5.5 km s⁻¹) and the deep continental crust (velocities >6.4 km s⁻¹) appear to be largely absent across the central rift, while the mid-crustal, probably granitic layer (velocities ~6 km s⁻¹) is preserved. Comparison with published numerical models suggests that the shallow basement and deep crust may have been removed by simple shear, whereas the mid-crust has been ductilely deformed.

  11. [Numeric simulation of functional remodeling of the anterior alveolar bone].

    PubMed

    Wang, Wei-feng; Xin, Hai-tao; Zang, Shun-lai; Ding, Jie

    2012-04-01

    To study the remodeling of the anterior alveolar bone and periodontium under physiological loading using the finite element method (FEM) and the theory of bone remodeling. An FEM model of the maxillary central incisor with periodontium was established, and the change in bone density during remodeling of the alveolar bone was investigated under physiological loading (60-150 N) based on the strain energy density (SED) theory of bone remodeling. The analysis was implemented as a user material subroutine (UMAT) in the finite element software Abaqus. As the physiological load increased, the compressive stress on the buccal cervical margin increased gradually while the density decreased gradually: cortical bone density fell from its initial value of 1.74 g/cm³ to 1.74-1.63 g/cm³, and cancellous bone density fell from its initial value of 0.90 g/cm³ to 0.90-0.77 g/cm³. The lingual cervical margin was under tensile stress, which also increased with loading, but its density showed no significant change. When the load reached 120 N, the cortical bone density was 1.74-1.73 g/cm³, and no significant change was found in the cancellous bone. The simulation of periodontium remodeling was achieved and shown to be effective by comparison with related research. The results help form a basis for analysing the bone remodeling process and predicting outcomes in clinical work.
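
    A minimal sketch of a Huiskes-type SED remodeling rule of the general kind such UMAT routines implement; the reference stimulus, rate constant, time step, and density bounds are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: one explicit step of an SED-driven density update,
# d(rho)/dt = rate * (SED/rho - k_ref), with all constants assumed.
def remodel_density(rho, sed, k_ref=0.004, rate=1.0, dt=1.0,
                    rho_min=0.01, rho_max=1.74):
    stimulus = sed / rho            # SED per unit mass drives adaptation
    rho += rate * (stimulus - k_ref) * dt
    return min(max(rho, rho_min), rho_max)   # clamp to physical bounds

rho = 1.74                          # initial cortical density, g/cm^3
for _ in range(10):                 # sub-reference stimulus -> resorption
    rho = remodel_density(rho, sed=0.005)
print(round(rho, 3))
```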

  12. Insights into the dynamics of planetary interiors obtained through the study of global distribution of volcanoes I: Empirical calibration on Earth

    NASA Astrophysics Data System (ADS)

    Cañon-Tapia, Edgardo; Mendoza-Borunda, Ramón

    2014-06-01

    The distribution of volcanic features is ultimately controlled by processes taking place beneath the surface of a planet. For this reason, characterization of volcano distribution at a global scale can be used to obtain insights concerning dynamic aspects of planetary interiors. Until now, studies of this type have focused on volcanic features of a specific type, or have concentrated on relatively small regions. In this paper (the first of a series of three), we describe the distribution of volcanic features observed over the entire surface of the Earth, combining an extensive database of submarine and subaerial volcanoes. The analysis is based on spatial density contours obtained with the Fisher kernel. Based on an empirical approach that makes no a priori assumptions concerning the number of modes that should characterize the density distribution of volcanism, we identified the most significant modes. Using those modes as a base, the relevant distance for the formation of clusters of volcanoes is constrained to be on the order of 100 to 200 km. In addition, it is noted that the most significant modes lead to the identification of clusters that outline the most important tectonic margins on Earth without the need of making any ad hoc assumptions. Consequently, we suggest that this method has the potential of yielding insights about the probable occurrence of tectonic features within other planets.
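
    A minimal sketch of a Fisher (von Mises-Fisher) kernel density on the sphere, the estimator named above; the volcano coordinates and the concentration parameter kappa are illustrative placeholders.

```python
# Hedged sketch: spherical KDE with von Mises-Fisher kernels.
import numpy as np

def lonlat_to_xyz(lon_deg, lat_deg):
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

def fisher_kde(eval_xyz, data_xyz, kappa=50.0):
    """Spherical KDE: mean of vMF kernels centred on the data points."""
    c = kappa / (4.0 * np.pi * np.sinh(kappa))      # vMF normalisation
    dots = eval_xyz @ data_xyz.T                    # cosine of angular distance
    return np.mean(c * np.exp(kappa * dots), axis=1)

# Illustrative volcano locations (lon, lat) and one query point.
volcanoes = lonlat_to_xyz(np.array([-155.3, -121.8, 14.0]),
                          np.array([19.4, 46.2, 40.8]))
query = lonlat_to_xyz(np.array([-155.0]), np.array([19.5]))
print(fisher_kde(query, volcanoes))
```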

  13. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.

  14. Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?

    USGS Publications Warehouse

    Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.

    2005-01-01

    In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
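
    A minimal sketch of the simulation logic described above: a stochastic Ricker model with lognormal environmental noise, used to estimate the probability of ever crossing a quasi-extinction threshold. All parameter values are illustrative, not the paper's fitted values.

```python
# Hedged sketch: quasi-extinction probability under stochastic Ricker dynamics.
import numpy as np

rng = np.random.default_rng(4)

def quasi_extinction_prob(n0=100, r=0.5, K=100.0, sigma=0.3,
                          threshold=10, years=50, reps=5000):
    """Fraction of replicates that ever fall below the quasi-extinction
    threshold under Ricker dynamics with lognormal environmental noise."""
    hits = 0
    for _ in range(reps):
        n = float(n0)
        for _ in range(years):
            n *= np.exp(r * (1.0 - n / K) + rng.normal(0.0, sigma))
            if n < threshold:
                hits += 1
                break
    return hits / reps

print(quasi_extinction_prob())
```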

  15. Time-lagged response of carabid species richness and composition to past management practices and landscape context of semi-natural field margins.

    PubMed

    Alignier, Audrey; Aviron, Stéphanie

    2017-12-15

    Field margins are key features for the maintenance of biodiversity and associated ecosystem services in agricultural landscapes. Little is known about the effects of management practices of old semi-natural field margins, and their historical dimension regarding past management practices and landscape context is rarely considered. In this paper, the relative influence of recent and past management practices and landscape context (during the last five years) was assessed on the local biodiversity (species richness and composition) of carabid assemblages of field margins in agricultural landscapes of northwestern France. The results showed that recent patterns of carabid species richness and composition were best explained by management practices and landscape context measured four or five years ago. This suggests the existence of a time lag in the response of carabid assemblages to past environmental conditions of field margins. The relative contribution of past management practices and past landscape context varied depending on the spatial scale at which landscape context was taken into account. Carabid species richness was higher in grazed or sprayed field margins, probably due to increased heterogeneity in habitat conditions. Field margins surrounded by grasslands and crops harbored species associated with open habitats, whilst forest species dominated field margins surrounded by woodland. The landscape effect was highest at fine spatial scale, within 50 m around field margins. The present study highlights the importance of considering time-lagged responses of biodiversity when managing the environment. It also suggests that old semi-natural field margins should not be considered as undisturbed habitats but more as management units being part of farming activities in agricultural landscapes, as for arable fields. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. The Demand for College Education in Postwar Japan.

    ERIC Educational Resources Information Center

    Nakata, Yoshi-fumi; Mosk, Carl

    1987-01-01

    The authors evaluate the extent to which economic factors underlie the expansion of Japanese college applications. Findings indicate that "marginal investors" respond to short-run economic factors--including direct costs, household liquidity, and probability of entering a large firm--that govern higher education. Educational quality has…

  17. Geology of the offshore Southeast Georgia Embayment, U.S. Atlantic continental margin, based on multichannel seismic reflection profiles

    USGS Publications Warehouse

    Buffler, Richard T.; Watkins, Joel S.; Dillon, William P.

    1979-01-01

    The sedimentary section is divided into three major seismic intervals. The intervals are separated by unconformities and can be mapped regionally. The oldest interval ranges in age from Early Cretaceous through middle Late Cretaceous, although it may contain Jurassic rocks where it thickens beneath the Blake Plateau. It probably consists of continental to nearshore clastic rocks where it onlaps basement and grades seaward to a restricted carbonate platform facies (dolomite-evaporite). The middle interval (Upper Cretaceous) is characterized by prograding clinoforms interpreted as open marine slope deposits. This interval represents a Late Cretaceous shift of the carbonate shelf margin from the Blake Escarpment shoreward to about its present location, probably due to a combination of continued subsidence, an overall Late Cretaceous rise in sea level, and strong currents across the Blake Plateau. The youngest (Cenozoic) interval represents a continued seaward progradation of the continental shelf and slope. Cenozoic sedimentation on the Blake Plateau was much abbreviated owing mainly to strong currents.

  18. Retrogressive hydration of calc-silicate xenoliths in the eastern Bushveld complex: evidence for late magmatic fluid movement

    NASA Astrophysics Data System (ADS)

    Wallmach, T.; Hatton, C. J.; De Waal, S. A.; Gibson, R. L.

    1995-11-01

    Two calc-silicate xenoliths in the Upper Zone of the Bushveld complex contain mineral assemblages which permit delineation of the metamorphic path followed after incorporation of the xenoliths into the magma. Peak metamorphism in these xenoliths occurred at T = 1100-1200 °C and P < 1.5 kbar. Retrograde metamorphism, probably coinciding with the late magmatic stage, is characterized by the breakdown of akermanite to monticellite and wollastonite at 700 °C and the growth of vesuvianite from melilite. The latter implies that water-rich fluids (X_CO2 < 0.2) were present and probably circulating through the cooling magmatic pile. In contrast, calc-silicate xenoliths within the lower zones of the Bushveld complex, namely in the Marginal and Critical Zones, also contain melilite, monticellite and additional periclase with only rare development of vesuvianite. This suggests that the Upper Zone cumulate pile was much 'wetter' in the late-magmatic stage than the earlier-formed Critical and Marginal Zone cumulate piles.

  19. A comparator-hypothesis account of biased contingency detection.

    PubMed

    Vadillo, Miguel A; Barberia, Itxaso

    2018-02-12

    Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and put to the test the predictions of the Comparator Hypothesis in the domain of biased contingency detection. Copyright © 2018 Elsevier B.V. All rights reserved.
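
    A minimal sketch of the zero-contingency setup described above: cue and outcome are independent, yet high marginal probabilities produce many coincidences, the quantity that biases judged contingency (this illustrates the phenomenon itself, not the Comparator Hypothesis model).

```python
# Hedged sketch: independent cue/outcome with high marginals still yields
# many coincidences, while the true contingency deltaP stays near zero.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
cue = rng.random(n) < 0.8          # high marginal probability of the cue
outcome = rng.random(n) < 0.8      # high marginal probability of the outcome

a = np.sum(cue & outcome)          # coincidences (cell "a" of the 2x2 table)
b = np.sum(cue & ~outcome)
c = np.sum(~cue & outcome)
d = np.sum(~cue & ~outcome)

delta_p = a / (a + b) - c / (c + d)        # true contingency ~ 0
print(f"coincidences: {a}, deltaP = {delta_p:+.3f}")
```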

  20. Improving effectiveness of systematic conservation planning with density data.

    PubMed

    Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant

    2015-08-01

    Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.

  1. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  2. Continuous-time random-walk model for financial distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Weiss, George H.

    2003-02-01

    We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on U.S. dollar-Deutsche mark futures, finding good agreement between theory and the observed data.
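
    A minimal sketch of the CTRW formalism with assumed auxiliary densities (exponential pausing times, Gaussian jump magnitudes); the paper instead estimates both densities from market data.

```python
# Hedged sketch: one-dimensional continuous-time random walk for log-price.
import numpy as np

rng = np.random.default_rng(6)

def ctrw_price(t_max, mean_wait=1.0, jump_sigma=0.01):
    """Cumulative log-price change of one CTRW path up to time t_max."""
    t, x = 0.0, 0.0
    while True:
        t += rng.exponential(mean_wait)     # pausing-time density psi(t)
        if t > t_max:
            return x
        x += rng.normal(0.0, jump_sigma)    # jump-magnitude density h(x)

changes = np.array([ctrw_price(100.0) for _ in range(10_000)])
print(changes.std())                        # width of the price distribution
```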

  3. Effect of Patient Set-up and Respiration motion on Defining Biological Targets for Image-Guided Targeted Radiotherapy

    NASA Astrophysics Data System (ADS)

    McCall, Keisha C.

    Identification and monitoring of sub-tumor targets will be a critical step for optimal design and evaluation of cancer therapies in general and biologically targeted radiotherapy (dose-painting) in particular. Quantitative PET imaging may be an important tool for these applications. Currently radiotherapy planning accounts for tumor motion by applying geometric margins. These margins create a motion envelope to encompass the most probable positions of the tumor, while also maintaining the appropriate tumor control and normal tissue complication probabilities. This motion envelope is effective for uniform dose prescriptions where the therapeutic dose is conformed to the external margins of the tumor. However, much research is needed to establish the equivalent margins for non-uniform fields, where multiple biological targets are present and each target is prescribed its own dose level. Additionally, the size of the biological targets and close proximity make it impractical to apply planning margins on the sub-tumor level. Also, the extent of high dose regions must be limited to avoid excessive dose to the surrounding tissue. As such, this research project is an investigation of the uncertainty within quantitative PET images of moving and displaced dose-painting targets, and an investigation of the residual errors that remain after motion management. This included characterization of the changes in PET voxel-values as objects are moved relative to the discrete sampling interval of PET imaging systems (SPECIFIC AIM 1). Additionally, the repeatability of PET distributions and of delineated dose-painting targets was measured (SPECIFIC AIM 2). The effect of imaging uncertainty on the dose distributions designed using these images (SPECIFIC AIM 3) has also been investigated. This project also included analysis of methods to minimize motion during PET imaging and reduce the dosimetric impact of motion/position-induced imaging uncertainty (SPECIFIC AIM 4).

  4. Short (≤ 1 mm) positive surgical margin and risk of biochemical recurrence after radical prostatectomy.

    PubMed

    Shikanov, Sergey; Marchetti, Pablo; Desai, Vikas; Razmaria, Aria; Antic, Tatjana; Al-Ahmadie, Hikmat; Zagaja, Gregory; Eggener, Scott; Brendler, Charles; Shalhav, Arieh

    2013-04-01

    WHAT'S KNOWN ON THE SUBJECT? AND WHAT DOES THE STUDY ADD?: It has been suggested that a very short positive margin does not confer additional risk of BCR after radical prostatectomy. This study shows that even a very short PSM is associated with increased risk of BCR. To re-evaluate, in a larger cohort with longer follow-up, our previously reported finding that a positive surgical margin (PSM) ≤ 1 mm may not confer an additional risk for biochemical recurrence (BCR) compared with a negative surgical margin (NSM). Margin status and length were evaluated in 2866 men treated with radical prostatectomy (RP) for clinically localized prostate cancer at our institution from 1994 to 2009. We compared the BCR-free survival probability of men with NSMs, a PSM ≤ 1 mm, and a PSM > 1 mm using the Kaplan-Meier method and a Cox regression model adjusted for preoperative prostate-specific antigen (PSA) level, age, pathological stage and pathological Gleason score (GS). Compared with a NSM, a PSM ≤ 1 mm was associated with 17% lower 3-year BCR-free survival for men with pT3 and GS ≥ 7 tumours and a 6% lower 3-year BCR-free survival for men with pT2 and GS ≤ 6 tumours (log-rank P < 0.001 for all). In the multivariate model, a PSM ≤ 1 mm was associated with a probability of BCR twice as high as that for a NSM (hazard ratio [HR] 2.2), as were a higher PSA level (HR 1.04), higher pathological stage (HR 2.7) and higher pathological GS (HR 3.7 [all P < 0.001]). In men with non-organ-confined or high grade prostate cancer, a PSM ≤ 1 mm has a significant adverse impact on BCR rates. © 2012 The Authors. BJU International © 2012 BJU International.

  5. Adequacy of inhale/exhale breathhold CT based ITV margins and image-guided registration for free-breathing pancreas and liver SBRT.

    PubMed

    Yang, Wensha; Fraass, Benedick A; Reznik, Robert; Nissen, Nicholas; Lo, Simon; Jamil, Laith H; Gupta, Kapil; Sandler, Howard; Tuli, Richard

    2014-01-09

    To evaluate use of breath-hold CTs and implanted fiducials for definition of the internal target volume (ITV) margin for upper abdominal stereotactic body radiation therapy (SBRT). To study the statistics of inter- and intra-fractional motion information. 11 patients treated with SBRT for locally advanced pancreatic cancer (LAPC) or liver cancer were included in the study. Patients underwent fiducial implantation, free-breathing CT and breath-hold CTs at end inhalation/exhalation. All patients were planned and treated with SBRT using volumetric modulated arc therapy (VMAT). Two margin strategies were studied: Strategy I uses PTV = ITV + 3 mm; Strategy II uses PTV = GTV + 1.5 cm. Both CBCT and kV orthogonal images were taken and analyzed for setup before patient treatments. Tumor motion statistics based on skeletal registration and on fiducial registration were analyzed by fitting to Gaussian functions. All 11 patients met SBRT planning dose constraints using Strategy I. Average ITV margins for the 11 patients were 2 mm RL, 6 mm AP, and 6 mm SI. Skeletal registration resulted in high probability (RL = 69%, AP = 4.6%, SI = 39%) that part of the tumor will be outside the ITV. With the 3 mm ITV expansion (Strategy I), the probability reduced to RL 32%, AP 0.3%, SI 20% for skeletal registration; and RL 1.2%, AP 0%, SI 7% for fiducial registration. All 7 pancreatic patients and 2 liver patients failed to meet SBRT dose constraints using Strategy II. The liver dose was increased by 36% for the other 2 liver patients that met the SBRT dose constraints with Strategy II. Image guidance matching to skeletal anatomy is inadequate for SBRT positioning in the upper abdomen and usage of fiducials is highly recommended. Even with fiducial implantation and definition of an ITV, a minimal 3 mm planning margin around the ITV is needed to accommodate intra-fractional uncertainties.

  6. The Independent Effects of Phonotactic Probability and Neighbourhood Density on Lexical Acquisition by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Lee, Su-Yeon

    2011-01-01

    The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…

  7. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    ERIC Educational Resources Information Center

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  8. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
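
    The paper's self-consistent scheme is not spelled out in this abstract; the closely related Wang-Landau recipe below illustrates the multiplicative-update idea on a toy system of independent spins, whose exact density of states is binomial (the usual histogram flatness check is omitted for brevity).

```python
# Hedged sketch: Wang-Landau-style density-of-states estimation with a
# multiplicative (here, additive-in-log) update factor.
import numpy as np

rng = np.random.default_rng(7)
N = 10
spins = np.zeros(N, dtype=int)
log_g = np.zeros(N + 1)            # running log density-of-states estimate
log_f = 1.0                        # update factor ln(f), annealed to zero

while log_f > 1e-4:
    for _ in range(20_000):
        i = rng.integers(N)
        e_old = spins.sum()                         # energy = number of up spins
        e_new = e_old + (1 - 2 * spins[i])          # energy after the flip
        # Accept with min(1, g(E_old)/g(E_new)) to flatten the E histogram.
        if np.log(rng.random()) < log_g[e_old] - log_g[e_new]:
            spins[i] ^= 1
        log_g[spins.sum()] += log_f                 # multiplicative update
    log_f /= 2.0                                    # anneal the update factor

log_g -= log_g[0]                  # normalise: g(0) = 1 (all spins down)
print(np.round(np.exp(log_g)))     # should approach binomial(10, E)
```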

  9. Understanding environmental DNA detection probabilities: A case study using a stream-dwelling char Salvelinus fontinalis

    USGS Publications Warehouse

    Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.

    2016-01-01

    Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
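
    A minimal sketch of the survey-design arithmetic implied by such detection probabilities: with per-sample detection probability p, the false negative rate after n independent samples is (1 - p)^n. The per-sample value echoes the abstract's low-density case; the sample counts are arbitrary.

```python
# Hedged sketch: false negative rate of repeated, independent eDNA samples.
def false_negative_rate(p_detect: float, n_samples: int) -> float:
    return (1.0 - p_detect) ** n_samples

# One fish per stream kilometre (p ~ 0.18 per sample, as reported above):
for n in (1, 5, 10):
    print(n, round(false_negative_rate(0.18, n), 3))
```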

  10. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells that takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the forms of the considered probability densities, namely Erlang, triangular and uniform densities, either separated from zero or not. Particular instability results are obtained for a general class of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilising the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
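
    A minimal sketch of the linear chain trick mentioned at the end: an Erlang-distributed delay with shape k and rate a is replaced by a chain of k linear ODEs, so the distributed-delay model can be integrated with a standard ODE solver. The scalar model below is illustrative, not the paper's immune-tumour system.

```python
# Hedged sketch: Erlang-distributed delay via the linear chain trick.
import numpy as np
from scipy.integrate import solve_ivp

k, a = 3, 2.0              # Erlang shape and rate of the delay kernel

def rhs(t, state):
    x, *y = state          # x: tumour-like variable; y: delay chain
    delayed_x = y[-1]      # Erlang-weighted history of x
    dx = x * (1.0 - x) - 0.5 * delayed_x      # growth minus delayed response
    dy = [a * (x - y[0])] + [a * (y[i - 1] - y[i]) for i in range(1, k)]
    return [dx] + dy

sol = solve_ivp(rhs, (0.0, 40.0), [0.1] + [0.1] * k, max_step=0.05)
print(sol.y[0, -1])        # long-run state of x
```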

  11. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  12. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  13. Competition between harvester ants and rodents in the cold desert

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.

    1979-09-30

    Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August, whereas the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.

  14. Oviposition substrate in Asian tiger mosquito surveillance: Do the sizes matter?

    PubMed

    Falsone, Luigi; Brianti, Emanuele; Severini, Francesco; Giannetto, Salvatore; Romi, Roberto

    2015-12-01

    Ovitraps are regarded as a reliable system to monitor Aedes albopictus dynamics. However, the dimensions of the oviposition substrate are not standardized, and no studies have investigated which should be the most effective sizes. In this study, the effect of paddle sizes in tiger mosquito egg collection was evaluated. Egg count and density on the wide surfaces and margins of different-sized oviposition substrates have been evaluated in two studies (A and B). In study A, a total of 29,995 Ae. albopictus eggs was counted in 250 classic oviposition substrates. Eggs were found on both wide surfaces (53.1%) and margins (46.9%). Egg density was significantly larger in margins compared to wide surfaces. Overall in study B, 983 Ae. albopictus eggs were collected. According to paddle sizes, 51.8% of eggs were on large and 48.2% on small paddles. Mean egg density of wide surfaces was significantly larger in small paddles (0.25 eggs/cm²) compared to large paddles (0.06 eggs/cm²). Results indicate that wider oviposition substrates do not mean larger number of Ae. albopictus eggs. Indeed, on paddles four times thinner than others, the number of eggs counted was not statistically different. These findings suggest that small paddles may be routinely employed in ovitraps, thus allowing savings of materials and money. © 2015 The Society for Vector Ecology.

  15. Effective elastic thickness along the conjugate passive margins of India, Madagascar and Antarctica: A re-evaluation using the Hermite multitaper Bouguer coherence application

    NASA Astrophysics Data System (ADS)

    Ratheesh-Kumar, R. T.; Xiao, Wenjiao

    2018-05-01

    Gondwana correlation studies have rationally positioned the western continental margin of India (WCMI) against the eastern continental margin of Madagascar (ECMM), and the eastern continental margin of India (ECMI) against the eastern Antarctica continental margin (EACM). This contribution computes the effective elastic thickness (Te) of the lithospheres of these once-conjugated continental margins using the multitaper Bouguer coherence method. The results reveal significantly low strength values (Te ∼ 2 km) in the central segment of the WCMI that correlate with consistently low Te values (2-3 km) obtained throughout the entire marginal length of the ECMM. This result is consistent with previous Te estimates of these margins, and confirms the idea that the low-Te segments in the central part of the WCMI and along the ECMM represent paleo-rift inception points of the lithospheric margins that were thermally and mechanically weakened by the combined action of the Marion hotspot and lithospheric extension during the rifting. The uniformly low Te value (∼2 km) along the EACM indicates a mechanically weak lithospheric margin, probably due to considerable stretching of the lithosphere, considering the fact that this margin remained almost stationary throughout its rift history. In contrast, the ECMI has comparatively high Te variations (5-11 km) that lack any correlation with the regional tectonic setting. Using gravity forward and inversion applications, we find a leading-order influence of sediment load on the flexural properties of this marginal lithosphere. The study concludes that the thick pile of Bengal Fan sediments on the ECMI masks the signal of the original load-induced topography, and its gravity effect has biased the long-wavelength part of the observed gravity signal. The resulting lack of correlation between the flat topography and the deep lithospheric flexure introduces a bias into the flexure modeling, which likely accounts for the relatively high Te estimates.

  16. New X-ray bound on density of primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inoue, Yoshiyuki; Kusenko, Alexander, E-mail: yinoue@astro.isas.jaxa.jp, E-mail: kusenko@ucla.edu

    We set a new upper limit on the abundance of primordial black holes (PBH) based on existing X-ray data. PBH interactions with interstellar medium should result in significant fluxes of X-ray photons, which would contribute to the observed number density of compact X-ray objects in galaxies. The data constrain the PBH number density in the mass range from a few M⊙ to 2×10⁷ M⊙. The PBH density needed to account for the origin of black holes detected by LIGO is marginally allowed.

  17. [Effect of stock abundance and environmental factors on the recruitment success of small yellow croaker in the East China Sea].

    PubMed

    Liu, Zun-lei; Yuan, Xing-wei; Yang, Lin-lin; Yan, Li-ping; Zhang, Hui; Cheng, Jia-hua

    2015-02-01

    Multiple hypotheses are available to explain recruitment rates. Model selection methods can be used to identify the best model that supports a particular hypothesis. However, using a single model for estimating recruitment success is often inadequate for an overexploited population because of high model uncertainty. In this study, stock-recruitment data of small yellow croaker in the East China Sea, collected from fishery-dependent and fishery-independent surveys between 1992 and 2012, were used to examine density-dependent effects on recruitment success. Model selection methods based on frequentist (AIC, maximum adjusted R² and P-values) and Bayesian (Bayesian model averaging, BMA) approaches were applied to identify the relationship between recruitment and environmental conditions. Interannual variability of the East China Sea environment was indicated by sea surface temperature (SST), meridional wind stress (MWS), zonal wind stress (ZWS), sea surface pressure (SPP) and runoff of the Changjiang River (RCR). Mean absolute error, mean squared predictive error and the continuous ranked probability score were calculated to evaluate the predictive performance for recruitment success. The results showed that model structures were not consistent across the three model selection methods: the predictive variables were spawning abundance and MWS under AIC, spawning abundance alone under P-values, and spawning abundance, MWS and RCR under maximum adjusted R². Recruitment success decreased linearly with stock abundance (P < 0.01), suggesting that an overcompensation effect in recruitment success might be due to cannibalism or food competition. Meridional wind intensity showed a marginally significant positive effect on recruitment success (P = 0.06), while runoff of the Changjiang River showed a marginally negative effect (P = 0.07). Based on mean absolute error and the continuous ranked probability score, the predictive error of models obtained from BMA was the smallest among the approaches, while that of models selected by the P-values of the independent variables was the highest; by mean squared predictive error, models selected by maximum adjusted R² performed worst. We found that the BMA method can improve the prediction of recruitment success, derive more accurate prediction intervals and quantitatively evaluate model uncertainty.
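
    A minimal sketch of one standard way to form BMA predictions, weighting candidate models by exp(-BIC/2); whether the paper used BIC-based weights is not stated in this abstract, and all numbers below are placeholders.

```python
# Hedged sketch: Bayesian model averaging with BIC-based weights.
import numpy as np

def bma_predict(predictions, bics):
    """Average candidate-model predictions with weights exp(-BIC/2)."""
    bics = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (bics - bics.min()))   # subtract min for stability
    w /= w.sum()
    return w, np.average(predictions, axis=0, weights=w)

# Three candidate recruitment models' predictions for one year (illustrative):
w, pred = bma_predict(predictions=[[0.42], [0.35], [0.51]],
                      bics=[102.3, 104.1, 103.0])
print(w, pred)
```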

  18. OCT structure, COB location and magmatic type of the SE Brazilian & S Angolan margins from integrated quantitative analysis of deep seismic reflection and gravity anomaly data

    NASA Astrophysics Data System (ADS)

    Cowie, L.; Kusznir, N. J.; Horn, B.

    2013-12-01

    Knowledge of ocean-continent transition (OCT) structure, continent-ocean boundary (COB) location and magmatic type is of critical importance for understanding rifted continental margin formation processes and for evaluating petroleum systems in deep-water frontier oil and gas exploration. The OCT structure, COB location and magmatic type of the SE Brazilian and S Angolan rifted continental margins are much debated; exhumed and serpentinised mantle have been reported at these margins. Integrated quantitative analysis using deep seismic reflection data and gravity inversion has been used to determine OCT structure, COB location and magmatic type for the SE Brazilian and S Angolan margins. Gravity inversion has been used to determine Moho depth, crustal basement thickness and continental lithosphere thinning. Residual Depth Anomaly (RDA) analysis has been used to investigate OCT bathymetric anomalies with respect to expected oceanic bathymetries, and subsidence analysis has been used to determine the distribution of continental lithosphere thinning. These techniques have been validated on the Iberian margin for profiles IAM9 and ISE-01. In addition, a joint inversion technique using deep seismic reflection and gravity anomaly data has been applied to the ION-GXT BS1-575 (SE Brazil) and ION-GXT CS1-2400 (S Angola) profiles. The joint inversion method solves for a coincident seismic and gravity Moho in the time domain and calculates the lateral variations in crustal basement densities and velocities along profile. Gravity inversion, RDA and subsidence analysis along the S Angolan ION-GXT CS1-2400 profile have been used to determine OCT structure and COB location. Analysis suggests that exhumed mantle, corresponding to a magma-poor margin, is absent beneath the allochthonous salt. The thickness of the earliest oceanic crust, derived from gravity and deep seismic reflection data, is approximately 7 km. The joint inversion predicts crustal basement densities and seismic velocities which are slightly less than expected for 'normal' oceanic crust. The difference between the sediment-corrected RDA and that predicted from gravity inversion crustal thickness variation implies that this margin is experiencing ~300 m of anomalous uplift attributed to mantle dynamic uplift. Gravity inversion, RDA and subsidence analysis have also been used to determine OCT structure and COB location along the ION-GXT BS1-575 profile, crossing the Sao Paulo Plateau and Florianopolis Ridge of the SE Brazilian margin. These analyses predict the COB to be located SE of the Florianopolis Ridge and show no evidence for exhumed mantle on this margin profile. The joint inversion technique predicts normal oceanic basement seismic velocities and densities, and beneath the Sao Paulo Plateau and Florianopolis Ridge predicts crustal basement thicknesses between 10 and 15 km. The Sao Paulo Plateau and Florianopolis Ridge are separated by a thin region of crustal basement beneath the salt, interpreted as a regional transtensional structure. Sediment-corrected RDAs and gravity-derived 'synthetic' RDAs are of similar magnitude on oceanic crust, implying negligible mantle dynamic topography.
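    An RDA compares observed, sediment-corrected basement depth with the depth expected for oceanic lithosphere of the same age. A minimal sketch using the classic Parsons and Sclater (1977) age-depth relation; the ages and depths below are invented, not values from the ION-GXT profiles:

```python
# Residual depth anomaly sketch: positive RDA means the seafloor is shallower
# than its age predicts, suggesting anomalous (e.g., dynamic) uplift.
import numpy as np

def expected_depth(age_myr: np.ndarray) -> np.ndarray:
    """Predicted oceanic basement depth (m, positive down) vs. age (Myr),
    Parsons & Sclater relation for crust younger than ~70 Myr."""
    return 2500.0 + 350.0 * np.sqrt(age_myr)

age = np.array([5.0, 20.0, 60.0])              # crustal age, Myr (hypothetical)
observed = np.array([3200.0, 4100.0, 5400.0])  # sediment-corrected depth, m

rda = expected_depth(age) - observed
print(rda)  # a few hundred metres positive would mirror the ~300 m uplift above
```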

  19. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
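    A toy simulation of the headline effect: individuals centred outside a sampled square can still be detected inside it, so dividing unique detections by the square's nominal area overstates density unless the area is buffered by the movement radius. All parameters below are invented:

```python
# Movement-scale bias in naive density estimation (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
true_density = 2.0   # animals per unit area
world = 40.0         # side of the simulated landscape
side = 10.0          # side of the sampled square
move_r = 1.5         # scale of movement (home-range radius)
occasions = 50       # repeated sampling occasions

n = rng.poisson(true_density * world**2)
centres = rng.uniform(0, world, size=(n, 2))
lo, hi = world / 2 - side / 2, world / 2 + side / 2

detected = np.zeros(n, dtype=bool)
for _ in range(occasions):
    ang = rng.uniform(0, 2 * np.pi, n)
    rad = move_r * np.sqrt(rng.uniform(0, 1, n))  # uniform within home range
    pos = centres + np.column_stack([rad * np.cos(ang), rad * np.sin(ang)])
    detected |= np.all((pos >= lo) & (pos <= hi), axis=1)

naive = detected.sum() / side**2                     # ignores movement scale
buffered = detected.sum() / (side + 2 * move_r)**2   # area buffered by move_r
print(f"true {true_density:.2f}  naive {naive:.2f}  buffered {buffered:.2f}")
```

    The naive estimate overshoots roughly by the ratio of the buffered area to the nominal area, which grows with the movement radius, matching the abstract's emphasis on post-hoc calculation of the area actually sampled.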

  20. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  1. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
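    For reference, the univariate building block that MBCn generalizes is quantile mapping through empirical CDFs. A minimal sketch on synthetic data (MBCn itself alternates random rotations with this step to correct the full multivariate distribution):

```python
# Univariate quantile mapping: map model values through the model's empirical
# CDF, then invert the observed CDF, so corrected values inherit the observed
# distribution. Data are synthetic.
import numpy as np

rng = np.random.default_rng(42)
obs = rng.gamma(shape=2.0, scale=3.0, size=1000)  # "observations"
mod = rng.gamma(shape=2.5, scale=2.0, size=1000)  # biased "model" series

def quantile_map(x, model_ref, obs_ref):
    """Empirical CDF transform: F_obs^{-1}(F_mod(x))."""
    p = np.searchsorted(np.sort(model_ref), x, side="right") / len(model_ref)
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.quantile(obs_ref, p)

corrected = quantile_map(mod, mod, obs)
print(np.mean(obs), np.mean(mod), np.mean(corrected))  # corrected mean ~ observed
```

    This step alone leaves inter-variable dependence uncorrected, which is exactly the gap the N-dimensional transform is designed to close.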

  2. Impact of prostate weight on probability of positive surgical margins in patients with low-risk prostate cancer after robotic-assisted laparoscopic radical prostatectomy.

    PubMed

    Marchetti, Pablo E; Shikanov, Sergey; Razmaria, Aria A; Zagaja, Gregory P; Shalhav, Arieh L

    2011-03-01

    To evaluate the impact of prostate weight (PW) on probability of positive surgical margin (PSM) in patients undergoing robotic-assisted radical prostatectomy (RARP) for low-risk prostate cancer. The cohort consisted of 690 men with low-risk prostate cancer (clinical stage T1c, prostate-specific antigen <10 ng/mL, biopsy Gleason score ≤6) who underwent RARP with bilateral nerve-sparing at our institution by 1 of 2 surgeons from 2003 to 2009. PW was obtained from the pathologic specimen. The association between probability of PSM and PW was assessed with univariate and multivariate logistic regression analysis. A PSM was identified in 105 patients (15.2%). Patients with PSM had significantly higher prostate-specific antigen (P = .04), smaller prostates (P = .0001), higher Gleason scores (P = .004), and higher pathologic stage (P < .0001). After logistic regression, we found a significant inverse relation between PSM and PW (OR 0.97; 95% confidence interval [CI] 0.96, 0.99; P = .0003) in univariate analysis. This remained significant in the multivariate model (OR 0.98; 95% CI 0.96, 0.99; P = .006) adjusting for age, body mass index, surgeon experience, pathologic Gleason score, and pathologic stage. In this multivariate model, the predicted probabilities of PSM for 25-, 50-, 100-, and 150-g prostates were 22% (95% CI 16%, 30%), 13% (95% CI 11%, 16%), 5% (95% CI 1%, 8%), and 1% (95% CI 0%, 3%), respectively. Lower PW is independently associated with higher probability of PSM in low-risk patients undergoing RARP with bilateral nerve-sparing. Copyright © 2011 Elsevier Inc. All rights reserved.
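    A sketch of how such a logistic model converts prostate weight into a predicted probability. The slope matches the reported multivariate OR of 0.98 per gram; the intercept is back-solved from the reported 13% probability at 50 g and is an assumption, not a published coefficient:

```python
# Reconstruct the predicted-probability curve from an odds ratio per gram.
import numpy as np

beta1 = np.log(0.98)                       # log-odds change per gram (from OR)
p50 = 0.13                                 # reported probability at PW = 50 g
beta0 = np.log(p50 / (1 - p50)) - beta1 * 50.0   # back-solved intercept (assumed)

def p_psm(pw_grams: float) -> float:
    """Predicted probability of a positive surgical margin at a given PW."""
    z = beta0 + beta1 * pw_grams
    return 1.0 / (1.0 + np.exp(-z))

for pw in (25, 50, 100, 150):
    print(f"PW = {pw:3d} g -> P(PSM) ~ {p_psm(pw):.2f}")
```

    The outputs (roughly 20%, 13%, 5%, 2%) track the published 22%/13%/5%/1% sequence, which is the expected behavior of a single-slope logit; the remaining gap reflects the paper's covariate adjustment.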

  3. The influence of maximum magnitude on seismic-hazard estimates in the Central and Eastern United States

    USGS Publications Warehouse

    Mueller, C.S.

    2010-01-01

    I analyze the sensitivity of seismic-hazard estimates in the central and eastern United States (CEUS) to maximum magnitude (mmax) by exercising the U.S. Geological Survey (USGS) probabilistic hazard model with several mmax alternatives. Seismicity-based sources control the hazard in most of the CEUS, but data seldom provide an objective basis for estimating mmax. The USGS uses preferred mmax values of moment magnitude 7.0 and 7.5 for the CEUS craton and extended margin, respectively, derived from data in stable continental regions worldwide. Other approaches, for example analysis of local seismicity or judgment about a source's seismogenic potential, often lead to much smaller mmax. Alternative models span the mmax ranges from the 1980s Electric Power Research Institute/Seismicity Owners Group (EPRI/SOG) analysis. Results are presented as hazard ratios relative to the USGS national seismic hazard maps. One alternative model specifies mmax equal to moment magnitude 5.0 and 5.5 for the craton and margin, respectively, similar to EPRI/SOG for some sources. For 2% probability of exceedance in 50 years (about 0.0004 annual probability), the strong mmax truncation produces hazard ratios equal to 0.35-0.60 for 0.2-sec spectral acceleration, and 0.15-0.35 for 1.0-sec spectral acceleration. Hazard-controlling earthquakes interact with mmax in complex ways. There is a relatively weak dependence on probability level: hazard ratios increase 0-15% for 0.002 annual exceedance probability and decrease 5-25% for 0.00001 annual exceedance probability. Although differences at some sites are tempered when faults are added, mmax clearly accounts for some of the discrepancies that are seen in comparisons between USGS-based and EPRI/SOG-based hazard results.

  4. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Florian Jaeger, T.

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
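    The information measure underlying this account is standard surprisal; a brief statement of the definition (notation is generic, not the paper's):

```latex
% Surprisal (information content) of unit u_i in its context:
\[
  I(u_i) = -\log_2 P(u_i \mid u_1, \ldots, u_{i-1})
\]
% UID predicts a preference for encodings that keep I(u_i) per unit of signal
% roughly constant, e.g., retaining the complementizer "that" before an
% otherwise high-surprisal complement-clause onset.
```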

  5. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
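    The ensemble is easy to sample numerically. A minimal sketch with arbitrary dimensions: each state is an induced random mixed state ρ = GG†/tr(GG†) with G a complex Ginibre matrix, and the difference spectrum is traceless by construction:

```python
# Sample the difference of two independent induced random density matrices
# and inspect its spectrum. Dimensions n (system) and k (environment) are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(7)

def random_mixed_state(n: int, k: int) -> np.ndarray:
    """n x n density matrix from tracing out a k-dimensional environment."""
    g = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

n, k = 64, 64
delta = random_mixed_state(n, k) - random_mixed_state(n, k)
eigs = np.linalg.eigvalsh(delta)

print("trace:", np.sum(eigs).round(12))             # ~0 by construction
print("trace distance:", 0.5 * np.abs(eigs).sum())  # one of the distance measures
```

    Histogramming `eigs` over many draws and large n approaches the asymptotic eigenvalue density derived in the paper via free probability.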

  6. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
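    A minimal sketch of the fit-and-select step on synthetic depth data (candidate families and data are illustrative; the paper's appendix provides the authors' actual R code):

```python
# Fit several candidate probability density functions by maximum likelihood
# and rank them with AIC, mirroring the parametric HSC workflow.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
depth = rng.gamma(shape=4.0, scale=0.25, size=200)  # fake "depth used" data, m

candidates = {
    "gamma":   stats.gamma,
    "lognorm": stats.lognorm,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(depth, floc=0)          # ML estimates, location fixed at 0
    loglik = dist.logpdf(depth, *params).sum()
    k = len(params) - 1                       # floc fixed, so one fewer free parameter
    aic = 2 * k - 2 * loglik
    print(f"{name:8s} AIC = {aic:.1f}")
```

    The fitted parameters' sampling uncertainty then propagates directly into confidence bands on the HSC curve, which is the third benefit the abstract lists.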

  7. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
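    A much simpler stand-in for the hierarchical Dirichlet process estimates, shown only to make the idea of a periodic density on dihedral angles concrete: a von Mises kernel density estimate, which wraps correctly at ±180°. Angles below are synthetic:

```python
# Periodic KDE for a dihedral angle using von Mises kernels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
phi = np.deg2rad(rng.normal(-65, 15, size=500))  # fake "helix-like" phi angles

def vm_kde(theta_grid, samples, kappa=50.0):
    """Average of von Mises kernels centred on each sample angle."""
    dens = np.zeros_like(theta_grid)
    for s in samples:
        dens += stats.vonmises.pdf(theta_grid, kappa, loc=s)
    return dens / len(samples)

grid = np.linspace(-np.pi, np.pi, 360, endpoint=False)
density = vm_kde(grid, phi)
print("integrates to ~1:", round(float(np.sum(density) * (grid[1] - grid[0])), 3))
```

    The paper's nonparametric Bayesian approach goes further by sharing statistical strength across neighbor-residue contexts, which a plain KDE cannot do.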

  8. Evolution of density-dependent movement during experimental range expansions.

    PubMed

    Fronhofer, E A; Gut, S; Altermatt, F

    2017-12-01

    Range expansions and biological invasions are prime examples of transient processes that are likely impacted by rapid evolutionary changes. As a spatial process, range expansions are driven by dispersal and movement behaviour. Although it is widely accepted that dispersal and movement may be context-dependent, for instance density-dependent, and best represented by reaction norms, the evolution of density-dependent movement during range expansions has received little experimental attention. We therefore tested current theory predicting the evolution of increased movement at low densities at range margins using highly replicated and controlled range expansion experiments across multiple genotypes of the protist model system Tetrahymena thermophila. Although rare, we found evolutionary changes during range expansions even in the absence of initial standing genetic variation. Range expansions led to the evolution of negatively density-dependent movement at range margins. In addition, we report the evolution of increased intrastrain competitive ability and concurrently decreased population growth rates in range cores. Our findings highlight the importance of understanding movement and dispersal as evolving reaction norms and plastic life-history traits of central relevance for range expansions, biological invasions and the dynamics of spatially structured systems in general. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  9. Early Oral Tongue Squamous Cell Carcinoma: Sampling of Margins From Tumor Bed and Worse Local Control.

    PubMed

    Maxwell, Jessica H; Thompson, Lester D R; Brandwein-Gensler, Margaret S; Weiss, Bernhard G; Canis, Martin; Purgina, Bibianna; Prabhu, Arpan V; Lai, Chi; Shuai, Yongli; Carroll, William R; Morlandt, Anthony; Duvvuri, Umamaheswar; Kim, Seungwon; Johnson, Jonas T; Ferris, Robert L; Seethala, Raja; Chiosea, Simion I

    2015-12-01

    Positive margins are associated with poor prognosis among patients with oral tongue squamous cell carcinoma (SCC). However, wide variation exists in the margin sampling technique. To determine the effect of the margin sampling technique on local recurrence (LR) in patients with stage I or II oral tongue SCC. A retrospective study was conducted from January 1, 1986, to December 31, 2012, in 5 tertiary care centers following tumor resection and elective neck dissection in 280 patients with pathologic (p)T1-2 pN0 oral tongue SCC. Analysis was conducted from June 1, 2013, to January 20, 2015. In group 1 (n = 119), tumor bed margins were not sampled. In group 2 (n = 61), margins were examined from the glossectomy specimen, found to be positive or suboptimal, and revised with additional tumor bed margins. In group 3 (n = 100), margins were primarily sampled from the tumor bed without preceding examination of the glossectomy specimen. The margin status (both as a binary [positive vs negative] and continuous [distance to the margin in millimeters] variable) and other clinicopathologic parameters were compared across the 3 groups and correlated with LR. Local recurrence. Age, sex, pT stage, lymphovascular or perineural invasion, and adjuvant radiation treatment were similar across the 3 groups. The probability of LR-free survival at 3 years was 0.9 and 0.8 in groups 1 and 3, respectively (P = .03). The frequency of positive glossectomy margins was lowest in group 1 (9 of 117 [7.7%]) compared with groups 2 and 3 (28 of 61 [45.9%] and 23 of 95 [24.2%], respectively) (P < .001). Even after excluding cases with positive margins, the median distance to the closest margin was significantly narrower in group 3 (2 mm) compared with group 1 (3 mm) (P = .008). The status (positive vs negative) of margins obtained from the glossectomy specimen correlated with LR (P = .007), while the status of tumor bed margins did not. The status of the tumor bed margin was 24% sensitive (95% CI, 16%-34%) and 92% specific (95% CI, 85%-97%) for detecting a positive glossectomy margin. The margin sampling technique affects local control in patients with oral tongue SCC. Reliance on margin sampling from the tumor bed is associated with worse local control, most likely owing to narrower margin clearance and greater incidence of positive margins. A resection specimen-based margin assessment is recommended.

  10. Early Oral Tongue Squamous Cell Carcinoma Sampling of Margins From Tumor Bed and Worse Local Control

    PubMed Central

    Maxwell, Jessica H.; Thompson, Lester D. R.; Brandwein-Gensler, Margaret S.; Weiss, Bernhard G.; Canis, Martin; Purgina, Bibianna; Prabhu, Arpan V.; Lai, Chi; Shuai, Yongli; Carroll, William R.; Morlandt, Anthony; Duvvuri, Umamaheswar; Kim, Seungwon; Johnson, Jonas T.; Ferris, Robert L.; Seethala, Raja; Chiosea, Simion I.

    2017-01-01

    IMPORTANCE Positive margins are associated with poor prognosis among patients with oral tongue squamous cell carcinoma (SCC). However, wide variation exists in the margin sampling technique. OBJECTIVE To determine the effect of the margin sampling technique on local recurrence (LR) in patients with stage I or II oral tongue SCC. DESIGN, SETTING, AND PARTICIPANTS A retrospective study was conducted from January 1, 1986, to December 31, 2012, in 5 tertiary care centers following tumor resection and elective neck dissection in 280 patients with pathologic (p)T1-2 pN0 oral tongue SCC. Analysis was conducted from June 1, 2013, to January 20, 2015. INTERVENTIONS In group 1 (n = 119), tumor bed margins were not sampled. In group 2 (n = 61), margins were examined from the glossectomy specimen, found to be positive or suboptimal, and revised with additional tumor bed margins. In group 3 (n = 100), margins were primarily sampled from the tumor bed without preceding examination of the glossectomy specimen. The margin status (both as a binary [positive vs negative] and continuous [distance to the margin in millimeters] variable) and other clinicopathologic parameters were compared across the 3 groups and correlated with LR. MAIN OUTCOMES AND MEASURES Local recurrence. RESULTS Age, sex, pT stage, lymphovascular or perineural invasion, and adjuvant radiation treatment were similar across the 3 groups. The probability of LR-free survival at 3 years was 0.9 and 0.8 in groups 1 and 3, respectively (P = .03). The frequency of positive glossectomy margins was lowest in group 1 (9 of 117 [7.7%]) compared with groups 2 and 3 (28 of 61 [45.9%] and 23 of 95 [24.2%], respectively) (P < .001). Even after excluding cases with positive margins, the median distance to the closest margin was significantly narrower in group 3 (2 mm) compared with group 1 (3 mm) (P = .008). The status (positive vs negative) of margins obtained from the glossectomy specimen correlated with LR (P = .007), while the status of tumor bed margins did not. The status of the tumor bed margin was 24% sensitive (95% CI, 16%-34%) and 92% specific (95% CI, 85%-97%) for detecting a positive glossectomy margin. CONCLUSIONS AND RELEVANCE The margin sampling technique affects local control in patients with oral tongue SCC. Reliance on margin sampling from the tumor bed is associated with worse local control, most likely owing to narrower margin clearance and greater incidence of positive margins. A resection specimen–based margin assessment is recommended. PMID:26225798

  11. Propagation measurements for an Australian land mobile satellite system

    NASA Technical Reports Server (NTRS)

    Bundrock, Anthony J.; Harvey, Robert

    1988-01-01

    Measurements of attenuation statistics using a helicopter and an instrumented van are discussed. Results are given for two different tree densities, for elevation angles of 30, 45 and 60 degrees, and for frequencies of 893, 1550 and 2660 MHz. These results show that at 1550 MHz and a 45-degree elevation angle, attenuation values of 5.0 and 8.6 dB were exceeded 10 percent of the time for roadside tree densities of 35 percent and 85 percent, respectively. Comparisons with other results from the Northern Hemisphere show general agreement. The implications of the measured values for system design are discussed, and it is shown that, for Australia, an adaptive margin allocation scheme would require an average margin of approximately 5 dB.

  12. Cold Seep Epifaunal Communities on the Hikurangi Margin, New Zealand: Composition, Succession, and Vulnerability to Human Activities

    PubMed Central

    Bowden, David A.; Rowden, Ashley A.; Thurber, Andrew R.; Baco, Amy R.; Levin, Lisa A.; Smith, Craig R.

    2013-01-01

    Cold seep communities with distinctive chemoautotrophic fauna occur where hydrocarbon-rich fluids escape from the seabed. We describe community composition, population densities, spatial extent, and within-region variability of epifaunal communities at methane-rich cold seep sites on the Hikurangi Margin, New Zealand. Using data from towed camera transects, we match observations to information about the probable life-history characteristics of the principal fauna to develop a hypothetical succession sequence for the Hikurangi seep communities, from the onset of fluid flux to senescence. New Zealand seep communities exhibit taxa characteristic of seeps in other regions, including predominance of large siboglinid tubeworms, vesicomyid clams, and bathymodiolin mussels. Some aspects appear to be novel; however, particularly the association of dense populations of ampharetid polychaetes with high-sulphide, high-methane flux, soft-sediment microhabitats. The common occurrence of these ampharetids suggests they play a role in conditioning sulphide-rich sediments at the sediment-water interface, thus facilitating settlement of clam and tubeworm taxa which dominate space during later successional stages. The seep sites are subject to disturbance from bottom trawling at present and potentially from gas hydrate extraction in future. The likely life-history characteristics of the dominant megafauna suggest that while ampharetids, clams, and mussels exploit ephemeral resources through rapid growth and reproduction, lamellibrachid tubeworm populations may persist potentially for centuries. The potential consequences of gas hydrate extraction cannot be fully assessed until extraction methods and target localities are defined but any long-term modification of fluid flow to seep sites would have consequences for all chemoautotrophic fauna. PMID:24204691

  13. Cold seep epifaunal communities on the Hikurangi margin, New Zealand: composition, succession, and vulnerability to human activities.

    PubMed

    Bowden, David A; Rowden, Ashley A; Thurber, Andrew R; Baco, Amy R; Levin, Lisa A; Smith, Craig R

    2013-01-01

    Cold seep communities with distinctive chemoautotrophic fauna occur where hydrocarbon-rich fluids escape from the seabed. We describe community composition, population densities, spatial extent, and within-region variability of epifaunal communities at methane-rich cold seep sites on the Hikurangi Margin, New Zealand. Using data from towed camera transects, we match observations to information about the probable life-history characteristics of the principal fauna to develop a hypothetical succession sequence for the Hikurangi seep communities, from the onset of fluid flux to senescence. New Zealand seep communities exhibit taxa characteristic of seeps in other regions, including predominance of large siboglinid tubeworms, vesicomyid clams, and bathymodiolin mussels. Some aspects appear to be novel; however, particularly the association of dense populations of ampharetid polychaetes with high-sulphide, high-methane flux, soft-sediment microhabitats. The common occurrence of these ampharetids suggests they play a role in conditioning sulphide-rich sediments at the sediment-water interface, thus facilitating settlement of clam and tubeworm taxa which dominate space during later successional stages. The seep sites are subject to disturbance from bottom trawling at present and potentially from gas hydrate extraction in future. The likely life-history characteristics of the dominant megafauna suggest that while ampharetids, clams, and mussels exploit ephemeral resources through rapid growth and reproduction, lamellibrachid tubeworm populations may persist potentially for centuries. The potential consequences of gas hydrate extraction cannot be fully assessed until extraction methods and target localities are defined but any long-term modification of fluid flow to seep sites would have consequences for all chemoautotrophic fauna.

  14. Discretized kinetic theory on scale-free networks

    NASA Astrophysics Data System (ADS)

    Bertotti, Maria Letizia; Modanese, Giovanni

    2016-10-01

    The network of interpersonal connections is one of the possible heterogeneous factors which affect the income distribution emerging from micro-to-macro economic models. In this paper we equip our model discussed in [1, 2] with a network structure. The model is based on a system of n differential equations of the kinetic discretized-Boltzmann kind. The network structure is incorporated in a probabilistic way, through the introduction of a link density P(α) and of correlation coefficients P(β|α), which give the conditional probability that an individual with α links is connected to one with β links. We study the properties of the equations and give analytical results concerning the existence, normalization and positivity of the solutions. For a fixed network with P(α) = c/α^q, we investigate numerically the dependence of the detailed and marginal equilibrium distributions on the initial conditions and on the exponent q. Our results are compatible with those obtained from the Bouchaud-Mezard model and from agent-based simulations, and provide additional information about the dependence of the individual income on the level of connectivity.
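    A short sketch of the network ingredients: normalizing the truncated power-law link density on α = 1..n, plus the standard uncorrelated-network conditional P(β|α) = βP(β)/⟨k⟩, used here only as a consistency check (the paper also treats correlated cases):

```python
# Normalize a truncated power-law degree distribution and verify that the
# uncorrelated-network conditional distribution sums to 1.
import numpy as np

n, q = 100, 2.2
alpha = np.arange(1, n + 1).astype(float)
c = 1.0 / np.sum(alpha ** (-q))        # normalization constant for P(alpha)
P = c * alpha ** (-q)

mean_k = np.sum(alpha * P)             # <k>, the mean number of links
P_beta_given_alpha = alpha * P / mean_k  # independent of alpha when uncorrelated

print("sum P(alpha)      =", P.sum().round(12))
print("sum P(beta|alpha) =", P_beta_given_alpha.sum().round(12))
```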

  15. Some practical aspects of prestack waveform inversion using a genetic algorithm: An example from the east Texas Woodbine gas sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mallick, S.

    1999-03-01

    In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte-Carlo type inversion, using a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information of the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models which best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of prestack inversion with poststack inversion demonstrates that prestack inversion shows detailed stratigraphic features of the subsurface which are not visible on the poststack inversion.
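    A minimal one-parameter GA sketch in the spirit described above: evolve a population against a data misfit and histogram the sampled models as a crude marginal PPD. This is a toy exponential-decay problem, not a prestack waveform inversion:

```python
# Toy genetic-algorithm inversion: tournament selection, arithmetic crossover,
# Gaussian mutation; late-generation samples approximate a marginal PPD.
import numpy as np

rng = np.random.default_rng(11)
true_v = 2.4                                   # "true" model parameter
offsets = np.linspace(0.0, 1.0, 20)
observed = np.exp(-offsets * true_v) + rng.normal(0, 0.01, offsets.size)

def forward(v):                                # toy forward model
    return np.exp(-offsets * v)

def fitness(v):                                # higher is better
    return -np.sum((forward(v) - observed) ** 2)

pop = rng.uniform(1.0, 4.0, 50)
kept = []
for gen in range(40):
    f = np.array([fitness(v) for v in pop])
    i, j = rng.integers(0, pop.size, (2, pop.size))   # tournament selection
    parents = np.where(f[i] > f[j], pop[i], pop[j])
    partners = rng.permutation(parents)               # arithmetic crossover
    w = rng.uniform(0, 1, pop.size)
    pop = w * parents + (1 - w) * partners + rng.normal(0, 0.05, pop.size)
    kept.append(pop.copy())

samples = np.concatenate(kept[20:])            # discard early generations
print(f"estimate ~ {samples.mean():.3f} +/- {samples.std():.3f} (true {true_v})")
```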

  16. Geological Mapping of Pluto and Charon Using New Horizons Data

    NASA Astrophysics Data System (ADS)

    Moore, J. M.; Spencer, J. R.; McKinnon, W. B.; Howard, A. D.; White, O. M.; Umurhan, O. M.; Schenk, P. M.; Beyer, R. A.; Singer, K.; Stern, S. A.; Weaver, H. A.; Young, L. A.; Ennico Smith, K.; Olkin, C.; New Horizons Geology and Geophysics Imaging Team

    2016-06-01

    Pluto and Charon exhibit strikingly different surface appearances, despite their similar densities and presumed bulk compositions. Systematic mapping has revealed that much of Pluto's surface can be attributed to surface-atmosphere interactions and the mobilization of volatile ices by insolation. Many mapped valley systems appear to be the consequence of glaciation involving nitrogen ice. Other geological activity requires or required internal heating. The convection and advection of volatile ices in Sputnik Planum can be powered by present-day radiogenic heat loss. On the other hand, the prominent mountains at the western margin of Sputnik Planum, and the strange, multi-km-high mound features to the south, probably composed of H2O, are young geologically as inferred by light cratering and superposition relationships. Their origin, and what drove their formation so late in Solar System history, is under investigation. The dynamic remolding of landscapes by volatile transport seen on Pluto is not unambiguously evident in the mapping of Charon. Charon does, however, display a large resurfaced plain and globally engirdling extensional tectonic network attesting to its early endogenic vigor.

  17. The Geology of Pluto and Charon as Revealed by New Horizons

    NASA Astrophysics Data System (ADS)

    Moore, Jeffrey M.; Spencer, John R.; McKinnon, William B.; Stern, S. Alan; Young, Leslie A.; Weaver, Harold A.; Olkin, Cathy B.; Ennico, Kim; New Horizons GGI Team

    2016-04-01

    NASA's New Horizons spacecraft has revealed that Pluto and Charon exhibit strikingly different surface appearances, despite their similar densities and presumed bulk compositions. Much of Pluto's surface can be attributed to surface-atmosphere interactions and the mobilization of volatile ices by insolation. Many valley systems appear to be the consequence of glaciation involving nitrogen ice. Other geological activity requires or required internal heating. The convection and advection of volatile ices in Sputnik Planum can be powered by present-day radiogenic heat loss. On the other hand, the prominent mountains at the western margin of Sputnik Planum, and the strange, multi-km-high mound features to the south, probably composed of H2O, are young geologically as inferred by light cratering and superposition relationships. Their origin, and what drove their formation so late in Solar System history, is under investigation. The dynamic remolding of landscapes by volatile transport seen on Pluto is not unambiguously evident on Charon. Charon does, however, display a large resurfaced plain and globally engirdling extensional tectonic network attesting to its early endogenic vigor.

  18. Distribution of total and fecal coliform organisms from septic effluent in selected coastal plain soils.

    PubMed Central

    Reneau, R B; Pettry, D E; Shanholtz, M I; Graham, S A; Weston, C W

    1977-01-01

    Distribution of total and fecal coliform bacteria in three Atlantic coastal plain soils in Virginia was monitored in situ over a 3-year period. The soils studied were Varina, Goldsboro, and Beltsville sandy loams. These and similar soils are found extensively along the populous Atlantic seaboard of the United States. They are considered only marginally suitable for septic tank installation because the restricting soil layers result in the subsequent development of seasonal perched water tables. To determine both horizontal and vertical movement of indicator organisms, samples were collected from piezometers placed at selected distances and depths from the drainfields in the direction of the groundwater flow. Large reductions in total and fecal coliform bacteria were noted in the perched groundwaters above the restricting layers as distance from the drainfield increased. These restricting soil layers appear to be effective barriers to the vertical movement of indicator organisms. The reduction in the density of the coliform bacteria above the restricting soil layers can probably be attributed to dilution, filtration, and die-off as the bacteria move through the natural soil systems. PMID:325589

  19. The Geology of Pluto and Charon as Revealed by New Horizons

    NASA Technical Reports Server (NTRS)

    Moore, Jeffrey M.; Spencer, John R.; McKinnon, William B.; Stern, S. Alan; Young, Leslie A.; Weaver, Harold A.; Olkin, Cathy B.; Ennico, Kim

    2016-01-01

    NASA's New Horizons spacecraft has revealed that Pluto and Charon exhibit strikingly different surface appearances, despite their similar densities and presumed bulk compositions. Much of Pluto's surface can be attributed to surface-atmosphere interactions and the mobilization of volatile ices by insolation. Many valley systems appear to be the consequence of glaciation involving nitrogen ice. Other geological activity requires or required internal heating. The convection and advection of volatile ices in Sputnik Planum can be powered by present-day radiogenic heat loss. On the other hand, the prominent mountains at the western margin of Sputnik Planum, and the strange, multi-km-high mound features to the south, probably composed of H2O, are young geologically as inferred by light cratering and superposition relationships. Their origin, and what drove their formation so late in Solar System history, is under investigation. The dynamic remolding of landscapes by volatile transport seen on Pluto is not unambiguously evident on Charon. Charon does, however, display a large resurfaced plain and globally engirdling extensional tectonic network attesting to its early endogenic vigor.

  20. On land-use modeling: A treatise of satellite imagery data and misclassification error

    NASA Astrophysics Data System (ADS)

    Sandler, Austin M.

    Recent availability of satellite-based land-use data sets, including data sets with contiguous spatial coverage over large areas, relatively long temporal coverage, and fine-scale land cover classifications, is providing new opportunities for land-use research. However, care must be used when working with these datasets due to misclassification error, which causes inconsistent parameter estimates in the discrete choice models typically used to model land-use. I therefore adapt the empirical correction methods developed for other contexts (e.g., epidemiology) so that they can be applied to land-use modeling. I then use a Monte Carlo simulation, and an empirical application using actual satellite imagery data from the Northern Great Plains, to compare the results of a traditional model ignoring misclassification to those from models accounting for misclassification. Results from both the simulation and application indicate that ignoring misclassification will lead to biased results. Even seemingly insignificant levels of misclassification error (e.g., 1%) result in biased parameter estimates, which alter marginal effects enough to affect policy inference. At the levels of misclassification typical in current satellite imagery datasets (e.g., as high as 35%), ignoring misclassification can lead to systematically erroneous land-use probabilities and substantially biased marginal effects. The correction methods I propose, however, generate consistent parameter estimates and therefore consistent estimates of marginal effects and predicted land-use probabilities.
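    The core of such corrections is modifying the likelihood so the observed-outcome probability mixes the latent choice probability with known (or estimated) misclassification rates, Pr(y* = 1 | x) = a0 + (1 − a0 − a1)F(x′β). A synthetic-data sketch (rates and coefficients are invented, and the rates are treated as known):

```python
# Misclassification-corrected binary logit via maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(8)
n = 5000
x = rng.normal(size=n)
b0, b1 = -0.5, 1.2
y_true = rng.uniform(size=n) < expit(b0 + b1 * x)

a0, a1 = 0.05, 0.10                      # false-positive / false-negative rates
flip = np.where(y_true, rng.uniform(size=n) < a1, rng.uniform(size=n) < a0)
y_obs = np.where(flip, ~y_true, y_true).astype(float)

def negloglik(beta):
    p_latent = expit(beta[0] + beta[1] * x)
    p_obs = a0 + (1 - a0 - a1) * p_latent   # observed-outcome probability
    return -np.sum(y_obs * np.log(p_obs) + (1 - y_obs) * np.log(1 - p_obs))

fit = minimize(negloglik, x0=[0.0, 0.0], method="BFGS")
print("corrected estimates:", fit.x.round(3), "(true:", (b0, b1), ")")
```

    Fitting a plain logit to `y_obs` instead attenuates the slope, which is the bias in marginal effects and predicted land-use probabilities the abstract describes.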

  1. Fast Episodes of West-Mediterranean-Tyrrhenian Oceanic Opening and Revisited Relations with Tectonic Setting

    PubMed Central

    Savelli, Carlo

    2015-01-01

    Extension and calc-alkaline volcanism of the submerged orogen of alpine age (OAA) initiated in the Early Oligocene (~33/32 Ma) and reached the stage of oceanic opening in the Early Miocene (Burdigalian), Late Miocene and Late Pliocene. In the Burdigalian (~20–16 Ma) period of widespread calc-alkaline volcanism on the margins of the oceanic domain, seafloor spreading originated the deep basins of north Algeria (western part of the OAA) and Sardinia/Provence (European margin). Conversely, when conjugate-margin volcanism was absent or scarce, seafloor spreading formed the plains of Vavilov (7.5–6.3 Ma) and Marsili (1.87–1.67 Ma) within the OAA eastern part (Tyrrhenian Sea). The contrast between the occurrence and the lack of margin igneous activity probably implies a diversity of geotectonic settings at the times of oceanization. It appears that the Burdigalian calc-alkaline volcanism on the continental margins developed in the absence of subduction. The WNW-directed subduction of the African plate probably commenced at ~16/15 Ma (waning Burdigalian seafloor spreading) after ~18/16 Ma of rifting. Space-time features indicate that calc-alkaline volcanism is not linked only to subduction. From this view, a temporal gap would exist between the steep subduction beneath the Apennines and the previous flat-type plunge of the European plate in the opposite direction that produced the OAA accretion and double vergence. PMID:26391973

  2. Negative Stress Margins - Are They Real?

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Lee, Darlene S.; Mohaghegh, Michael

    2011-01-01

    Advances in modeling and simulation, new finite element software, modeling engines and powerful computers are providing opportunities to interrogate designs in a very different manner and in more detail than ever before. Margins of safety are also often evaluated quickly from local stresses for various design concepts and design parameters once analysis models are defined and developed. This paper suggests that not all of the negative margins of safety so evaluated are real. The structural areas where negative margins are frequently encountered are near stress concentrations, point loads and load discontinuities; near stress singularities; in areas with large gradients but insufficient mesh density; in areas with modeling issues and errors; and in areas with connections and interfaces, two-dimensional (2D) to three-dimensional (3D) transitions, bolts and bolt modeling, and boundary conditions. Now, more than ever, structural analysts need to examine and interrogate their analysis results and perform basic sanity checks to determine whether these negative margins are real.
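    For reference, the quantity at stake, in the usual aerospace convention (shown generically; the paper does not commit to one formula):

```latex
% Margin of safety: allowable stress S_a, applied stress S, factor of safety FS.
\[
  \mathrm{MS} = \frac{S_a}{FS \cdot S} - 1
\]
% MS < 0 whenever the factored applied stress exceeds the allowable; a spurious
% local peak in S (a singularity, an under-meshed gradient, an idealized point
% load) can therefore drive MS negative with no physical failure implied.
```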

  3. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi, Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented with lowering and raising operators. The lowering and raising operators can be obtained from the relationship between the original Hamiltonian and the superpotential. In this paper, SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl-Teller potential. The wave function and probability density graphs are simulated using the Delphi 7.0 programming language. Finally, expectation values of quantum mechanical operators can be calculated analytically in integral form or from the probability density graph produced by the program.
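    A Python stand-in for the kind of simulation described (the paper uses Delphi 7.0). For the modified Poschl-Teller well V(x) = −λ(λ+1)sech²(x)/2 with ħ = m = 1, the SUSY factorization with superpotential W = λ tanh(x) gives the closed-form ground state ψ₀ ∝ sech^λ(x) and E₀ = −λ²/2; here ψ₀ is normalized numerically and used to evaluate an expectation value from the probability density:

```python
# Ground-state probability density of the modified Poschl-Teller well and a
# numerical expectation value, mirroring the paper's integral-form evaluation.
import numpy as np

lam = 2.0                                   # potential strength parameter
x = np.linspace(-8, 8, 2001)
dx = x[1] - x[0]

psi = np.cosh(x) ** (-lam)                  # unnormalized ground state sech^lam(x)
psi /= np.sqrt(np.sum(psi**2) * dx)         # numerical normalization

density = psi**2                            # probability density |psi_0(x)|^2
x2 = np.sum(x**2 * density) * dx            # <x^2> from the density
print("normalization:", round(float(np.sum(density) * dx), 6), " <x^2> =", round(float(x2), 4))
```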

  4. Edge Effects along a Seagrass Margin Result in an Increased Grazing Risk on Posidonia australis Transplants.

    PubMed

    Statton, John; Gustin-Craig, Samuel; Dixon, Kingsley W; Kendrick, Gary A

    2015-01-01

    A key issue in habitat restoration is the change in ecological processes that occurs when fragments of habitat are lost, resulting in the persistence of habitat-degraded margins. Margins often create or enhance opportunities for negative plant-herbivore interactions, preventing natural or assisted re-establishment of native vegetation into the degraded area. However, at some distance from the habitat margin these negative interactions may relax. Here, we posit that the intensity of species interactions in a fragmented Posidonia australis seagrass meadow may be spatially dependent on proximity to the seagrass habitat edge, whereby the risk of grazing is high and the probability of survival of seagrass transplants is low. To test this, transplants were planted 2 m within the meadow, on the meadow edge at 0 m, and at 2 m, 10 m, 30 m, 50 m and 100 m from the edge of the seagrass meadow into the unvegetated sand sheet. Grazing risk was enhanced 0-10 m from the edge but decreased sharply at greater distances (>30 m). Yet the risk of grazing was minimal inside the seagrass meadow, indicating that grazers may use the meadow for refuge but do not actively graze within it. The relationship between short-term herbivory risk and long-term survival was not straightforward, suggesting that other environmental filters also affect the survival of P. australis transplants within the study area. We found that the daily probability of herbivory was predictable and operated over a small spatial scale at the edge of a large, intact seagrass meadow. These findings highlight that the risk from herbivory can be high and a potential factor limiting seagrass establishment in restoration programs.

  5. Edge Effects along a Seagrass Margin Result in an Increased Grazing Risk on Posidonia australis Transplants

    PubMed Central

    Statton, John; Gustin-Craig, Samuel; Dixon, Kingsley W.; Kendrick, Gary A.

    2015-01-01

    A key issue in habitat restoration is the change in ecological processes that occurs when fragments of habitat are lost, resulting in the persistence of habitat-degraded margins. Margins often create or enhance opportunities for negative plant-herbivore interactions, preventing natural or assisted re-establishment of native vegetation into the degraded area. However, at some distance from the habitat margin these negative interactions may relax. Here, we posit that the intensity of species interactions in a fragmented Posidonia australis seagrass meadow may be spatially dependent on proximity to the seagrass habitat edge, whereby the risk of grazing is high and the probability of survival of seagrass transplants is low. To test this, transplants were planted 2 m within the meadow, on the meadow edge at 0 m, and at 2 m, 10 m, 30 m, 50 m and 100 m from the edge of the seagrass meadow into the unvegetated sand sheet. Grazing risk was enhanced 0-10 m from the edge but decreased sharply at greater distances (>30 m). Yet the risk of grazing was minimal inside the seagrass meadow, indicating that grazers may use the meadow for refuge but do not actively graze within it. The relationship between short-term herbivory risk and long-term survival was not straightforward, suggesting that other environmental filters also affect the survival of P. australis transplants within the study area. We found that the daily probability of herbivory was predictable and operated over a small spatial scale at the edge of a large, intact seagrass meadow. These findings highlight that the risk from herbivory can be high and a potential factor limiting seagrass establishment in restoration programs. PMID:26465926

  6. Productivity, biomass partitioning, and energy yield of low-input short-rotation American sycamore (Platanus occidentalis L.) grown on marginal land: Effects of planting density and simulated drought

    Treesearch

    Jean-Christophe Domec; Elissa Ashley; Milan Fischer; Asko Noormets; Jameson Boone; James C. Williamson; John S. King

    2017-01-01

    Short-rotation woody crops (SRWC) grown for bioenergy production are considered a more sustainable feedstock than food crops such as corn and soybean. However, to be sustainable SRWC should be deployed on land not suitable for agriculture (e.g., marginal lands). Here we quantified productivity and energy yield of four SRWC candidate species grown at different planting...

  7. [Intraoperative radiological control of the surgical specimen in nonpalpable breast lesions].

    PubMed

    Ruvalcaba Limón, Eva; Espejo Fonseca, Ruby; Bautista Piña, Verónica; Madero Preciado, Luis; Capurso Garcia, Marino; Serratos Garduño, José Eduardo; Hohenstein, Fernando Guisa; Rodríguez Cuevas, Sergio

    2009-09-01

    Nonpalpable breast lesions are frequently found in breast cancer screening programs, and stereotactic or ultrasound-guided wire localization is required for their excision. Intraoperative radiographic control of the surgical specimen is essential for evaluating the margins of breast cancers. OBJECTIVE: to determine the effectiveness of intraoperative radiographic control of the surgical specimen in nonpalpable breast lesions in reducing surgical reinterventions to widen margins. METHODS: women with nonpalpable breast lesions underwent excisional biopsy after wire localization, with intraoperative radiographic control of the specimen to assess margins (a margin of 10 mm or more was considered adequate; less than 10 mm, inadequate). Intraoperative re-excision was performed when radiological margins were inadequate. Demographic characteristics, mammographic and ultrasound findings, lesion histopathology, and the radiological-histopathological correlation of the margins were studied. Cross-sectional, prospective and descriptive study. RESULTS: 103 patients with 113 nonpalpable breast lesions were included, with a mean age of 51.35 (range 32-73) years. Intraoperative radiographic control of the surgical specimen was performed for all lesions. The prevalence of breast cancer was 28.3% (32/113), corresponding to stellate images (42.8%), microcalcifications with density (39.2%), microcalcifications (31.2%) and nodules (20%). Of the 32 cancers, 16 had inadequate radiological margins requiring intraoperative re-excision; adequate histopathological margins were obtained in 100% (16/16). Of the 16 cancers without intraoperative re-excision (adequate radiological margins), 62.5% had adequate histopathological margins and 37.5% (6/16) had inadequate margins requiring surgical reintervention to control the margins. The discrepancy between radiological and histopathological margins was related to microcalcifications in 83.3% of the lesions. CONCLUSIONS: intraoperative radiographic control of the surgical specimen is effective for evaluating margins; intraoperative re-excision converted inadequate margins to adequate ones in 50% (16/32) of the cancers, and only 18.7% (6/32) of all cases required another surgery to control the margins.

  8. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
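    FGS itself is a specialized fast Gibbs scheme, but the underlying Bayesian sampling idea can be sketched with a generic Metropolis sampler under the same independent-Gaussian-error assumption. The two-parameter forward model and data below are invented for illustration:

```python
# Generic Metropolis sampling of a posterior, reading off marginal means and
# standard deviations as parameter uncertainty estimates.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
true = np.array([1.5, 0.4])                  # hypothetical "geoacoustic" parameters
data = true[0] * np.exp(-true[1] * x) + rng.normal(0, 0.05, x.size)
sigma = 0.05                                 # assumed known data standard deviation

def log_post(m):
    if np.any(m < 0) or np.any(m > 5):       # uniform prior bounds
        return -np.inf
    r = data - m[0] * np.exp(-m[1] * x)
    return -0.5 * np.sum((r / sigma) ** 2)   # Gaussian likelihood, flat prior

m = np.array([1.0, 1.0])
chain = []
for _ in range(20000):
    prop = m + rng.normal(0, 0.05, 2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop
    chain.append(m)
chain = np.array(chain[5000:])               # drop burn-in
print("posterior means:", chain.mean(axis=0).round(3),
      " stds:", chain.std(axis=0).round(3))
```

    Histograms of each chain column are the marginal probability distributions whose reliability the paper validates against repeated data sets, synthetic tests and ground truth.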

  9. A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses

    ERIC Educational Resources Information Center

    Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini

    2012-01-01

    The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to account for the interdependencies of the multivariate…
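    The generic form of the pairwise composite log-likelihood such approaches rely on (notation generic, not the paper's):

```latex
% Pairwise (bivariate) composite log-likelihood for a T-dimensional ordinal
% response vector y = (y_1, ..., y_T):
\[
  c\ell(\theta; y) = \sum_{t=1}^{T-1} \sum_{s=t+1}^{T}
    \log \Pr\!\left(Y_t = y_t,\, Y_s = y_s;\ \theta\right)
\]
% This replaces the T-variate marginal probability of the full likelihood with
% bivariate margins requiring only two-dimensional latent-variable integrals.
```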

  10. Outdoor Recreation Constraints: An Examination of Race, Gender, and Rural Dwelling

    Treesearch

    Cassandra Y. Johnson; J. Michael Bowker; H. Ken Cordell

    2001-01-01

    We assess whether traditionally marginalized groups in American society (African-Americans, women, rural dwellers) perceive more constraints to outdoor recreation participation than other groups. A series of logistic regressions are applied to a national recreation survey and used to model the probability that individuals perceive certain constraints to...

  11. New Approach to Total Dose Specification for Spacecraft Electronics

    NASA Technical Reports Server (NTRS)

    Xapsos, Michael

    2017-01-01

    Variability of the space radiation environment is investigated with regard to total dose specification for spacecraft electronics. It is shown to have a significant impact. A new approach is developed for total dose requirements that replaces the radiation design margin concept with failure probability during a mission.

  12. A primer on marginal effects-part II: health services research applications.

    PubMed

    Onukwugha, E; Bergtold, J; Jain, R

    2015-02-01

    Marginal analysis evaluates changes in a regression function associated with a unit change in a relevant variable. The primary statistic of marginal analysis is the marginal effect (ME). The ME facilitates the examination of outcomes for defined patient profiles or individuals while measuring the change in original units (e.g., costs, probabilities). The ME has a long history in economics; however, it is not widely used in health services research despite its flexibility and ability to provide unique insights. This article, the second in a two-part series, discusses practical issues that arise in the estimation and interpretation of the ME for a variety of regression models often used in health services research. Part one provided an overview of prior studies discussing ME followed by derivation of ME formulas for various regression models relevant for health services research studies examining costs and utilization. The current article illustrates the calculation and interpretation of ME in practice and discusses practical issues that arise during the implementation, including: understanding differences between software packages in terms of functionality available for calculating the ME and its confidence interval, interpretation of average marginal effect versus marginal effect at the mean, and the difference between ME and relative effects (e.g., odds ratio). Programming code to calculate ME using SAS, STATA, LIMDEP, and MATLAB is also provided. The illustration, discussion, and application of ME in this two-part series support the conduct of future studies applying the concept of marginal analysis.
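
    The average marginal effect (AME) and the marginal effect at the mean (MEM) contrasted in the article can be computed by hand for a logistic regression: for a continuous regressor x_k the ME at a point is beta_k * p * (1 - p), and the AME averages this over the sample. A minimal Python sketch with simulated data (the variables and coefficients are invented; the article's own code is in SAS, STATA, LIMDEP, and MATLAB):

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Simulated data: outcome depends on age (continuous) and treatment (binary).
      n = 2000
      age = rng.normal(50.0, 10.0, n)
      treat = rng.integers(0, 2, n).astype(float)
      X = np.column_stack([np.ones(n), age, treat])
      p_true = 1.0 / (1.0 + np.exp(-(X @ np.array([-4.0, 0.06, 0.8]))))
      y = (rng.uniform(size=n) < p_true).astype(float)

      def negloglik(b):
          eta = X @ b
          return np.sum(np.logaddexp(0.0, eta) - y * eta)   # logistic NLL

      beta = minimize(negloglik, np.zeros(3), method="BFGS").x
      phat = 1.0 / (1.0 + np.exp(-(X @ beta)))

      # AME of age: average of beta_age * p_i * (1 - p_i) over the sample.
      ame_age = np.mean(beta[1] * phat * (1.0 - phat))
      # MEM of age: the same expression evaluated at the covariate means.
      pbar = 1.0 / (1.0 + np.exp(-(X.mean(axis=0) @ beta)))
      mem_age = beta[1] * pbar * (1.0 - pbar)
      # For the binary regressor, use a discrete change in predicted probability.
      X1, X0 = X.copy(), X.copy()
      X1[:, 2], X0[:, 2] = 1.0, 0.0
      ame_treat = np.mean(1.0 / (1.0 + np.exp(-(X1 @ beta)))
                          - 1.0 / (1.0 + np.exp(-(X0 @ beta))))

      print(f"AME(age) = {ame_age:.4f}, MEM(age) = {mem_age:.4f}, "
            f"AME(treat) = {ame_treat:.4f}")

    The AME and MEM differ whenever the link function is nonlinear, which is exactly the software-dependent subtlety the abstract warns about.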

  13. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based on the expectation-maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class likelihoods. The mixture model makes it possible to represent heterogeneous thematic classes that cannot be adequately fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates for each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
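
    A stripped-down sketch of the E and M steps for such a mixture, using the complex Wishart log-density up to Sigma-independent terms, ln p(C|Sigma) = -L(ln|Sigma| + tr(Sigma^-1 C)) + const. The GΓD-based initialization of the paper is replaced here by a crude guess, and the dimensions, look number, and class covariances are invented:

      import numpy as np

      rng = np.random.default_rng(3)
      d, looks, n_per = 3, 9, 300       # polarimetric dimension, looks, samples/class

      def sample_cov(Sigma):
          # Multilook sample covariance C = (1/L) sum z z^H with z ~ CN(0, Sigma).
          A = np.linalg.cholesky(Sigma)
          z = A @ ((rng.normal(size=(d, looks)) + 1j * rng.normal(size=(d, looks)))
                   / np.sqrt(2.0))
          return (z @ z.conj().T) / looks

      S1 = np.eye(d)
      S2 = np.eye(d) + 0.6 * (np.ones((d, d)) - np.eye(d))
      C = np.array([sample_cov(S1) for _ in range(n_per)]
                   + [sample_cov(S2) for _ in range(n_per)])

      def logdens(C, Sigma):
          # Complex Wishart log density up to terms independent of Sigma.
          Sinv = np.linalg.inv(Sigma)
          sign, logdet = np.linalg.slogdet(Sigma)
          tr = np.einsum('ij,nji->n', Sinv, C).real
          return -looks * (logdet.real + tr)

      K = 2
      pi = np.full(K, 1.0 / K)
      Sig = np.stack([np.eye(d, dtype=complex) * (1.0 + 0.5 * k) for k in range(K)])

      for _ in range(50):                              # EM iterations
          ll = np.stack([np.log(pi[k]) + logdens(C, Sig[k]) for k in range(K)], axis=1)
          ll -= ll.max(axis=1, keepdims=True)
          g = np.exp(ll)
          g /= g.sum(axis=1, keepdims=True)            # E-step responsibilities
          for k in range(K):                           # M-step: weighted mean covariance
              Sig[k] = np.einsum('n,nij->ij', g[:, k], C) / g[:, k].sum()
          pi = g.mean(axis=0)

      print("estimated mixing weights:", np.round(pi, 3))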

  14. Dosimetric evaluation of planning target volume margin reduction for prostate cancer via image-guided intensity-modulated radiation therapy

    NASA Astrophysics Data System (ADS)

    Hwang, Taejin; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-07-01

    The aim of this study was to quantitatively estimate the dosimetric benefits of the image-guided radiation therapy (IGRT) system for prostate intensity-modulated radiation therapy (IMRT) delivery. The cases of eleven patients who underwent IMRT for prostate cancer without a prostatectomy at our institution between October 2012 and April 2014 were retrospectively analyzed. For every patient, uniform clinical target volume (CTV) to planning target volume (PTV) margins were used: 3 mm, 5 mm, 7 mm, 10 mm, 12 mm, and 15 mm. For each margin size, the IMRT plans were independently optimized by one medical physicist using Pinnacle3 (ver. 8.0.d, Philips Medical System, Madison, WI) in order to maintain the plan quality. The maximum geometrical margin (MGM) for every CT image set, defined as the smallest margin encompassing the rectum in at least one slice, was between 13 mm and 26 mm. The percentage of the rectum overlapping the PTV (%V_ROV), the rectal normal tissue complication probability (NTCP), and the mean rectal dose (%RD_mean) increased in proportion to the PTV margin. The bladder NTCP remained near zero regardless of the PTV margin, whereas the percentage of the bladder overlapping the PTV (%V_BOV) and the mean bladder dose (%BD_mean) increased in proportion to the PTV margin. For patients without a relatively large rectum or small bladder, the increases in rectal NTCP, %RD_mean, and %BD_mean per 1 mm of PTV margin were 1.84%, 2.44%, and 2.90%, respectively. Unlike the rectum and the bladder, the maximum dose to each femoral head was little affected by the PTV margin. This quantitative study of PTV margin reduction supports the view that IG-IMRT enhances clinical outcomes in prostate cancer by reducing normal-organ complications while maintaining a similar level of PTV coverage.

  15. On the inequivalence of the CH and CHSH inequalities due to finite statistics

    NASA Astrophysics Data System (ADS)

    Renou, M. O.; Rosset, D.; Martin, A.; Gisin, N.

    2017-06-01

    Different variants of a Bell inequality, such as CHSH and CH, are known to be equivalent when evaluated on nonsignaling outcome probability distributions. However, in experimental setups, the outcome probability distributions are estimated using a finite number of samples. Therefore the nonsignaling conditions are only approximately satisfied and the robustness of the violation depends on the chosen inequality variant. We explain that phenomenon using the decomposition of the space of outcome probability distributions under the action of the symmetry group of the scenario, and propose a method to optimize the statistical robustness of a Bell inequality. In the process, we describe the finite group composed of relabeling of parties, measurement settings and outcomes, and identify correspondences between the irreducible representations of this group and properties of outcome probability distributions such as normalization, signaling or having uniform marginals.

  16. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI, many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, along with some of its problems and the conditions under which it fails. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments: one gets posterior probabilities for the Lagrange multipliers and, finally, one can put error bars on the resulting estimated density function.
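
    The method-of-moments step can be written as a small convex optimization: minimizing the dual log Z(lambda) - lambda . mu over Lagrange multipliers lambda yields the maximum entropy density p(x) proportional to exp(sum_k lambda_k x^k) matching the sample moments mu. A minimal Python sketch on a bounded interval (the grid quadrature and the Beta-distributed sample are arbitrary choices, not anything from the paper):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import logsumexp

      rng = np.random.default_rng(7)
      sample = rng.beta(2.0, 5.0, size=5000)    # stand-in for observed intensities
      K = 4                                      # number of moment constraints
      mu = np.array([np.mean(sample ** k) for k in range(1, K + 1)])

      x = np.linspace(0.0, 1.0, 2001)            # quadrature grid on the support
      dx = x[1] - x[0]
      F = np.stack([x ** k for k in range(1, K + 1)])   # features f_k(x) = x^k

      def dual(lam):
          # Convex dual of the maxent problem: log Z(lam) - lam . mu.
          # Its gradient is E_lam[f] - mu, so the minimum matches the moments.
          logp = lam @ F
          logZ = logsumexp(logp) + np.log(dx)
          return logZ - lam @ mu

      lam = minimize(dual, np.zeros(K), method="BFGS").x
      p = np.exp(lam @ F)
      p /= p.sum() * dx                          # normalized maxent density

      for k in range(1, K + 1):
          print(f"moment {k}: sample {mu[k-1]:.4f}, "
                f"maxent fit {np.sum(x ** k * p) * dx:.4f}")

    The Bayesian extension the abstract describes would place a posterior over lam rather than using this single point estimate.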

  17. 3D absolute hypocentral determination - 13 years of seismicity in Ecuadorian subduction zone

    NASA Astrophysics Data System (ADS)

    Font, Yvonne; Segovia, Monica; Theunissen, Thomas

    2010-05-01

    In Ecuador, the Nazca plate subducts beneath the North Andean Block. During the last century, this subduction triggered four major earthquakes of magnitude greater than 7.7. Between 1994 and 2007, the Geophysical Institute (Escuela National Politecnica, Quito) recorded about 40,000 events in the whole of Ecuador, ranging from Mb 1.5 to 6.9. Unfortunately, the local network has a large density contrast between the coastal and Andean regions, where numerous stations were installed to survey volcanic activity. Consequently, seismicity in and around the interplate seismogenic zone - the source of the most destructive earthquakes and tsunamis - is not well constrained. This study aims to improve the locations of 13 years of seismicity recorded during an interseismic period, in order to better delineate the seismic deformation and gaps. The first step is the construction of a 3D "georealistic" velocity model. Because local tomography cannot provide a satisfactory model, we combined all local crustal/lithospheric information on the geometry and velocity properties of the different geological units. This information covers the oceanic Nazca plate and its sedimentary cover; the dip angle of the subducting plate; the North Andean Block margin, composed of accreted oceanic plateaus (the Moho depth is approximated using gravity modeling); the metamorphic volcanic chain (oceanic in nature for the occidental cordillera and inter-Andean valley, continental for the oriental cordillera); and the continental Guyana shield and sedimentary basins. The resulting 3D velocity model extends from 2°N to 6.5°S and 277°E to 283°E and reaches a depth of 300 km. It is discretized into constant-velocity blocks of 12 x 12 x 3 km in x, y and z, respectively. The second step is the selection of an adequate subset of seismic stations, to correct for the station-density imbalance between the coastal and volcanic regions. We therefore keep only the most representative volcanic stations in terms of azimuthal coverage, record frequency, and signal quality. We then define five domains: offshore/coast, North Andean margin, volcanic chain, southern Ecuador, and a domain deeper than 50 km. We process an earthquake location only if at least three proximal stations exist in the event's domain. This data selection yields locations of consistent quality. The third step is the improvement of the 3D MAXI technique, which is well suited to absolute earthquake location in velocity models with strong lateral Vp heterogeneities. The resulting catalogue clarifies the deformation in the subduction system. Seismicity previously located seaward of the trench in fact occurs between the trench and the coastal range. South of 0°, facing the subducting Carnegie Ridge, the seismicity aligns along the interplate seismogenic zone between an updip limit shallower than ~8 km and a downdip limit that reaches up to 50 km depth. The active seismogenic zone is interrupted by a gap that extends right beneath the coastal range. At these latitudes, diffuse intraplate deformation also affects the subducting plate, probably induced by flexure of the locally thickened lithosphere. Between the trench and the coast, the earthquake distribution clearly defines a gap whose size is comparable to the 1942 M7.9 asperity (an ellipse with axes of ~55/35 km). The slab is clearly defined and dips at around 25-30°. The slab seismicity is systematically interrupted between 100 and 170 km depth, approximately beneath the volcanic chain. North of 0°, i.e., in the megathrust earthquake domain, the interseismic activity is clearly reduced. The interplate seismicity appears to gather along alignments perpendicular to the trench, probably attesting to segmentation of the margin. The North Andean overriding margin is undergoing active deformation, especially where the strike of the Andean Chain changes direction. At these latitudes, no earthquake occurs deeper than 100 km.

  18. Car accidents induced by a bottleneck

    NASA Astrophysics Data System (ADS)

    Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid

    2017-12-01

    Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents (Pac) occurring at the entrance of the merging section of two roads (i.e., a junction). The simulation results show that the existence of non-cooperative drivers plays a chief role: it increases the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb raises Pac at low densities, whereas it improves road safety at high densities. The phase diagram of the system is also constructed.
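
    The following Python sketch implements a plain single-lane Nagel-Schreckenberg ring (no junction) and counts "dangerous situations" under a Boccara-style criterion (the leader was moving, stops abruptly, and the follower is within vmax cells) as a proxy for accident probability; the junction geometry and non-cooperative drivers of the paper are not modeled, and all parameter values are invented:

      import numpy as np

      rng = np.random.default_rng(5)

      def ns_step(pos, vel, L, vmax, p_brake):
          # One parallel NS update on a ring; cars stay in cyclic order,
          # so index i+1 is always the car ahead of index i.
          gaps = (np.roll(pos, -1) - pos - 1) % L
          vel = np.minimum(np.minimum(vel + 1, vmax), gaps)   # accelerate, no collision
          vel = np.maximum(vel - (rng.uniform(size=vel.size) < p_brake), 0)
          return (pos + vel) % L, vel

      def dangerous_fraction(density, L=1000, vmax=5, p_brake=0.3, steps=2000):
          n = max(int(density * L), 2)
          pos = np.sort(rng.choice(L, n, replace=False))
          vel = rng.integers(0, vmax + 1, n)
          count, measured = 0, 0
          for t in range(steps):
              v_old = vel.copy()
              pos, vel = ns_step(pos, vel, L, vmax, p_brake)
              if t >= steps // 2:                  # discard the transient
                  gaps = (np.roll(pos, -1) - pos - 1) % L
                  ds = ((gaps <= vmax) & (np.roll(v_old, -1) > 0)
                        & (np.roll(vel, -1) == 0))
                  count += ds.sum()
                  measured += 1
          return count / (n * measured)

      for rho in (0.1, 0.3, 0.5, 0.7):
          print(f"density {rho:.1f}: dangerous-situation fraction = "
                f"{dangerous_fraction(rho):.4f}")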

  19. Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2005-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163

  20. Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2006-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596

  1. Probability density of tunneled carrier states near heterojunctions calculated numerically by the scattering method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wampler, William R.; Myers, Samuel M.; Modine, Normand A.

    2017-09-01

    The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, whose computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.

  2. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  3. Spatial capture–recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
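
    The "ecological distance" idea can be sketched numerically: compute least-cost path distances from a trap over a resistance raster, then plug them into a Gaussian hazard encounter model p = p0 * exp(-d^2 / (2 sigma^2)). A minimal Python sketch with an invented cost surface; scipy's Dijkstra routine stands in for whatever least-cost implementation the authors used:

      import numpy as np
      from scipy.sparse import coo_matrix
      from scipy.sparse.csgraph import dijkstra

      rng = np.random.default_rng(11)
      ny = nx = 25
      # Hypothetical landscape resistance surface (higher = harder to cross).
      resistance = np.exp(rng.normal(0.0, 0.5, size=(ny, nx)))

      def node(r, c):
          return r * nx + c

      # 4-neighbour undirected grid graph; each edge costs the mean
      # resistance of the two cells it connects.
      rows, cols, w = [], [], []
      for r in range(ny):
          for c in range(nx):
              for dr, dc in ((0, 1), (1, 0)):
                  r2, c2 = r + dr, c + dc
                  if r2 < ny and c2 < nx:
                      cost = 0.5 * (resistance[r, c] + resistance[r2, c2])
                      rows += [node(r, c), node(r2, c2)]
                      cols += [node(r2, c2), node(r, c)]
                      w += [cost, cost]
      G = coo_matrix((w, (rows, cols)), shape=(nx * ny, nx * ny)).tocsr()

      trap = node(12, 12)                   # a single trap at the grid centre
      d_eco = dijkstra(G, indices=trap)     # least-cost ("ecological") distance

      p0, sigma = 0.3, 3.0                  # invented encounter parameters
      p_enc = p0 * np.exp(-d_eco ** 2 / (2 * sigma ** 2))
      for k in (0, 5, 10):                  # activity centres due east of the trap
          print(f"activity centre {k} cells east: p = {p_enc[node(12, 12 + k)]:.4f}")

    Replacing d_eco with plain Euclidean distance recovers the naive SCR encounter model the abstract criticizes.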

  4. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    ERIC Educational Resources Information Center

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  5. Optimal moment determination in POME-copula based hydrometeorological dependence modelling

    NASA Astrophysics Data System (ADS)

    Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi

    2017-07-01

    Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework for hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copula. However, previous POME-based studies have generally not considered the determination of optimal moment constraints. The main contribution of this study is the determination of optimal moments for POME, giving a coupled optimal moment-POME-copula framework for modelling hydrometeorological multivariate events. In this framework, margins (marginal distributions) are derived using POME, subject to optimal moment constraints. Various candidate copulas are then constructed from the derived margins, and the most probable one is determined based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and that the corresponding copulas show good statistical performance in correlation simulation. The derived copulas, which capture patterns that traditional correlation coefficients cannot reflect, also provide an efficient approach for other applied scenarios involving hydrometeorological multivariate modelling.

  6. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  7. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  8. Two tandem queues with general renewal input. 2: Asymptotic expansions for the diffusion model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knessl, C.; Tier, C.

    1999-10-01

    In Part 1 the authors formulated and solved a diffusion model for two tandem queues with exponential servers and general renewal arrivals, thus obtaining the heavy-traffic diffusion approximation to the steady-state joint queue-length distribution for this network. Here they study asymptotic and numerical properties of the diffusion approximation. In particular, analytical expressions are obtained for the tail probabilities. Both the joint distribution of the two queues and the marginal distribution of the second queue are considered. They also give numerical illustrations of how this marginal is affected by changes in the arrival and service processes.

  9. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper discusses the optimization of probability of detection (POD) demonstration experiments using the point estimate method. The optimization seeks an acceptable probability of passing the demonstration (PPD) and an acceptable probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures and uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing the flaw sizes in the point estimate demonstration flaw set.
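
    The arithmetic behind the point estimate method is easy to reproduce: with binomial sampling, 29 detections out of 29 is the smallest all-hit sample ruling out POD <= 0.9 at 95% confidence (0.9^29 < 0.05), and for a known POD curve the probability of passing the demonstration is the product of the per-flaw detection probabilities. A short Python sketch with a hypothetical POD curve (not NASA's model):

      import numpy as np
      from scipy.stats import norm

      # 29-of-29 rule: if the true POD were 0.9 everywhere, the chance of
      # detecting all 29 flaws is 0.9**29 < 0.05, so a clean sweep
      # demonstrates POD > 0.9 at 95% confidence.
      print(f"0.9**29 = {0.9 ** 29:.4f}")

      def pod(a, a50=1.0, slope=4.0):
          # Hypothetical probit-in-log-size POD curve.
          return norm.cdf(slope * np.log(a / a50))

      a90 = np.exp(norm.ppf(0.9) / 4.0)     # flaw size with POD = 0.9

      # PPD for an all-hit demonstration is the product of per-flaw PODs;
      # spreading the 29 sizes about a90 changes PPD even at a fixed mean.
      for spread in (0.0, 0.05, 0.15):
          sizes = a90 * (1.0 + spread * np.linspace(-1.0, 1.0, 29))
          ppd = np.prod(pod(sizes))
          print(f"size spread +/-{spread:.0%}: PPD = {ppd:.3f}, "
                f"largest flaw = {sizes.max():.3f}")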

  10. Seabed fluid expulsion along the upper slope and outer shelf of the U.S. Atlantic continental margin

    USGS Publications Warehouse

    Brothers, D.S.; Ruppel, C.; Kluesner, J.W.; ten Brink, Uri S.; Chaytor, J.D.; Hill, J.C.; Andrews, B.D.; Flores, C.

    2014-01-01

    Identifying the spatial distribution of seabed fluid expulsion features is crucial for understanding the substrate plumbing system of any continental margin. A 1100 km stretch of the U.S. Atlantic margin contains more than 5000 pockmarks at water depths of 120 m (shelf edge) to 700 m (upper slope), mostly updip of the contemporary gas hydrate stability zone (GHSZ). Advanced attribute analyses of high-resolution multichannel seismic reflection data reveal gas-charged sediment and probable fluid chimneys beneath pockmark fields. A series of enhanced reflectors, inferred to represent hydrate-bearing sediments, occur within the GHSZ. Differential sediment loading at the shelf edge and warming-induced gas hydrate dissociation along the upper slope are the proposed mechanisms that led to transient changes in substrate pore fluid overpressure, vertical fluid/gas migration, and pockmark formation.

  11. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  12. Fractional Brownian motion with a reflecting wall

    NASA Astrophysics Data System (ADS)

    Wada, Alexander H. O.; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ~t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier, leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
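
    A minimal Monte Carlo sketch of this setup: generate fractional Gaussian noise by a Cholesky factorization of its covariance and reflect the walk at x = 0. The sizes, repetitions, and "near wall" threshold below are arbitrary choices, not the paper's:

      import numpy as np

      rng = np.random.default_rng(2)

      def fgn_generator(n, hurst):
          # Exact fractional Gaussian noise via Cholesky of its covariance
          # (fine at this size; spectral methods scale better).
          k = np.arange(n)
          gamma = 0.5 * ((k + 1.0) ** (2 * hurst) - 2.0 * k ** (2 * hurst)
                         + np.abs(k - 1.0) ** (2 * hurst))
          C = gamma[np.abs(k[:, None] - k[None, :])]
          L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
          return lambda: L @ rng.normal(size=n)

      n_steps, n_walks = 500, 20
      for H in (0.3, 0.7):              # alpha = 2H: sub- and superdiffusive
          draw = fgn_generator(n_steps, H)
          frac = []
          for _ in range(n_walks):
              x, near = 1.0, 0
              for dx in draw():
                  x = abs(x + dx)       # reflecting wall at x = 0
                  near += x < 0.5
              frac.append(near / n_steps)
          print(f"H = {H}: mean fraction of time within 0.5 of the wall = "
                f"{np.mean(frac):.3f}")

    Histogramming the positions near the wall (rather than just counting them) would expose the accumulation or depletion the abstract describes.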

  13. Numerical study of the influence of surface reaction probabilities on reactive species in an rf atmospheric pressure plasma containing humidity

    NASA Astrophysics Data System (ADS)

    Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah

    2018-01-01

    The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.
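
    The sensitivity to gamma can be illustrated with a single wall-loss balance of the kind used in global models: a Chantry-style loss time tau = Lambda^2/D + (V/A) * 2(2 - gamma)/(gamma * vbar) and a fixed production rate S give a steady-state density n = S * tau. All numbers below are rough assumptions for an atmospheric pressure microjet, not values from the paper:

      import numpy as np

      kB = 1.380649e-23
      T_gas = 345.0                  # gas temperature (K), rough mu-APPJ value
      m_H = 1.67e-27                 # hydrogen atom mass (kg)

      R = 0.5e-3                     # assumed channel half-gap (m)
      V_over_A = R                   # volume-to-surface ratio of a slab ~ half-gap
      Lam = 2.0 * R / np.pi          # diffusion length of a slab with gap 2R
      D = 2.0e-4                     # assumed H diffusion coefficient in He (m^2/s)
      S = 1.0e21                     # assumed volumetric production rate (m^-3 s^-1)

      vbar = np.sqrt(8.0 * kB * T_gas / (np.pi * m_H))   # mean thermal speed

      # Wall-loss time: diffusion to the wall in series with the finite
      # surface reaction probability gamma.
      for gamma in (1e-3, 1e-2, 1e-1, 1.0):
          tau = Lam ** 2 / D + V_over_A * 2.0 * (2.0 - gamma) / (gamma * vbar)
          print(f"gamma = {gamma:7.0e}: tau_wall = {tau:.2e} s, "
                f"steady-state n = S*tau = {S * tau:.2e} m^-3")

    For small gamma the surface term dominates, so the computed density is highly sensitive to the assumed reaction probability, which is the abstract's central point.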

  14. Effects of heterogeneous traffic with speed limit zone on the car accidents

    NASA Astrophysics Data System (ADS)

    Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.

    2016-06-01

    Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of traffic heterogeneity with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ in heterogeneous traffic has an important effect, typically in the mixed-velocity case. In the deterministic case, the SLZ leads to the appearance of car accidents even at low densities; in this region Pac increases with the fraction of fast vehicles (Ff). In the nondeterministic case, the SLZ reduces the effect of the braking probability Pb at low densities. Furthermore, the impact of multiple SLZs on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without an SLZ (n = 0) is lower than Pac with multi-SLZ (n > 0). However, the existence of multi-SLZ decreases the risk of collision in the congestion phase.

  15. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function [p(ρ(x)|PROT) p_PROT(x) + p(ρ(x)|SOLV) p_SOLV(x) + p(ρ(x)|H) p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  16. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  17. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
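
    For intuition, a simple parametric version of such a detection curve: if individuals occur as a Poisson field with density D, a searched area A and per-individual detectability p give P(detect at least one) = 1 - exp(-D*A*p), so the density detectable with 95% probability follows in closed form. A tiny Python sketch (all numbers invented, unrelated to the field experiments above):

      import numpy as np

      # P(no detection) = E[(1-p)^N] with N ~ Poisson(D*A), which equals
      # exp(-D*A*p) by the Poisson generating function.
      def p_detect(density, area, p_ind):
          return 1.0 - np.exp(-density * area * p_ind)

      area = 100.0                            # hypothetical search plot (m^2)
      for p_ind in (0.2, 0.5, 0.9):
          d95 = -np.log(0.05) / (area * p_ind)   # density giving 95% detection
          print(f"per-individual detectability {p_ind}: "
                f"95% detection needs density >= {d95:.4f} per m^2 "
                f"(check: {p_detect(d95, area, p_ind):.3f})")

    The empirical logistic-regression curves in the study play the same role as this closed form, but with the detection parameters estimated from field trials rather than assumed.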

  18. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of the dependency structure, local versus global measures of association, and ease of implementation. In particular, we study copula models and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, the correlation curves of Doksum et al., and the local cross ratios of Oakes. Bivariate distributions with the same margins can exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at their estimates. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but the margins are estimated by the Kaplan-Meier estimator. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
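
    The two-stage procedure (margins first, copula second) can be sketched for uncensored data with a Clayton copula and exponential margins; censoring and the Kaplan-Meier variant described above are omitted, and all parameter values are invented:

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(8)
      n, theta_true = 2000, 2.0

      # Simulate Clayton-dependent uniforms via a gamma frailty, then give
      # them exponential margins (invented rates).
      V = rng.gamma(1.0 / theta_true, 1.0, n)
      E = rng.exponential(size=(n, 2))
      U = (1.0 + E / V[:, None]) ** (-1.0 / theta_true)
      rates = np.array([0.5, 1.2])
      T = -np.log(U) / rates                      # failure times, no censoring

      # Stage 1: fit each margin separately (exponential MLE: rate = 1/mean).
      rate_hat = 1.0 / T.mean(axis=0)
      U_hat = np.exp(-T * rate_hat)               # fitted survival transforms

      def neg_clayton_loglik(theta):
          # Clayton density c(u,v) = (1+t)(uv)^(-t-1) (u^-t + v^-t - 1)^(-2-1/t).
          u, v = U_hat[:, 0], U_hat[:, 1]
          logc = (np.log1p(theta) - (theta + 1.0) * (np.log(u) + np.log(v))
                  - (2.0 + 1.0 / theta) * np.log(u ** -theta + v ** -theta - 1.0))
          return -np.sum(logc)

      # Stage 2: maximise the copula likelihood with the margins held fixed.
      res = minimize_scalar(neg_clayton_loglik, bounds=(0.01, 20.0), method="bounded")
      print(f"theta: true {theta_true}, two-stage estimate {res.x:.3f}")
      print(f"rates: true {rates}, stage-1 estimates {np.round(rate_hat, 3)}")

    Full maximum likelihood would optimize the margin and copula parameters jointly; the two-stage split trades a little efficiency for much simpler computation, which is the comparison the dissertation studies.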

  19. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  20. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, Juan; Gardner, Beth; Lucherini, Mauro

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.

  1. Approved Methods and Algorithms for DoD Risk-Based Explosives Siting

    DTIC Science & Technology

    2007-02-02

    glass. Pgha: probability of a person being in the glass hazard area; Phit: probability of hit; Phit(f): probability of hit for fatality; Phit(maji…): probability of hit for major injury; Phit(mini): probability of hit for minor injury; Pi: debris probability densities at the ES; PMaj(pair): individual… combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit(f), major injury…

  2. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.

  3. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images, and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings, and in most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets, whereas for some of the datasets MLE failed to provide a result due to convergence problems. The two models gave very similar results for large and very large datasets (> 150 samples); differences were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few obtained from inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
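
    In the same spirit as the tool (though in Python rather than R), the estimators can be compared on synthetic landslide areas: a parametric inverse-gamma MLE fit plus histogram and kernel density estimates in log space. The distribution parameters below are invented:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      # Synthetic landslide areas (m^2): heavy right tail with a rollover at
      # small sizes, qualitatively like published inventories.
      areas = stats.invgamma.rvs(a=1.4, scale=800.0, size=1500, random_state=rng)

      # Parametric route: maximum-likelihood inverse-gamma fit (MLE).
      shape, loc, scale = stats.invgamma.fit(areas, floc=0.0)
      print(f"inverse-gamma MLE: shape = {shape:.2f}, scale = {scale:.1f}")

      # Non-parametric routes: histogram (HDE) and kernel (KDE) estimates,
      # done in log space where the heavy tail is easier to resolve.
      log_a = np.log10(areas)
      hist, edges = np.histogram(log_a, bins=30, density=True)
      kde = stats.gaussian_kde(log_a)
      for g in np.linspace(log_a.min(), log_a.max(), 5):
          print(f"log10(area) = {g:5.2f}: KDE density = {kde(g)[0]:.3f}")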

  4. Flexible chain molecules in the marginal and concentrated regimes: universal static scaling laws and cross-over predictions.

    PubMed

    Laso, Manuel; Karayiannis, Nikos Ch

    2008-05-07

    We present predictions for the static scaling exponents and for the cross-over polymer volumetric fractions in the marginal and concentrated solution regimes. Corrections for finite chain length are made. Predictions are based on an analysis of correlated fluctuations in density and chain length, in a semigrand ensemble in which mers and solvent sites exchange identities. Cross-over volumetric fractions are found to be chain length independent to first order, although reciprocal-N corrections are also estimated. Predicted scaling exponents and cross-over regimes are compared with available data from extensive off-lattice Monte Carlo simulations [Karayiannis and Laso, Phys. Rev. Lett. 100, 050602 (2008)] on freely jointed, hard-sphere chains of average lengths from N=12-500 and at packing densities from dilute ones up to the maximally random jammed state.

  5. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
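
    A bivariate extreme value distribution of the logistic (Gumbel-Hougaard) type illustrates the "margins plus dependence function" construction: F(x, y) = exp(-[(-ln F1(x))^(1/r) + (-ln F2(y))^(1/r)]^r) with 0 < r <= 1, where r = 1 recovers independence and small r gives strong dependence. A small Python sketch with Weibull margins bounded below by zero (all parameter values invented):

      import numpy as np

      def weibull_cdf(x, shape, scale):
          return 1.0 - np.exp(-(x / scale) ** shape) if x > 0 else 0.0

      def biv_extreme_cdf(x, y, r, m1=(2.0, 1.0), m2=(1.5, 2.0)):
          # Logistic-type bivariate extreme value CDF built from two
          # Weibull margins; r in (0, 1], r = 1 gives independence.
          l1 = -np.log(max(weibull_cdf(x, *m1), 1e-300))
          l2 = -np.log(max(weibull_cdf(y, *m2), 1e-300))
          return np.exp(-(l1 ** (1.0 / r) + l2 ** (1.0 / r)) ** r)

      x, y = 1.2, 2.5
      for r in (1.0, 0.5, 0.2):
          print(f"r = {r}: F({x}, {y}) = {biv_extreme_cdf(x, y, r):.4f}")
      print(f"product of margins: "
            f"{weibull_cdf(x, 2.0, 1.0) * weibull_cdf(y, 1.5, 2.0):.4f}")

    At r = 1 the joint CDF equals the product of the margins, which is a quick numerical check of the dependence-function construction.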

  6. Radiographic Features of Acute Patellar Tendon Rupture.

    PubMed

    Fazal, Muhammad Ali; Moonot, Pradeep; Haddad, Fares

    2015-11-01

    The purpose of our study was to assess soft tissue features of acute patellar tendon rupture on the lateral knee radiograph that would facilitate early diagnosis. The participants were divided into two groups of 35 patients each. There were 28 men and seven women with a mean age of 46 years in the control group and 26 men and nine women with a mean age of 47 years in the rupture group. The lateral knee radiograph of each patient was evaluated for the Insall-Salvati ratio for patella alta, increased density of the infrapatellar fat pad, the appearance of the soft tissue margin of the patellar tendon, and bony avulsions. In the rupture group there were three consistent soft tissue radiographic features in addition to patella alta: increased density of the infrapatellar fat pad; loss of the sharp, well-defined linear margins of the patellar tendon; and an angulated, wavy margin of the patellar tendon. In the control group these features were not observed. The soft tissue radiographic features described in the rupture group are consistent and reliable. When coupled with careful clinical assessment, they will aid early diagnosis, and further imaging will seldom be required. © 2015 Chinese Orthopaedic Association and Wiley Publishing Asia Pty Ltd.

  7. Assessment of undiscovered petroleum resources of the Amerasia Basin Petroleum Province

    USGS Publications Warehouse

    Houseknecht, David W.; Bird, Kenneth J.; Garrity, Christopher P.

    2012-01-01

    The Amerasia Basin Petroleum Province encompasses the Canada Basin and the sediment prisms along the Alaska and Canada margins, outboard from basinward margins (hingelines) of the rift shoulders that formed during extensional opening of the Canada Basin. The province includes the Mackenzie delta and slope, the outer shelves and marine slopes along the Arctic margins of Alaska and Canada, and the deep Canada Basin. The province is divided into four assessment units (AUs): (1) The Canning-Mackenzie deformed margin AU is that part of the rifted margin where the Brooks Range orogenic belt has overridden the rift shoulder and is deforming the rifted-margin prism of sediment outboard of the hingeline. This is the only part of the Amerasia Basin Province that has been explored and—even though more than 3 billion barrels of oil equivalent (BBOE) of oil, gas, and condensate have been discovered—none has been commercially produced. (2) The Alaska passive margin AU is the rifted-margin prism of sediment lying beneath the Beaufort outer shelf and slope that has not been deformed by tectonism. (3) The Canada passive margin AU is the rifted-margin prism of sediment lying beneath the Arctic outer shelf and slope (also known as the polar margin) of Canada that has not been deformed by tectonism. (4) The Canada Basin AU includes the sediment wedge that lies beneath the deep Canada Basin, north of the marine slope developed along the Alaska and Canada margins. Mean estimates of risked, undiscovered, technically recoverable resources include more than 6 billion barrels of oil (BBO), more than 19 trillion cubic feet (TCF) of associated gas, and more than 16 TCF of nonassociated gas in the Canning-Mackenzie deformed margin AU; about 1 BBO, about 3 TCF of associated gas, and about 3 TCF of nonassociated gas in the Alaska passive margin AU; and more than 2 BBO, about 7 TCF of associated gas, and about 8 TCF of nonassociated gas in the Canada passive margin AU. Quantities of natural gas liquids also are assessed in each AU. The Canada Basin AU was not quantitatively assessed because it is judged to hold less than 10 percent probability of containing at least one accumulation of 50 million barrels of oil equivalent.

  8. Integrating resource selection information with spatial capture–recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.

    2013-01-01

    Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.

  9. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    ERIC Educational Resources Information Center

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  10. The Effect of Phonotactic Probability and Neighbourhood Density on Pseudoword Learning in 6- and 7-Year-Old Children

    ERIC Educational Resources Information Center

    van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.

    2016-01-01

    The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…

  11. Circumferential resection margin positivity after preoperative chemoradiotherapy based on magnetic resonance imaging for locally advanced rectal cancer: implication of boost radiotherapy to the involved mesorectal fascia.

    PubMed

    Kim, Kyung Hwan; Park, Min Jung; Lim, Joon Seok; Kim, Nam Kyu; Min, Byung Soh; Ahn, Joong Bae; Kim, Tae Il; Kim, Ho Geun; Koom, Woong Sub

    2016-04-01

    To identify patients who are at a higher risk of pathologic circumferential resection margin involvement using preoperative magnetic resonance imaging. Between October 2008 and November 2012, 165 patients with locally advanced rectal cancer (cT4 or cT3 with <2 mm distance from tumour to mesorectal fascia) who received preoperative chemoradiotherapy were analysed. The morphologic patterns on post-chemoradiotherapy magnetic resonance imaging were categorized into five patterns from Pattern A (most-likely negative pathologic circumferential resection margin) to Pattern E (most-likely positive pathologic circumferential resection margin). In addition, the location of mesorectal fascia involvement was classified as lateral, posterior and anterior. The diagnostic accuracy of the morphologic criteria was calculated using receiver operating characteristic curve analysis. Pathologic circumferential resection margin involvement was identified in 17 patients (10.3%). The diagnostic accuracy of predicting pathologic circumferential resection margin involvement was 0.73 using the five-scale magnetic resonance imaging pattern. The sensitivity, specificity, positive predictive value and negative predictive value for predicting pathologic circumferential resection margin involvement were 76.5, 65.5, 20.3 and 96.0%, respectively, when the cut-off was set between Patterns C and D. On multivariate logistic regression, the magnetic resonance imaging patterns D and E (P = 0.005) and posterior or lateral mesorectal fascia involvement (P = 0.017) were independently associated with increased probability of pathologic circumferential resection margin involvement. The rate of pathologic circumferential resection margin involvement was 30.0% when the patient had Pattern D or E with posterior or lateral mesorectal fascia involvement. Patients who are at a higher risk of pathologic circumferential resection margin involvement can be identified using preoperative magnetic resonance imaging, although the predictability is moderate. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. New Insights into Passive Margin Development from a Global Deep Seismic Reflection Dataset

    NASA Astrophysics Data System (ADS)

    Bellingham, Paul; Pindell, James; Graham, Rod; Horn, Brian

    2014-05-01

    The kinematic and dynamic evolution of the world's passive margins is still poorly understood. Yet the need to replace reserves, a high oil price and advances in drilling technology have pushed the international oil and gas industry to explore in the deep and ultra-deep waters of the continental margins. To support this exploration and help understand these margins, ION-GXT has acquired, processed and interpreted BasinSPAN surveys across many of the world's passive margins. Observations from these data lead us to consider the modes of subsidence and uplift at both volcanic and non-volcanic margins. At non-volcanic margins, it appears that frequently much of the subsidence post-dates major rifting and is not thermal in origin. Rather the subsidence is associated with extensional displacement on a major fault or shear zone running at least as deep as the continental Moho. We believe that the subsidence is structural and is probably associated with the pinching out (boudinage) of the lower crust so that the upper crust effectively collapses onto the mantle. Eventually this will lead to the exhumation of the sub-continental mantle at the sea bed. Volcanic margins present more complex challenges both in terms of imaging and interpretation. The addition of volcanic and plutonic material into the system and dynamic effects all impact subsidence and uplift. However, we will show some fundamental observations regarding the kinematic development of volcanic margins and especially SDRs which demonstrate that the processes of collapse and the development of shear zones within and below the crust also operate at this type of margin. A model is presented of 'magma welds' whereby packages of SDRs collapse onto an emerging sub-crustal shear zone, and it is this collapse which creates the commonly observed SDR geometry. Examples will be shown from East India, Newfoundland, Brazil, Argentina and the Gulf of Mexico.

  13. Inclusion of Radiation Environment Variability in Total Dose Hardness Assurance Methodology

    NASA Technical Reports Server (NTRS)

    Xapsos, M. A.; Stauffer, C.; Phan, A.; McClure, S. S.; Ladbury, R. L.; Pellish, J. A.; Campola, M. J.; LaBel, K. A.

    2015-01-01

    Variability of the space radiation environment is investigated with regard to parts categorization for total dose hardness assurance methods. It is shown that it can have a significant impact. A modified approach is developed that uses current environment models more consistently and replaces the design margin concept with one of failure probability.

  14. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
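
    To make the MH-RM recipe concrete, here is a toy sketch under strong simplifying assumptions: a one-parameter normal random-intercept model rather than item factor analysis, one Metropolis-Hastings step imputing the latent variables, and one Robbins-Monro step following the complete-data score with gain 1/k. The marginal MLE here is the sample mean, so convergence is easy to verify.

        import numpy as np

        # Toy MH-RM sketch for y_i = theta + z_i + eps_i, z_i ~ N(0, tau^2),
        # eps_i ~ N(0, sigma^2). Illustrates the MH-RM idea only; this is not
        # Cai's item factor analysis implementation.
        rng = np.random.default_rng(0)
        n, tau, sigma, theta_true = 200, 1.0, 1.0, 2.0
        y = theta_true + rng.normal(0, tau, n) + rng.normal(0, sigma, n)

        theta, z = 0.0, np.zeros(n)
        for k in range(1, 2001):
            # MH imputation step: refresh the latent z given (y, theta).
            z_prop = z + rng.normal(0, 0.5, n)
            log_acc = (-(y - theta - z_prop) ** 2 + (y - theta - z) ** 2) / (2 * sigma**2) \
                      + (-(z_prop**2) + z**2) / (2 * tau**2)
            accept = np.log(rng.uniform(size=n)) < log_acc
            z[accept] = z_prop[accept]
            # Robbins-Monro step: follow the complete-data score, decaying gain.
            score = np.sum(y - theta - z) / sigma**2
            theta += (1.0 / k) * score / n

        print(theta, y.mean())   # theta approaches the marginal MLE, ybar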

  15. New tax law hobbles tax-exempt hospitals.

    PubMed

    Goldblatt, S J

    1982-03-01

    The Economic Recovery Tax Act of 1981 left tax-exempt hospitals at a significant disadvantage in the competition for capital. Although the new law's accelerated depreciation schedules and liberalized investment tax credits contain some marginal benefits for tax-exempt hospitals, these benefits are probably more than offset by the impact of the law on charitable giving.

  16. Large margin nearest neighbor classifiers.

    PubMed

    Domeniconi, Carlotta; Gunopulos, Dimitrios; Peng, Jing

    2005-07-01

    The nearest neighbor technique is a simple and appealing approach to addressing classification problems. It relies on the assumption of locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with a finite number of examples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. The employment of a locally adaptive metric becomes crucial in order to keep class conditional probabilities close to uniform, thereby minimizing the bias of estimates. We propose a technique that computes a locally flexible metric by means of support vector machines (SVMs). The decision function constructed by SVMs is used to determine the most discriminant direction in a neighborhood around the query. Such a direction provides a local feature weighting scheme. We formally show that our method increases the margin in the weighted space where classification takes place. Moreover, our method has the important advantage of online computational efficiency over competing locally adaptive techniques for nearest neighbor classification. We demonstrate the efficacy of our method using both real and simulated data.
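
    A simplified sketch of the idea (not the authors' exact weighting scheme): fit a linear SVM in a crude neighbourhood of the query, read feature relevance off the decision direction w, and run kNN in the reweighted metric. Function and parameter names are illustrative.

        import numpy as np
        from sklearn.svm import SVC

        # SVM-guided locally adaptive nearest neighbours: a local linear SVM's
        # decision direction w supplies feature weights for the final kNN vote.
        def adaptive_knn_predict(X, y, query, k_local=50, k=5):
            idx = np.argsort(((X - query) ** 2).sum(axis=1))[:k_local]
            local_y = y[idx]
            if len(np.unique(local_y)) == 1:      # degenerate neighbourhood
                return local_y[0]
            svm = SVC(kernel="linear").fit(X[idx], local_y)
            w = np.abs(svm.coef_[0])
            w /= w.sum()                          # feature relevance weights
            d = (w * (X - query) ** 2).sum(axis=1)
            return np.bincount(y[np.argsort(d)[:k]]).argmax()

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 10))
        y = (X[:, 0] > 0).astype(int)             # only feature 0 is relevant
        print(adaptive_knn_predict(X, y, rng.normal(size=10)))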

  17. Mental health status and healthcare utilization among community dwelling older adults.

    PubMed

    Adepoju, Omolola; Lin, Szu-Hsuan; Mileski, Michael; Kruse, Clemens Scott; Mask, Andrew

    2018-04-27

    Shifts in mental health utilization patterns are necessary to allow for meaningful access to care for vulnerable populations. There have been long-standing issues in how mental health care is provided, which have limited the efficacy of that care for those seeking it. To assess the relationship between mental health status and healthcare utilization among adults ≥65 years. A negative binomial regression model was used to assess the relationship between mental health status and healthcare utilization related to office-based physician visits, while a two-part model, consisting of logistic regression and negative binomial regression, was used to separately model emergency visits and inpatient services. The receipt of care in office-based settings was marginally higher for subjects with mental health difficulties. Both probabilities and counts of inpatient hospitalizations were similar across mental health categories. The count of ER visits was similar across mental health categories; however, the probability of having an emergency department visit was marginally higher for older adults who reported mental health difficulties in 2012. These findings are encouraging and lend promise to the recent initiatives on addressing gaps in mental healthcare services.
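
    A minimal sketch of the two-part model described above, fitted to simulated data (variable names such as mh_difficulty and er_visits are hypothetical, not the survey's): a logistic model for whether any emergency visit occurs, then a count model among users.

        import numpy as np
        import statsmodels.api as sm

        # Two-part (hurdle-style) utilization model on fake data: part 1 models
        # any ER use; part 2 models visit counts among users.
        rng = np.random.default_rng(2)
        n = 1000
        mh_difficulty = rng.integers(0, 2, n)
        X = sm.add_constant(mh_difficulty.astype(float))
        er_visits = rng.negative_binomial(1, np.where(mh_difficulty == 1, 0.6, 0.7))

        part1 = sm.Logit((er_visits > 0).astype(int), X).fit(disp=0)   # any use
        users = er_visits > 0
        part2 = sm.NegativeBinomial(er_visits[users], X[users]).fit(disp=0)
        print(part1.params, part2.params)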

  18. Structure and regional significance of the Late Permian(?) Sierra Nevada - Death Valley thrust system, east-central California

    USGS Publications Warehouse

    Stevens, C.H.; Stone, P.

    2005-01-01

    An imbricate system of north-trending, east-directed thrust faults of late Early Permian to middle Early Triassic (most likely Late Permian) age forms a belt in east-central California extending from the Mount Morrison roof pendant in the eastern Sierra Nevada to Death Valley. Six major thrust faults typically with a spacing of 15–20 km, original dips probably of 25–35°, and stratigraphic throws of 2–5 km compose this structural belt, which we call the Sierra Nevada–Death Valley thrust system. These thrusts presumably merge into a décollement at depth, perhaps at the contact with crystalline basement, the position of which is unknown. We interpret the deformation that produced these thrusts to have been related to the initiation of convergent plate motion along a southeast-trending continental margin segment probably formed by Pennsylvanian transform truncation. This deformation apparently represents a period of tectonic transition to full-scale convergence and arc magmatism along the continental margin beginning in the Late Triassic in central California.

  19. Cenozoic Source-to-Sink of the African margin of the Equatorial Atlantic

    NASA Astrophysics Data System (ADS)

    Rouby, Delphine; Chardon, Dominique; Huyghe, Damien; Guillocheau, François; Robin, Cecile; Loparev, Artiom; Ye, Jing; Dall'Asta, Massimo; Grimaud, Jean-Louis

    2016-04-01

    The objective of the Transform Source to Sink Project (TS2P) is to link the dynamics of the erosion of the West African Craton to the offshore sedimentary basins of the African margin of the Equatorial Atlantic at geological time scales. This margin, alternating transform and oblique segments from Guinea to Nigeria, shows a strong structural variability in the margin width, continental geology and relief, drainage networks and subsidence/accumulation patterns. We analyzed this system combining onshore geology and geomorphology as well as offshore sub-surface data. Mapping and regional correlation of dated lateritic paleo-landscape remnants allows us to reconstruct two physiographic configurations of West Africa during the Cenozoic. We corrected those reconstitutions from flexural isostasy related to the subsequent erosion. These geometries show that the present-day drainage organization stabilized by at least 29 Myrs ago (probably by 34 Myr) revealing the antiquity of the Senegambia, Niger and Volta catchments toward the Atlantic as well as of the marginal upwarp currently forming a continental divide. The drainage rearrangement that lead to this drainage organization was primarily enhanced by the topographic growth of the Hoggar swell and caused a major stratigraphic turnover along the Equatorial margin of West Africa. Elevation differences between paleo-landscape remnants give access to the spatial and temporal distribution of denudation for 3 time-increments since 45 Myrs. From this, we estimate the volumes of sediments and associated lithologies exported by the West African Craton toward different segments of the margin, taking into account the type of eroded bedrock and the successive drainage reorganizations. We compare these data to Cenozoic accumulation histories in the basins and discuss their stratigraphic expression according to the type of margin segment they are preserved in.

  20. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
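
    For reference, the pdf in question is the Poisson mixture of central χ² densities (the finite-sum representation proved in the paper is a closed form of this standard series):

        \[
        f_{k,\lambda}(x) \;=\; \sum_{j=0}^{\infty} \frac{e^{-\lambda/2}(\lambda/2)^{j}}{j!}\, f_{\chi^{2}_{k+2j}}(x),
        \qquad
        f_{\chi^{2}_{m}}(x) \;=\; \frac{x^{m/2-1}e^{-x/2}}{2^{m/2}\,\Gamma(m/2)},
        \]

    with k degrees of freedom and non-centrality parameter λ.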

  1. Assessing hypotheses about nesting site occupancy dynamics

    USGS Publications Warehouse

    Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle

    2011-01-01

    Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitats. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.

  2. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data

    PubMed Central

    Serang, Oliver; MacCoss, Michael J.; Noble, William Stafford

    2010-01-01

    The problem of identifying proteins from a shotgun proteomics experiment has not been definitively solved. Identifying the proteins in a sample requires ranking them, ideally with interpretable scores. In particular, “degenerate” peptides, which map to multiple proteins, have made such a ranking difficult to compute. The problem of computing posterior probabilities for the proteins, which can be interpreted as confidence in a protein’s presence, has been especially daunting. Previous approaches have either ignored the peptide degeneracy problem completely, addressed it by computing a heuristic set of proteins or heuristic posterior probabilities, or by estimating the posterior probabilities with sampling methods. We present a probabilistic model for protein identification in tandem mass spectrometry that recognizes peptide degeneracy. We then introduce graph-transforming algorithms that facilitate efficient computation of protein probabilities, even for large data sets. We evaluate our identification procedure on five different well-characterized data sets and demonstrate our ability to efficiently compute high-quality protein posteriors. PMID:20712337
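
    The marginalization being made efficient can be stated with a brute-force toy. A sketch assuming a small noisy-OR style model (the three-protein graph and the parameters α, β, γ are hypothetical); enumeration is exponential in the number of proteins, which is exactly what the paper's graph transformations avoid:

        import itertools

        # Brute-force protein posterior marginalization on a tiny bipartite
        # graph with one degenerate peptide (p2 maps to two proteins).
        proteins = ["A", "B", "C"]
        peptides = {"p1": {"A"}, "p2": {"A", "B"}, "p3": {"C"}}
        detected = {"p1": True, "p2": True, "p3": False}
        gamma, alpha, beta = 0.5, 0.9, 0.05   # prior, emission, noise

        def likelihood(present):
            L = 1.0
            for pep, parents in peptides.items():
                n_parents = len(parents & present)
                p_detect = 1 - (1 - beta) * (1 - alpha) ** n_parents
                L *= p_detect if detected[pep] else 1 - p_detect
            return L

        posterior, Z = {r: 0.0 for r in proteins}, 0.0
        for bits in itertools.product([0, 1], repeat=len(proteins)):
            present = {r for r, b in zip(proteins, bits) if b}
            prior = gamma ** len(present) * (1 - gamma) ** (len(proteins) - len(present))
            w = likelihood(present) * prior
            Z += w
            for r in present:
                posterior[r] += w

        print({r: v / Z for r, v in posterior.items()})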

  3. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
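
    The core detection calculation can be sketched in two lines. A simplification assuming independent trees, where p is the joint probability that a sampled tree is both infested and recognized as infested (numbers purely illustrative):

        # Probability that sampling n trees detects an infestation at least once.
        def p_detect(p, n):
            return 1 - (1 - p) ** n

        print(p_detect(0.01, 50))   # low-density site: ~0.39 even with 50 trees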

  4. On Schrödinger's bridge problem

    NASA Astrophysics Data System (ADS)

    Friedland, S.

    2017-11-01

    In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
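
    The matrix-scaling result in the first part can be illustrated with an alternating (iterative proportional fitting) scheme; this sketch shows the fixed-point idea for a positive matrix A and positive probability vectors p, q, not the paper's proof:

        import numpy as np

        # Alternating diagonal scaling of a positive matrix A so that the
        # result B is column stochastic and maps p to q. Each step multiplies
        # B by a diagonal matrix, so B remains a diagonal scaling of A.
        rng = np.random.default_rng(8)
        A = rng.uniform(0.1, 1.0, (4, 4))
        p = np.array([0.1, 0.2, 0.3, 0.4])
        q = np.array([0.4, 0.3, 0.2, 0.1])

        B = A.copy()
        for _ in range(200):
            B = B / B.sum(axis=0, keepdims=True)   # columns sum to one
            B = (q / (B @ p))[:, None] * B         # enforce B p = q

        print(B.sum(axis=0), B @ p)                # ~ ones, ~ q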

  5. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    PubMed

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
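
    The generalization mentioned above can be written out. Under the Naive Bayes (conditional independence) assumption, with class densities g and f and marginals g_j and f_j, the log ratio of joint densities decomposes exactly as

        \[
        \log\frac{g(\mathbf{x})}{f(\mathbf{x})} \;=\; \sum_{j=1}^{p} \log\frac{g_{j}(x_{j})}{f_{j}(x_{j})};
        \]

    FANS relaxes the fixed unit coefficients by running penalized logistic regression on the estimated features \(\hat z_{j} = \log\{\hat g_{j}(x_{j})/\hat f_{j}(x_{j})\}\).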

  6. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification

    PubMed Central

    Feng, Yang; Jiang, Jiancheng; Tong, Xin

    2015-01-01

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing. PMID:27185970

  7. A New Monte Carlo Method for Estimating Marginal Likelihoods.

    PubMed

    Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O

    2018-06-01

    Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
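
    For intuition, here is the harmonic mean estimator (one of the single-chain estimators named above) checked on a conjugate toy model with a closed-form marginal likelihood; it is consistent but notoriously unstable, which is what motivates the generalized, partition-weighted estimator:

        import numpy as np
        from scipy import stats
        from scipy.special import logsumexp

        # Harmonic mean estimate of log m(y) from posterior draws, versus the
        # exact value for y_i ~ N(theta, 1) with prior theta ~ N(0, 1).
        rng = np.random.default_rng(3)
        n = 50
        y = rng.normal(1.0, 1.0, n)
        post_var = 1.0 / (1 + n)
        theta = rng.normal(post_var * y.sum(), np.sqrt(post_var), 100_000)

        loglik = stats.norm.logpdf(y[:, None], theta[None, :], 1).sum(axis=0)
        log_m_hm = np.log(theta.size) - logsumexp(-loglik)

        exact = stats.multivariate_normal.logpdf(
            y, np.zeros(n), np.eye(n) + np.ones((n, n)))
        print(log_m_hm, exact)   # close, but the HM estimate is high-variance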

  8. Density probability distribution functions of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2008-10-01

    In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of the average density at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
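
    A minimal sketch of the kind of test performed above, on mock sightline densities (distribution parameters are made up): fit a lognormal and check normality of the log densities.

        import numpy as np
        from scipy import stats

        # Fit a lognormal to mock average densities and test log-normality.
        rng = np.random.default_rng(4)
        n_los = rng.lognormal(mean=-1.0, sigma=0.8, size=500)

        shape, loc, scale = stats.lognorm.fit(n_los, floc=0)
        mu_fit, sigma_fit = np.log(scale), shape
        print(mu_fit, sigma_fit)      # ~ -1.0 and ~0.8
        print(stats.kstest(np.log(n_los), "norm", args=(mu_fit, sigma_fit)))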

  9. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of its mechanical effects on differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precisions and sources of uncertainty. Single CPT soundings were modeled as rational probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated with the Bayesian inverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. The differences between single CPT soundings modeled with a normal distribution and with simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  10. Effects of genotype and population density on growth performance, carcass characteristics, and cost-benefits of broiler chickens in north central Nigeria.

    PubMed

    Yakubu, Abdulmojeed; Ayoade, John A; Dahiru, Yakubu M

    2010-04-01

    The influence of genotype and stocking densities on growth performance, carcass qualities, and cost-benefits of broilers was examined in a 28-day trial. Two hundred and seven 4-week-old birds each of Anak Titan and Arbor Acre hybrid broiler types were randomly assigned to three stocking density treatments of 8.3, 11.1, and 14.3 birds/m(2) in a 2 x 3 factorial arrangement. Final body weight, average weekly body weight and average weekly feed intake were affected (P < 0.05) by strain, with higher means recorded for Arbor Acres. However, average weekly body weight gain and feed conversion ratio were similar (P > 0.05) in both genetic groups. The effect of placement density on some growth parameters did not follow a linear trend. Arbor Acres had significantly (P < 0.05) higher relative (%) fasted body, carcass, back, neck, and wing weights compared to Anak Titans. Housing density effect (P < 0.05) was observed for relative (%) fasted body, shank, and wing weights of birds. However, the relative weights of visceral organs of birds were not significantly (P > 0.05) influenced by genotype and housing density. The economic analysis revealed that a higher gross margin was recorded for Arbor Acres compared to Anak Titans (€2.76 versus €2.19; P < 0.05). Conversely, stocking rate did not exert any influence (P > 0.05) on profit margin. Genotype x stocking density interaction effect was significant for some of the carcass indices investigated. It is concluded that under sub-humid conditions of a tropical environment, the use of the Arbor Acre genetic type as well as a placement density of 14.3 birds/m(2) appeared to be more profitable.

  11. Decadal to centennial oscillations in the upper and lower boundaries of the San Diego, California margin Oxygen Minimum Zone

    NASA Astrophysics Data System (ADS)

    Myhre, S. E.; Hill, T. M.; Frieder, C.; Grupe, B.

    2016-02-01

    Here we present two new marine sediment archives from the continental margin of San Diego, California, USA, which record decadal to centennial oscillations in the hydrographic structure of the Eastern Pacific Oxygen Minimum Zone (OMZ). The two cores, located at 528 and 1,180 m water depth, record oceanographic history across overlapping timescales. Biotic communities, including Foraminifera, Echinodermata, Brachiopoda, Mollusca and Ostracoda, were examined in subsurface (>10 cm sediment core depth) samples. Chronologies for both cores were developed with reservoir-corrected 14C dates of mixed planktonic Foraminifera and linearly interpolated sedimentation rates. Sediment ages for the cores range from 400-1,800 years before present. Indices of foraminiferal community density, diversity and evenness are applied as biotic proxies to track the intensification of the continental margin OMZ. Biotic communities at the shallower site reveal multi-decadal to centennial timescales of OMZ intensification, whereas the deeper site exhibits decadal to multi-decadal scales of hydrographic variability. Hypoxia-associated foraminiferal genera Uvigerina and Bolivina were compositionally dominant during intervals of peak foraminiferal density. Invertebrate assemblages often co-occurred across taxa groups, and thereby provide a broad trophic context for interpreting changes in the margin seafloor. Variability in the advection of Pacific Equatorial Water may mechanistically contribute to this described hydrographic variability. This investigation reconstructs historical timescales of OMZ intensification, seafloor ecological variability, and synchrony between open-ocean processes and regional climate.

  12. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
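
    Steps (i) and (ii) of the mapping recipe can be sketched directly. A minimal version with made-up coordinates and weights (in the study, the weights come from structured expert elicitation):

        import numpy as np
        from scipy.stats import gaussian_kde

        # Gaussian-kernel spatial density per data set, then a weighted
        # linear combination into a single vent-opening probability map.
        rng = np.random.default_rng(5)
        vents = rng.normal(0, 1, (2, 40))       # 2 x N arrays of (x, y) points
        fissures = rng.normal(0.5, 1.5, (2, 25))

        kdes = [gaussian_kde(vents), gaussian_kde(fissures)]
        weights = [0.7, 0.3]                    # illustrative weights

        xx, yy = np.mgrid[-3:3:100j, -3:3:100j]
        grid = np.vstack([xx.ravel(), yy.ravel()])
        pmap = sum(w * k(grid) for w, k in zip(weights, kdes)).reshape(xx.shape)
        pmap /= pmap.sum()                      # probability per grid cell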

  13. Marginal Structural Models for Case-Cohort Study Designs to Estimate the Association of Antiretroviral Therapy Initiation With Incident AIDS or Death

    PubMed Central

    Cole, Stephen R.; Hudgens, Michael G.; Tien, Phyllis C.; Anastos, Kathryn; Kingsley, Lawrence; Chmiel, Joan S.; Jacobson, Lisa P.

    2012-01-01

    To estimate the association of antiretroviral therapy initiation with incident acquired immunodeficiency syndrome (AIDS) or death while accounting for time-varying confounding in a cost-efficient manner, the authors combined a case-cohort study design with inverse probability-weighted estimation of a marginal structural Cox proportional hazards model. A total of 950 adults who were positive for human immunodeficiency virus type 1 were followed in 2 US cohort studies between 1995 and 2007. In the full cohort, 211 AIDS cases or deaths occurred during 4,456 person-years. In an illustrative 20% random subcohort of 190 participants, 41 AIDS cases or deaths occurred during 861 person-years. Accounting for measured confounders and determinants of dropout by inverse probability weighting, the full cohort hazard ratio was 0.41 (95% confidence interval: 0.26, 0.65) and the case-cohort hazard ratio was 0.47 (95% confidence interval: 0.26, 0.83). Standard multivariable-adjusted hazard ratios were closer to the null, regardless of study design. The precision lost with the case-cohort design was modest given the cost savings. Results from Monte Carlo simulations demonstrated that the proposed approach yields approximately unbiased estimates of the hazard ratio with appropriate confidence interval coverage. Marginal structural model analysis of case-cohort study designs provides a cost-efficient design coupled with an accurate analytic method for research settings in which there is time-varying confounding. PMID:22302074
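
    A point-treatment sketch of the weighting idea on simulated data (the study handles time-varying confounding and a case-cohort subsample; here lifelines' weights_col simply carries stabilized inverse probability weights):

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter
        from sklearn.linear_model import LogisticRegression

        # Simulate a confounder L, treatment A depending on L, survival time T.
        rng = np.random.default_rng(6)
        n = 500
        L = rng.normal(size=n)
        A = rng.binomial(1, 1 / (1 + np.exp(-L)))
        T = rng.exponential(1 / np.exp(0.5 * L - 0.7 * A))
        df = pd.DataFrame({"A": A, "T": T, "E": 1})

        # Stabilized inverse probability of treatment weights.
        ps = LogisticRegression().fit(L.reshape(-1, 1), A)\
                                 .predict_proba(L.reshape(-1, 1))[:, 1]
        df["w"] = np.where(A == 1, A.mean() / ps, (1 - A.mean()) / (1 - ps))

        cph = CoxPHFitter().fit(df, duration_col="T", event_col="E",
                                weights_col="w", robust=True)
        print(cph.params_["A"])   # marginal log hazard ratio estimate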

  14. Tumor control probability reduction in gated radiotherapy of non-small cell lung cancers: a feasibility study.

    PubMed

    Siochi, R Alfredo; Kim, Yusung; Bhatia, Sudershan

    2014-10-16

    We studied the feasibility of evaluating tumor control probability (TCP) reductions for tumor motion beyond planned gated radiotherapy margins. Tumor motion was determined from cone-beam CT projections acquired for patient setup, intrafraction respiratory traces, and 4D CTs for five non-small cell lung cancer (NSCLC) patients treated with gated radiotherapy. Tumors were subdivided into 1 mm sections whose positions and doses were determined for each beam-on time point. (The dose calculation model was verified with motion phantom measurements.) The calculated dose distributions were used to generate the treatment TCPs for each patient. The plan TCPs were calculated from the treatment planning dose distributions. The treatment TCPs were compared to the plan TCPs for various models and parameters. Calculated doses matched phantom measurements within 0.3% for up to 3 cm of motion. TCP reductions for excess motion greater than 5 mm ranged from 1.7% to 11.9%, depending on model parameters, and were as high as 48.6% for model parameters that simulated an individual patient. Repeating the worst case motion for all fractions increased TCP reductions by a factor of 2 to 3, while hypofractionation decreased these reductions by as much as a factor of 3. Treatment motion exceeding gating margins by more than 5 mm can lead to considerable TCP reductions. Appropriate margins for excess motion are recommended, unless applying daily tumor motion verification and adjusting the gating window.
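
    For context, a common Poisson linear-quadratic form behind such calculations (generic notation, not necessarily the parametrization used in this study) is

        \[
        \mathrm{TCP} \;=\; \prod_{i}\exp\!\left(-\rho\, v_{i}\, e^{-\alpha D_{i}-\beta d_{i} D_{i}}\right),
        \]

    where v_i are tumour subvolumes accumulating total dose D_i in fractions of size d_i, ρ is the clonogen density, and α, β are linear-quadratic radiosensitivity parameters; motion enters through the per-subvolume accumulated dose.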

  15. Passive margins getting squeezed in the mantle convection vice

    NASA Astrophysics Data System (ADS)

    Husson, Laurent; Yamato, Philippe; Becker, Thorsten; Pedoja, Kevin

    2013-04-01

    Quaternary coastal geomorphology reveals that passive margins underwent wholesale uplift at least during the glacial cycle. In addition, these not-so-passive margins often exhibit long term exhumation and tectonic inversion, which suggest that compression and tectonic shortening could be the mechanism that triggers their overall uplift. We speculate that the compression in the lithosphere gradually increased during the Cenozoic. The many mountain belts at active margins that accompany this event readily witness this increase. Less clear is how that compression increase affects passive margins. In order to address this issue, we design minimalist 2D viscous models to quantify the impact of plate collision on the stress regime. In these models, a sluggish plate is disposed on a less viscous mantle. It is driven by a "mantle conveyor belt" alternatively excited by lateral shear stresses that represent a downwelling on one side, an upwelling on the other side, or both simultaneously. The lateral edges of the plate are either free or fixed, respectively representing the cases of free convergence and collision. In practice, it dramatically changes the upper boundary condition for mantle circulation and subsequently, for the stress field. The flow pattern transiently evolves almost between two end-members, starting from a situation close to a Couette flow to a pattern that looks like a Poiseuille flow with an almost null velocity at the surface (though in the models, the horizontal velocity at the surface is not strictly null, as the lithosphere deforms). In the second case, the lithosphere is highly stressed horizontally and deforms. For an equivalent bulk driving force, compression increases drastically at passive margins if upwellings are active because they push plates towards the collision. Conversely, if only downwellings are activated, compression occurs on one half of the plate and extension on the other half, because only the downwelling is pulling the plate. Thus, active upwellings underneath oceanic plates are required to explain compression at passive margins. This conclusion is corroborated by "real-Earth" 3D spherical models, wherein the flow is alternatively driven by density anomalies inferred from seismic tomography -and therefore include both downwellings at subduction zones and upwellings above the superswells- and density anomalies that correspond to subducting slabs only. While the second scenario mostly compresses the active margins of upper plates and leave other areas at rest, the first scenario efficiently compresses passive margins where the geological record reveals their uplift, exhumation, and tectonic inversion.

  16. Size distribution of submarine landslides along the U.S. Atlantic margin

    USGS Publications Warehouse

    Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.

    2009-01-01

    Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km² and 2410 km² and volumes between 0.002 km³ and 179 km³. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few tens of meters in each event, with only rare, deep excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km³ may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km³), rather than large landslides or very small ones. Alternatively, the log-normal distribution may reflect an inverse power law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material. Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin.

  17. Local response of a glacier to annual filling and drainage of an ice-marginal lake

    USGS Publications Warehouse

    Walder, J.S.; Trabant, D.C.; Cunico, M.; Fountain, A.G.; Anderson, S.P.; Anderson, R. Scott; Malm, A.

    2006-01-01

    Ice-marginal Hidden Creek Lake, Alaska, USA, outbursts annually over the course of 2-3 days. As the lake fills, survey targets on the surface of the 'ice dam' (the glacier adjacent to the lake) move obliquely to the ice margin and rise substantially. As the lake drains, ice motion speeds up, becomes nearly perpendicular to the face of the ice dam, and the ice surface drops. Vertical movement of the ice dam probably reflects growth and decay of a wedge of water beneath the ice dam, in line with established ideas about jökulhlaup mechanics. However, the distribution of vertical ice movement, with a narrow (50-100 m wide) zone where the uplift rate decreases by 90%, cannot be explained by invoking flexure of the ice dam in a fashion analogous to tidal flexure of a floating glacier tongue or ice shelf. Rather, the zone of large uplift-rate gradient is a fault zone: ice-dam deformation is dominated by movement along high-angle faults that cut the ice dam through its entire thickness, with the sense of fault slip reversing as the lake drains. Survey targets spanning the zone of steep uplift gradient move relative to one another in a nearly reversible fashion as the lake fills and drains. The horizontal strain rate also undergoes a reversal across this zone, being compressional as the lake fills, but extensional as the lake drains. Frictional resistance to fault-block motion probably accounts for the fact that lake level falls measurably before the onset of accelerated horizontal motion and vertical downdrop. As the overall fault pattern is the same from year to year, even though ice is lost by calving, the faults must be regularly regenerated, probably by linkage of surface and bottom crevasses as ice is advected toward the lake basin.

  18. The effect of on-line position correction on the dose distribution in focal radiotherapy for bladder cancer

    PubMed Central

    van Rooijen, Dominique C; van de Kamer, Jeroen B; Pool, René; Hulshof, Maarten CCM; Koning, Caro CE; Bel, Arjan

    2009-01-01

    Background The purpose of this study was to determine the dosimetric effect of on-line position correction for bladder tumor irradiation and to find methods to predict and handle this effect. Methods For 25 patients with unifocal bladder cancer intensity modulated radiotherapy (IMRT) with 5 beams was planned. The requirement for each plan was that 99% of the target volume received 95% of the prescribed dose. Tumor displacements from -2.0 cm to 2.0 cm in each dimension were simulated, using 0.5 cm increments, resulting in 729 simulations per patient. We assumed that on-line correction for the tumor was applied perfectly. We determined the correlation between the change in D99% and the change in path length, which is defined here as the distance from the skin to the isocenter for each beam. In addition the margin needed to avoid underdosage was determined and the probability that an underdosage occurs in a real treatment was calculated. Results Adjustments for tumor displacement with perfect on-line position correction resulted in an altered dose distribution. The altered fraction dose to the target varied from 91.9% to 100.4% of the prescribed dose. The mean D99% (± SD) was 95.8% ± 1.0%. There was a modest linear correlation between the difference in D99% and the change in path length of the beams after correction (R2 = 0.590). The median probability that a systematic underdosage occurs in a real treatment was 0.23% (range: 0 - 24.5%). A margin of 2 mm reduced that probability to < 0.001% in all patients. Conclusion On-line position correction does result in an altered target coverage, due to changes in average path length after position correction. An extra margin can be added to prevent underdosage. PMID:19775479

  19. Computation of marginal distributions of peak-heights in electropherograms for analysing single source and mixture STR DNA samples.

    PubMed

    Cowell, Robert G

    2018-05-04

    Current models for single source and mixture samples, and probabilistic genotyping software based on them used for analysing STR electropherogram data, assume simple probability distributions, such as the gamma distribution, to model the allelic peak height variability given the initial amount of DNA prior to PCR amplification. Here we illustrate how amplicon number distributions, for a model of the process of sample DNA collection and PCR amplification, may be efficiently computed by evaluating probability generating functions using discrete Fourier transforms.
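
    The computational trick can be demonstrated on a toy generating function. A sketch assuming a Poisson(λ) number of template molecules, each carried forward with probability p, so the result should match a Poisson(λp) pmf:

        import numpy as np

        # If G(s) = E[s^N] is a probability generating function, evaluating it
        # at the M-th roots of unity and inverse-transforming recovers
        # P(N = k) for k < M:  p_k = (1/M) * sum_j G(w^j) w^(-jk).
        M = 64
        s = np.exp(2j * np.pi * np.arange(M) / M)   # roots of unity

        lam, p = 3.0, 0.4
        G = np.exp(lam * ((1 - p + p * s) - 1))     # thinned-Poisson PGF

        probs = np.fft.fft(G).real / M
        print(probs[:5])                            # Poisson(lam * p) pmf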

  20. Fractional Brownian motion with a reflecting wall.

    PubMed

    Wada, Alexander H O; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
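
    A minimal Monte Carlo sketch of the setup (Cholesky-based fractional Gaussian noise plus a reflecting wall at the origin; α = 2H in the notation above):

        import numpy as np

        # Reflected fractional Brownian motion: correlated increments from a
        # Cholesky factor of the fGn covariance, reflection at x = 0.
        rng = np.random.default_rng(7)
        H, n = 0.75, 512                       # Hurst exponent, number of steps

        k = np.arange(n)
        acov = 0.5 * (np.abs(k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H)
                      - 2 * np.abs(k) ** (2 * H))   # fGn autocovariance
        C = np.linalg.cholesky(acov[np.abs(k[:, None] - k[None, :])])

        def reflected_walk():
            xi = C @ rng.standard_normal(n)    # fractional Gaussian noise
            x = np.zeros(n)
            for i in range(1, n):
                x[i] = abs(x[i - 1] + xi[i])   # reflecting wall at the origin
            return x

        samples = np.array([reflected_walk() for _ in range(1000)])
        print(np.mean(samples[:, -1] ** 2))    # MSD grows like t^(2H) = t^alpha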

  1. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
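
    For reference, the modified Rician intensity distribution referred to above is commonly written (in one standard parametrization; notation assumed here) as

        \[
        p(I) \;=\; \frac{1}{I_{s}}\exp\!\left(-\frac{I+I_{c}}{I_{s}}\right) I_{0}\!\left(\frac{2\sqrt{I\,I_{c}}}{I_{s}}\right),
        \]

    where I_c is the deterministic (coherent) intensity, I_s is the mean speckle intensity, and I_0 is the modified Bessel function of the first kind; the paper's point is that this form fails on axis, where a gamma-type law applies instead.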

  2. On the modelling of scalar and mass transport in combustor flows

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; So, R. M. C.

    1989-01-01

    Results are presented of a numerical study of swirling and nonswirling combustor flows with and without density variations. Constant-density arguments are used to justify closure assumptions invoked for the transport equations for turbulent momentum and scalar fluxes, which are written in terms of density-weighted variables. Comparisons are carried out with measurements obtained from three different axisymmetric model combustor experiments covering recirculating flow, swirling flow, and variable-density swirling flow inside the model combustors. Results show that the Reynolds stress/flux models do a credible job of predicting constant-density swirling and nonswirling combustor flows with passive scalar transport. However, their improvements over algebraic stress/flux models are marginal. The extension of the constant-density models to variable-density flow calculations shows that the models are equally valid for such flows.

  3. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  4. The McDermitt Caldera, NV-OR, USA: Geologic mapping, volcanology, mineralization, and high precision 40Ar/39Ar dating of early Yellowstone hotspot magmatism

    NASA Astrophysics Data System (ADS)

    Henry, C. D.; Castor, S. B.; Starkel, W. A.; Ellis, B. S.; Wolff, J. A.; Heizler, M. T.; McIntosh, W. C.

    2012-12-01

    The irregularly keyhole-shaped, 40 × 30 to 22 km McDermitt caldera formed at 16.35±0.03 Ma (n=4; Fish Canyon sanidine = 28.201 Ma) during eruption of a zoned, aphyric, mildly peralkaline rhyolite to abundantly anorthoclase-phyric, metaluminous dacite (McDermitt Tuff, MDT). Intracaldera MDT is locally strongly rheomorphic and, where MDT and caldera floor are well-exposed along the western margin, contains abundant megabreccia but is a maximum of ~450 m thick. If this thickness is representative of the caldera, intracaldera MDT has a volume of ~400 km3. Outflow MDT is currently known up to 13 km south of the caldera but only 3 km north of the caldera. Maximum outflow thickness is ~100 m, and outflow volume is probably no more than about 10% that of intracaldera MDT. The thickness and volume relations indicate collapse began very early during eruption, and most tuff ponded within the caldera. Outflow is strongly rheomorphic where draped over paleotopography. Late, undated icelandite lavas and domes are probably residual magma from the caldera chamber. Resurgence is expressed as both a broad, symmetrical dome in the north part and a fault-bound uplift in the south part of the caldera. Mineralization associated with the caldera includes Zr-rich U deposits that are indistinguishable in age from the McDermitt Tuff; Hg, Au, and Ga deposits; and Li-rich intracaldera tuffaceous sediments. Although formed during probable regional extension, the caldera is flat-lying and cut only at its west and east margins by much younger, high-angle normal faults. The caldera formed in an area of highly diverse Cenozoic volcanic rocks. The oldest are 39 and 46 Ma metaluminous dacite lavas along the northwest margin. Coarsely plagioclase-phyric to aphyric Steens Basalt lavas crop out around the west, northwest, and northeast margin. An anorthoclase-phyric, low-Si rhyolite lava (16.69±0.02 Ma) that is interbedded with probable Steens lavas northeast of the caldera and a biotite rhyolite lava dome (16.62±0.02 Ma) in the west floor of the caldera are the oldest middle Miocene silicic rocks near the caldera. Other pre-caldera rocks are a mix of variably peralkaline, distal ignimbrites; biotite rhyolite domes and lavas; and variably peralkaline rhyolite lavas that were emplaced between about 16.50 and 16.36 Ma. Silicic volcanism around the McDermitt caldera is some of the oldest of the Yellowstone hotspot track, but two known calderas in NW Nevada and unidentified sources of distal ignimbrites near McDermitt are older than the McDermitt caldera. Initial hotspot silicic volcanism occurred over a large area across NW Nevada, SE Oregon, and SW Idaho.

  5. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.

    2011-01-01

    Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.
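
    The saturating contact-rate argument in the background can be made explicit with one standard form (an illustrative assumption, not the authors' fitted model):

        \[
        c(N) \;=\; \frac{c_{\max}\,N}{N_{0}+N},
        \]

    which grows roughly linearly with host density N when N ≪ N_0 (density-dependent transmission) but approaches the constant c_max at high density, consistent with epidemics in shoaling guppies being governed by social contact rather than density.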

  6. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne

    2011-01-01

    Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.

  7. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location. We…

  8. Effects of environmental covariates and density on the catchability of fish populations and interpretation of catch per unit effort trends

    USGS Publications Warehouse

    Korman, Josh; Yard, Mike

    2017-01-01

    Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.

  9. Wavefronts, actions and caustics determined by the probability density of an Airy beam

    NASA Astrophysics Data System (ADS)

    Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón

    2018-07-01

    The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maximum of the probability density of an Airy beam determines a Hamiltonian system.

  10. Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.

    PubMed

    Guo, Lian; Radisic, Aleksandar; Searson, Peter C

    2005-12-22

    Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
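
    As a toy illustration of this kind of simulation (not the authors' model, which couples Brownian-dynamics ion transport to the surface kinetics), a minimal lattice sketch in Python with assumed attachment probabilities p_sub (metal-on-substrate) and p_met (metal-on-metal) could read:

      import numpy as np

      rng = np.random.default_rng(0)
      L, steps = 100, 200_000
      p_sub, p_met = 1e-4, 0.5            # assumed attachment probabilities
      occ = np.zeros((L, L), dtype=bool)  # occupied lattice sites
      islands = 0

      for t in range(steps):
          i, j = rng.integers(0, L, size=2)
          if occ[i, j]:
              continue
          # does the chosen site touch existing metal?
          nbr = (occ[(i - 1) % L, j] or occ[(i + 1) % L, j]
                 or occ[i, (j - 1) % L] or occ[i, (j + 1) % L])
          if nbr and rng.random() < p_met:        # growth of an existing island
              occ[i, j] = True
          elif not nbr and rng.random() < p_sub:  # nucleation of a new island
              occ[i, j] = True
              islands += 1

      print(f"islands nucleated: {islands}, atoms deposited: {occ.sum()}")

    Tracking the island count against deposition step in this way yields a time-dependent island density without assuming a nucleation rate a priori, which is the spirit of the approach described above.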

  11. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
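
    A rough single-sensor sketch of this Monte Carlo step in Python (the distributions, detector curve and constants below are illustrative assumptions, not the paper's calibrated inputs):

      import numpy as np

      rng = np.random.default_rng(1)

      def p_detect(snr_db, snr50=10.0, slope=1.5):
          # assumed logistic detector characterization: P(detection) vs SNR
          return 1.0 / (1.0 + np.exp(-(snr_db - snr50) / slope))

      def mc_detection_probability(r_m, n=100_000, noise_db=60.0):
          sl = rng.normal(200.0, 5.0, n)    # source level, dB re 1 uPa (assumed)
          beam = -20.0 * rng.random(n)      # crude off-axis beam-pattern loss, dB
          tl = 20.0 * np.log10(r_m)         # spherical-spreading transmission loss
          return p_detect(sl + beam - tl - noise_db).mean()

      for r in (500.0, 2000.0, 8000.0):
          print(f"range {r:>6.0f} m: P(detect a click) ~ {mc_detection_probability(r):.3f}")

    Averaging such range-dependent detection probabilities over the monitored area, together with the call rate and false-positive rate, is what feeds the density estimate.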

  12. Iso-risk air no decompression limits after scoring marginal decompression sickness cases as non-events.

    PubMed

    Murphy, F Gregory; Swingler, Ashleigh J; Gerth, Wayne A; Howle, Laurens E

    2018-01-01

    Decompression sickness (DCS) in humans is associated with reductions in ambient pressure that occur during diving, aviation, or certain manned spaceflight operations. Its signs and symptoms can include, but are not limited to, joint pain, radiating abdominal pain, paresthesia, dyspnea, general malaise, cognitive dysfunction, cardiopulmonary dysfunction, and death. Probabilistic models of DCS allow the probability of DCS incidence and time of occurrence during or after a given hyperbaric or hypobaric exposure to be predicted based on how the gas contents or gas bubble volumes vary in hypothetical tissue compartments during the exposure. These models are calibrated using data containing the pressure and respired gas histories of actual exposures, some of which resulted in DCS, some of which did not, and others in which the diagnosis of DCS was not clear. The latter are referred to as marginal DCS cases. In earlier works, a marginal DCS event was typically weighted as 0.1, with a full DCS event being weighted as 1.0, and a non-event being weighted as 0.0. Recent work has shown that marginal DCS events should be weighted as 0.0 when calibrating gas content models. We confirm this indication in the present work by showing that such models have improved performance when calibrated to data with marginal DCS events coded as non-events. Further, we investigate the ramifications of derating marginal events on model-prescribed air diving no-stop limits. Copyright © 2017 Elsevier Ltd. All rights reserved.
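
    The weighting scheme is easy to state concretely. A minimal sketch of how the outcome weights enter a model's calibration log-likelihood (the predicted probabilities here are hypothetical):

      import numpy as np

      def weighted_log_likelihood(p, w):
          # w = 1.0 for full DCS, 0.0 for a non-event; marginal cases get
          # w = 0.1 under the older convention, w = 0.0 under the new one
          p = np.clip(p, 1e-12, 1.0 - 1e-12)
          return np.sum(w * np.log(p) + (1.0 - w) * np.log(1.0 - p))

      p_model = np.array([0.02, 0.05, 0.01, 0.10])  # hypothetical model predictions
      w_old = np.array([1.0, 0.1, 0.0, 0.1])        # marginal events weighted 0.1
      w_new = np.array([1.0, 0.0, 0.0, 0.0])        # marginal events as non-events
      print(weighted_log_likelihood(p_model, w_old))
      print(weighted_log_likelihood(p_model, w_new))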

  13. On the initiation of subduction zones

    NASA Astrophysics Data System (ADS)

    Cloetingh, Sierd; Wortel, Rinus; Vlaar, N. J.

    1989-03-01

    Analysis of the relation between intraplate stress fields and lithospheric rheology leads to greater insight into the role that initiation of subduction plays in the tectonic evolution of the lithosphere. Numerical model studies show that if after a short evolution of a passive margin (time span a few tens of million years) subduction has not yet started, continued aging of the passive margin alone does not result in conditions more favorable for transformation into an active margin. Although much geological evidence is available supporting the key role small ocean basins play in orogeny and ophiolite emplacement, evolutionary frameworks of the Wilson cycle usually are cast in terms of opening and closing of wide ocean basins. We propose a more limited role for large oceans in the Wilson cycle concept. In general, initiation of subduction at passive margins requires the action of external plate-tectonic forces, which will be most effective for young passive margins prestressed by thick sedimentary loads. It is not clear how major subduction zones (such as those presently ringing the Pacific Basin) form, but it is unlikely they form merely by aging of oceanic lithosphere. Conditions likely to exist in very young oceanic regions are quite favorable for the development of subduction zones, which might explain the lack of preservation of back-arc basins and marginal seas. Plate reorganizations probably occur predominantly by the formation of new spreading ridges, because stress relaxation in the lithosphere takes place much more efficiently through this process than through the formation of new subduction zones.

  14. Carbonate mound development in contrasting settings on the Irish margin

    NASA Astrophysics Data System (ADS)

    van der Land, Cees; Eisele, Markus; Mienis, Furu; de Haas, Henk; Hebbeln, Dierk; Reijmer, John J. G.; van Weering, Tjeerd C. E.

    2014-01-01

    Cold-water coral carbonate mounds, formed by framework building cold-water corals, are found in several mound provinces on the Irish margin. Differences in cold-water coral mound development rates and sediment composition between mounds at the southwest Rockall Trough margin and the Galway Mound in the Porcupine Seabight are investigated. Variations in sediment composition in the two mound provinces are related to the local environmental conditions and sediment sources. Mound accumulation rates appear higher at the Galway Mound, probably due to a higher influx of hemipelagic fine-grained non-carbonate sediments. In both cold-water coral mound areas, mound growth has been continuous for the last ca 11,000 years; before this period, several hiatuses and unconformities exist in the mound record. The most recent unconformity can be correlated across multiple mounds and mound provinces at the Irish margin on the basis of apparent age. On the southwest Rockall Trough margin these hiatuses/unconformities are associated with post-depositional aragonite dissolution in, and lithification of, certain intervals, while at Galway Mound no lithification occurs. This study revealed that the influx and types of material transported to cold-water coral mounds may have a direct impact on the carbonate mound accumulation rate and on post-depositional processes. Significantly, the Logachev Mounds on the SW Rockall Trough margin accumulate slower but, because they contain lithified layers, are less susceptible to erosion. This net effect may account for their larger size compared to the Belgica Mounds.

  15. Mercury profiles in sediment from the marginal high of Arabian Sea: an indicator of increasing anthropogenic Hg input.

    PubMed

    Chakraborty, Parthasarathi; Vudamala, Krushna; Chennuri, Kartheek; Armoury, Kazip; Linsy, P; Ramteke, Darwin; Sebastian, Tyson; Jayachandran, Saranya; Naik, Chandan; Naik, Richita; Nath, B Nagender

    2016-05-01

    Total Hg distribution and speciation were determined in two sediment cores collected from the western continental marginal high of India. Total Hg content in the sediment was found to gradually increase (by approximately two times) towards the surface in both the cores. It was found that Hg was preferentially bound to sulfide under anoxic conditions. However, redox-mediated reactions in the upper part of the core influenced the total Hg content in the sediment cores. This study suggests that a probable increase in authigenic and allogenic Hg deposition contributed to the increasing Hg concentration in the surface sediment of the study area.

  16. On the formation of granulites

    USGS Publications Warehouse

    Bohlen, S.R.

    1991-01-01

    The tectonic settings for the formation and evolution of regional granulite terranes and the lowermost continental crust can be deduced from pressure-temperature-time (P-T-time) paths and constrained by petrological and geophysical considerations. P-T conditions deduced for regional granulites require transient, average geothermal gradients of greater than 35°C km-1, implying minimum heat flow in excess of 100 mW m-2. Such high heat flow is probably caused by magmatic heating. Tectonic settings wherein such conditions are found include convergent plate margins, continental rifts, hot spots, and the margins of large, deep-seated batholiths. Cooling paths can be constrained by solid-solid and devolatilization equilibria and geophysical modelling. -from Author

  17. Effect of wild flowers on oviposition of Hippodamia variegata (Coleoptera: Coccinellidae) in the laboratory.

    PubMed

    Bertolaccini, Isabel; Núñez-Pérez, Etelvina; Tizado, Emilio Jorge

    2008-12-01

    Marginal vegetation in crops is very important for natural enemies and their pest control capacity. The effects of Brassica nigra L. (Brassicaceae), Daucus carota L. (Apiaceae), and Sonchus oleraceous L. (Asteraceae) flowers as supplemental food on the number of eggs laid during 7 d and on the preoviposition time in Hippodamia variegata (Goeze, 1777) were studied in the laboratory under conditions of several densities of Acyrthosiphon pisum (Harris, 1776). The results show the presence of flowers of Brassica and Sonchus increased egg production 1.44X and doubled the pre-oviposition period (2.13X). This suggests that the availability of flowers of Brassica and Sonchus as supplemental foods (pollen and nectar) in the marginal vegetation of crops can serve to improve reproductive performance of H. variegata, specifically under conditions of prey limitation. Thus, the increase in fitness of this predator allows a better response to changes in pest density.

  18. [Specific features of nesting bird populations in forest-meadow-field landscapes of Meshchovsk Opolye reflect the diversity of their biotope connections].

    PubMed

    Kut'in, S D; Konstantinov, V M

    2008-01-01

    Studies on specific features of nesting bird populations in patchy landscapes were performed in Meshchovsk Opolye, Kaluga Region, from 1981 to 1990. Indices of similarity between the avifaunas of agricultural fields, lowland bogs, and small-leaved forests markedly differed from parameters of their population density in rank and value. In the series of biotopes differing in the relative amount of woodland, from central areas of small-leaved forests to forest margins and then to forest islands gradually decreasing in size, the birds segregated into two distinct groups, one characteristic of forest margins and large forest islands and the other characteristic of small and very small forest islands. Specific features of bird density distribution in forest-meadow-field landscapes of Meshchovsk Opolye reflected heterogeneity of their populations manifested in diverse connections with nesting biotopes.

  19. Oak regeneration and overstory density in the Missouri Ozarks

    Treesearch

    David R. Larsen; Monte A. Metzger

    1997-01-01

    Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...

  20. Geophysical evidence for the extent of crustal types and the type of margin along a profile in the northeastern Baffin Bay

    NASA Astrophysics Data System (ADS)

    Altenbernd, Tabea; Jokat, Wilfried; Heyde, Ingo; Damm, Volkmar

    2015-11-01

    Investigating the crust of northern Baffin Bay provides valuable indications for the still debated evolution of this area. The crust of the southern Melville Bay is examined based on wide-angle seismic and gravity data. The resulting P wave velocity, density, and geological models give insights into the crustal structure. A stretched and rifted continental crust underneath southern Melville Bay is up to 30 km thick, with crustal velocities ranging between 5.5 and 6.9 km/s. The deep Melville Bay Graben contains a 9 km thick infill with velocities of 4 to 5.2 km/s in its lowermost part. West of the Melville Bay Ridge, a ~80 km wide and partly only 5 km thick Continent-Ocean Transition (COT) is present. West of the COT, up to 5 km thick sedimentary layers cover a 4.3 to 7 km thick, two-layered oceanic crust. The upper oceanic layer 2 has velocities of 5.2 to 6.0 km/s; the oceanic layer 3 has been modeled with rather low velocities of 6.3 to 6.9 km/s. Low velocities of 7.8 km/s characterize the probably serpentinized upper mantle underneath the thin crust. The serpentinized upper mantle and low thickness of the oceanic crust are another indication for slow or ultraslow spreading during the formation of the oceanic part of the Baffin Bay. By comparing our results on the crustal structure with other wide-angle seismic profiles recently published, differences in the geometry and structure of the crust and the overlying sedimentary cover are revealed. Moreover, the type of margin and the extent of crustal types in the Melville Bay area are discussed.

  1. Widespread methane leakage from the sea floor on the northern US Atlantic margin

    USGS Publications Warehouse

    Skarke, Adam; Ruppel, Carolyn; Kodis, Mali'o; Brothers, Daniel S.; Lobecker, Elizabeth A.

    2014-01-01

    Methane emissions from the sea floor affect methane inputs into the atmosphere, ocean acidification and de-oxygenation, the distribution of chemosynthetic communities and energy resources. Global methane flux from seabed cold seeps has only been estimated for continental shelves, at 8 to 65 Tg CH4 yr−1, yet other parts of marine continental margins are also emitting methane. The US Atlantic margin has not been considered an area of widespread seepage, with only three methane seeps recognized seaward of the shelf break. However, massive upper-slope seepage related to gas hydrate degradation has been predicted for the southern part of this margin, even though this process has previously only been recognized in the Arctic. Here we use multibeam water-column backscatter data that cover 94,000 km2 of sea floor to identify about 570 gas plumes at water depths between 50 and 1,700 m between Cape Hatteras and Georges Bank on the northern US Atlantic passive margin. About 440 seeps originate at water depths that bracket the updip limit for methane hydrate stability. Contemporary upper-slope seepage there may be triggered by ongoing warming of intermediate waters, but authigenic carbonates observed imply that emissions have continued for more than 1,000 years at some seeps. Extrapolating the upper-slope seep density on this margin to the global passive margin system, we suggest that tens of thousands of seeps could be discoverable.

  2. Technical Considerations for Red Marking Ink Use When Interpreting Specimen Radiographs: Case Report.

    PubMed

    Brice, Matthew E; Gossweiler, Marisa; Bennett, Larry

    2017-03-01

    Artifacts are universal across all imaging modalities, varying in their conspicuity and significance. In this report three patients with pathology-proven breast cancer who had densities masquerading as microcalcifications at the resection margins of the lumpectomy specimens, but had negative microscopic margins, will be discussed. It was determined that these pseudocalcifications were the result of ink precipitates from a commonly utilized tissue marking dye. This artifact was further evaluated and reproduced by utilizing a boneless chicken breast as a phantom. © RSNA, 2016.

  3. Aging ballistic Lévy walks

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Zorawik, Tomasz

    2017-02-01

    Aging can be observed for numerous physical systems. In such systems statistical properties [like probability distribution, mean square displacement (MSD), first-passage time] depend on a time span tₐ between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay tₐ. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe a different behavior of the MSD when tₐ ≪ t and tₐ ≫ t.
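
    A crude way to see the aging effect numerically is to simulate the walk and measure the MSD of displacements that begin only after an aging time tₐ. A sketch for the ballistic Lévy walk (all parameters are illustrative):

      import numpy as np

      rng = np.random.default_rng(7)

      def position(t_eval, alpha=0.7):
          # ballistic Levy walk: speed +/-1, Pareto flight durations (alpha < 1)
          t, x, out, i = 0.0, 0.0, np.empty(t_eval.size), 0
          while i < t_eval.size:
              tau = rng.pareto(alpha) + 1.0
              v = rng.choice((-1.0, 1.0))
              while i < t_eval.size and t_eval[i] <= t + tau:
                  out[i] = x + v * (t_eval[i] - t)
                  i += 1
              t, x = t + tau, x + v * tau
          return out

      ta, lags = 100.0, np.linspace(1.0, 50.0, 10)
      walks = np.array([position(np.concatenate(([ta], ta + lags)))
                        for _ in range(4000)])
      aged_msd = ((walks[:, 1:] - walks[:, :1]) ** 2).mean(axis=0)
      print(np.round(aged_msd, 1))

    Repeating the experiment with ta = 0 and comparing the curves shows how weakly the standard walk ages; the wait-first and jump-first variants would need their own update rules.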

  4. Evidence for four- and three-wave interactions in solar type III radio emissions

    NASA Astrophysics Data System (ADS)

    Thejappa, G.; MacDowall, R. J.; Bergamo, M.

    2013-08-01

    The high time resolution observations obtained by the STEREO/WAVES experiment show that in the source regions of solar type III radio bursts, Langmuir waves often occur as intense localized wave packets with short durations of only a few ms. One of these wave packets shows that it is a three-dimensional field structure with W_L/(n_e T_e) ~ 10⁻³, where W_L is the peak energy density, and n_e and T_e are the electron density and temperature, respectively. For this wave packet, the conditions of the oscillating two-stream instability (OTSI) and supersonic collapse are satisfied within the error range of determination of the main parameters. The density cavity observed during this wave packet indicates that its depth, width and temporal coincidence are consistent with those of a caviton generated by the ponderomotive force of the collapsing wave packet. The spectrum of each of the parallel and perpendicular components of the wave packet contains a primary peak at f_pe, two secondary peaks at f_pe ± f_S and a low-frequency enhancement below f_S, which, as indicated by the frequency and wave number resonance conditions, and the fast Fourier transform (FFT)-based tricoherence spectral peak at (f_pe, f_pe, f_pe + f_S, f_pe − f_S), are coupled to each other by the OTSI type of four-wave interaction (f_pe is the local electron plasma frequency and f_S is the frequency of ion sound waves). In addition to the primary peak at f_pe, each of these spectra also contains a peak at 2f_pe, which, as indicated by the frequency and wave number resonance conditions and the wavelet-based bicoherence spectral peak at (f_pe, f_pe), appears to correspond to the second harmonic electromagnetic waves generated as a result of coalescence of oppositely propagating sidebands excited by the OTSI. Thus, these observations for the first time provide combined evidence that (1) the OTSI and related strong turbulence processes play a significant role in the stabilization of the electron beam, (2) the coalescence of the oppositely propagating up- and down-shifted daughter Langmuir waves excited by the OTSI probably is the emission mechanism of the second harmonic radiation, and (3) the Langmuir collapse follows the route of OTSI in some of the type III radio bursts.

  5. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P⁻¹ functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.

  6. Benchmarks for detecting 'breakthroughs' in clinical trials: empirical assessment of the probability of large treatment effects using kernel density estimation.

    PubMed

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin

    2014-10-21

    To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting treatment with large effects is 10% (5-25%), and that the probability of detecting treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
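
    A minimal version of the density-estimation step (plain weighted Gaussian KDE rather than the weighted adaptive variant used in the paper, applied to synthetic effect sizes):

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)
      effects = rng.lognormal(mean=0.0, sigma=0.25, size=500)   # fake hazard ratios
      weights = rng.integers(50, 1000, size=500).astype(float)  # fake trial sizes

      kde = gaussian_kde(effects, weights=weights / weights.sum())
      grid = np.linspace(0.2, 2.5, 1000)
      pdf = kde(grid)

      # probability mass on "large" effects, e.g. hazard ratio < 0.7
      mask = grid < 0.7
      print(f"P(large treatment effect) ~ {np.trapz(pdf[mask], grid[mask]):.3f}")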

  7. Marginal adaptation and retention of a glass-ionomer, resin-modified glass-ionomers and a polyacid-modified resin composite in cervical Class-V lesions.

    PubMed

    Gladys, S; Van Meerbeek, B; Lambrechts, P; Vanherle, G

    1998-07-01

    An 18-month follow-up clinical trial of one conventional glass-ionomer (HIFI Master Palette), three resin-modified glass-ionomers (Fuji II LC, Vitremer, 3M Exp. 155) and one polyacid-modified resin composite (Dyract) was conducted to evaluate their clinical effectiveness in Class-V cervical lesions. In addition, the interface between dentin and two resin-modified glass-ionomers and one polyacid-modified resin composite was examined by scanning electron microscopy (SEM). After evaluation of the restorations immediately following placement (baseline), all patients were subjected to a strict recall schedule with controls at 6, 12 and 18 months. The clinical effectiveness was recorded in terms of retention and marginal integrity, clinical microleakage, caries recurrence, and tooth vitality. A χ²-test (p < 0.05) was used to test for significant differences between materials. In case of restoration loss or special defects, a replica was made to examine the surface texture and restoration margins by SEM. In vitro, the interface was examined by SEM after an argon-ion-beam etching technique was used to enhance surface relief and disclose interfacial substructures. Retention appeared to be good for all the materials tested. Marginal discrepancies were localized at the incisal enamel and/or the cervical dentin margin, except for the polyacid-modified resin composite, which showed most of the defects at the incisal enamel margin. None of the systems could guarantee margins free of microleakage for a long time. In vitro, the type of dentin pre-treatment defines to a great extent the morphology of the resultant interface between dentin and the restorative material tested. In this clinical study, the retention rate of the tested materials was good and even excellent for some products. Perfect marginal adaptation deteriorated too fast. The marginal adaptation of the polyacid-modified resin composite at the enamel site would probably have been better with the use of selective enamel or total acid etching. Marginal sealing remains a problem. Future research should concentrate on improving the marginal adaptation and sealing capacities before a broader clinical use can be advocated.

  8. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  9. Inclusion of Radiation Environment Variability in Total Dose Hardness Assurance Methodology

    NASA Technical Reports Server (NTRS)

    Xapsos, M. A.; Stauffer, C.; Phan, A.; McClure, S. S.; Ladbury, R. L.; Pellish, J. A.; Campola, M. J.; LaBel, K. A.

    2016-01-01

    Variability of the space radiation environment is investigated with regard to parts categorization for total dose hardness assurance methods. It is shown that it can have a significant impact. A modified approach is developed that uses current environment models more consistently and replaces the radiation design margin concept with one of failure probability during a mission.

  10. Tectonic configuration of the western Arabian continental margin, southern Red Sea, Kingdom of Saudi Arabia

    USGS Publications Warehouse

    Bohannon, R.G.

    1987-01-01

    A tectonic reconstruction of pre-Red Sea Afro/Arabia suggests that the early rift was narrow with intense extension confined to an axial belt 20 to 40 km wide. Steep Moho slopes probably developed during rift formation as indicated by published gravity data, two published seismic interpretations and the surface geology.

  11. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1980-01-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  12. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Astrophysics Data System (ADS)

    Kastner, S. O.; Bhatia, A. K.

    1980-08-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  13. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  14. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
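
    The multi-Rayleigh model itself is just a weighted mixture of Rayleigh densities with different scale parameters; a minimal sketch (the weights and scales below are illustrative, not fitted values):

      import numpy as np

      def multi_rayleigh_pdf(x, weights, sigmas):
          x = np.asarray(x, dtype=float)[:, None]
          w = np.asarray(weights, dtype=float)
          s2 = np.asarray(sigmas, dtype=float) ** 2
          comp = (x / s2) * np.exp(-x**2 / (2.0 * s2))  # Rayleigh pdf per component
          return comp @ w                               # mixture density

      x = np.linspace(0.0, 5.0, 6)
      print(multi_rayleigh_pdf(x, weights=[0.6, 0.3, 0.1], sigmas=[0.5, 1.0, 2.0]))

    Echo samples whose modeling error under this mixture is large are the non-speckle signals the authors remove before estimating tissue characteristics.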

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, L; Soldner, A; Kirk, M

    Purpose: The beam range uncertainty presents a special challenge for proton therapy. Novel technologies currently under development offer strategies to reduce the range uncertainty [1,2]. This work quantifies the potential advantages that could be realized by such a reduction for dosimetrically challenging chordomas at the base of skull. Therapeutic improvement was assessed by evaluating tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP). Methods: Treatment plans were made for a modulated-scanned proton delivery technique using the Eclipse treatment planning system. The prescription dose was 7920 cGy to the CTV. Three different range uncertainty scenarios were considered: 5 mm (3.5% of the beam range + 1 mm, representing current clinical practice, “Curr”), 2 mm (1.3%), and 1 mm (0.7%). For each of 4 patients, 3 different PTVs were defined via uniform expansion of the CTV by the value of the range uncertainty. TCP and NTCP values for organs-at-risk (OARs) were calculated using the Lyman-Kutcher-Burman formalism [3] and published model parameters [Terahara [4], QUANTEC S10, Burman Red Journal v21 pp 123]. Our plan optimization strategy was to achieve PTV dose close to prescription while maintaining OAR NTCP values at or better than the Curr plan. Results: The average TCP values for the 5, 2, and 1 mm range uncertainty scenarios are 51%, 55% and 65%. The improvement in TCP for patients was between 4 and 30%, depending primarily on the proximity of the GTV to OAR. The average NTCPs for the brainstem and cord were about 4% and 1%, respectively, for all target margins. Conclusion: For base of skull chordomas, reduced target margins can substantially increase the TCP without increasing the NTCP. This work demonstrates the potential significance of a reduction in the range uncertainty for proton beams.
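
    For reference, a minimal sketch of the Lyman-Kutcher-Burman NTCP formalism named above (the parameter values and toy dose-volume histogram are placeholders, not those of the study):

      import numpy as np
      from scipy.stats import norm

      def lkb_ntcp(dose_bins, vol_fracs, td50, m, n):
          # generalized EUD with a = 1/n, then NTCP = Phi((gEUD - TD50)/(m * TD50))
          a = 1.0 / n
          geud = np.sum(vol_fracs * dose_bins ** a) ** (1.0 / a)
          return norm.cdf((geud - td50) / (m * td50))

      dose = np.array([10.0, 30.0, 50.0, 60.0])  # Gy, toy DVH dose bins
      vol = np.array([0.4, 0.3, 0.2, 0.1])       # fractional volumes per bin
      print(f"NTCP ~ {lkb_ntcp(dose, vol, td50=65.0, m=0.14, n=0.25):.3%}")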

  16. Evolution of the continental margin of southern Spain and the Alboran Sea

    USGS Publications Warehouse

    Dillon, William P.; Robb, James M.; Greene, H. Gary; Lucena, Juan Carlos

    1980-01-01

    Seismic reflection profiles and magnetic intensity measurements were collected across the southern continental margin of Spain and the Alboran basin between Spain and Africa. Correlation of the distinct seismic stratigraphy observed in the profiles to stratigraphic information obtained from cores at Deep Sea Drilling Project site 121 allows effective dating of tectonic events. The Alboran Sea basin occupies a zone of motion between the African and Iberian lithospheric plates that probably began to form by extension in late Miocene time (Tortonian). At the end of Miocene time (end of Messinian) profiles show that an angular unconformity was cut, and then the strata were block faulted before subsequent deposition. The erosion of the unconformity probably resulted from lowering of Mediterranean sea level by evaporation when the previous channel between the Mediterranean and Atlantic was closed. Continued extension probably caused the block faulting and, eventually the opening of the present channel to the Atlantic through the Strait of Gibraltar and the reflooding of the Mediterranean. Minor tectonic movements at the end of Calabrian time (early Pleistocene) apparently resulted in minor faulting, extensive transgression in southeastern Spain, and major changes in the sedimentary environment of the Alboran basin. Active faulting observed at five locations on seismic profiles seems to form a NNE zone of transcurrent movement across the Alboran Sea. This inferred fault trend is coincident with some bathymetric, magnetic and seismicity trends and colinear with active faults that have been mapped on-shore in Morocco and Spain. The faults were probably caused by stresses related to plate movements, and their direction was modified by inherited fractures in the lithosphere that floors the Alboran Sea.

  17. Laboratory-Tutorial Activities for Teaching Probability

    ERIC Educational Resources Information Center

    Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.

    2006-01-01

    We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…

  18. Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA

    USGS Publications Warehouse

    Yarra, Allyson N.; Magoulick, Daniel D.

    2018-01-01

    Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance are strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.

  19. Pleural pressure theory revisited: a role for capillary equilibrium.

    PubMed

    Casha, Aaron R; Caruana-Gauci, Roberto; Manche, Alexander; Gauci, Marilyn; Chetcuti, Stanley; Bertolaccini, Luca; Scarci, Marco

    2017-04-01

    Theories elucidating pleural pressures should explain all observations including the equal and opposite recoil of the chest wall and lungs, the less than expected pleural hydrostatic gradient and its variation at lobar margins, why pleural pressures are negative and how pleural fluid circulation functions. A theoretical model describing equilibrium between buoyancy, hydrostatic forces, and capillary forces is proposed. The capillary equilibrium model described depends on control of pleural fluid volume and protein content, powered by an active pleural pump. The interaction between buoyancy forces, hydrostatic pressure and capillary pressure was calculated, and values for pleural thickness and pressure were determined using values for surface tension, contact angle, pleural fluid and lung densities found in the literature. Modelling can explain the issue of the differing hydrostatic vertical pleural pressure gradient at the lobar margins for buoyancy forces between the pleural fluid and the lung floating in the pleural fluid according to Archimedes' hydrostatic paradox. The capillary equilibrium model satisfies all salient requirements for a pleural pressure model, with negative pressures maximal at the apex, equal and opposite forces in the lung and chest wall, and circulatory pump action. This model predicts that pleural effusions cannot occur in emphysema unless concomitant heart failure increases lung density. This model also explains how the non-confluence of the lung with the chest wall (e.g., lobar margins) makes the pleural pressure more negative, and why pleural pressures would be higher after an upper lobectomy compared to a lower lobectomy. Pathological changes in pleural fluid composition and lung density alter the equilibrium between capillarity and buoyancy hydrostatic pressure to promote pleural effusion formation.

  20. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n ≥ N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A⁻¹B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
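
    The statement is easy to probe numerically. A sketch that samples the sub-block eigenvalues (the sizes are arbitrary; the Haar unitary is built via the standard QR construction):

      import numpy as np

      rng = np.random.default_rng(3)

      def haar_unitary(dim):
          # QR of a complex Ginibre matrix; fixing the phases of diag(R)
          # makes Q Haar-distributed on U(dim)
          z = (rng.standard_normal((dim, dim))
               + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2.0)
          q, r = np.linalg.qr(z)
          d = np.diag(r)
          return q * (d / np.abs(d))

      N, n, samples = 4, 8, 2000
      eigs = np.concatenate([np.linalg.eigvals(haar_unitary(N + n)[:N, :N])
                             for _ in range(samples)])
      print("max |z|    =", np.abs(eigs).max())   # sub-block eigenvalues lie in the unit disk
      print("mean |z|^2 =", (np.abs(eigs) ** 2).mean())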

  1. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
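
    For orientation, the quantity being computed can also be approximated by brute-force simulation. A crude Monte Carlo sketch for a plain Brownian motion hitting a fixed barrier (parameters assumed, and far slower and less accurate than the Volterra approach of the paper):

      import numpy as np

      rng = np.random.default_rng(4)
      paths, n_steps, T, barrier = 20_000, 2_000, 5.0, 1.0
      dt = T / n_steps
      x = np.zeros(paths)
      hit_time = np.full(paths, np.inf)

      for k in range(1, n_steps + 1):
          x += np.sqrt(dt) * rng.standard_normal(paths)
          newly_hit = (x >= barrier) & np.isinf(hit_time)
          hit_time[newly_hit] = k * dt

      # histogram of hitting times, normalized over the paths that hit before T
      hist, edges = np.histogram(hit_time[np.isfinite(hit_time)],
                                 bins=50, range=(0.0, T), density=True)
      print("first bins of the estimated first-passage density:", hist[:5])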

  2. Committor of elementary reactions on multistate systems

    NASA Astrophysics Data System (ADS)

    Király, Péter; Kiss, Dóra Judit; Tóth, Gergely

    2018-04-01

    In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation needs the projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize multiple elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.

  3. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
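
    Not the MRE algorithm itself, but a minimal bounded least-squares stand-in for the same class of problem Gm = d, showing where the bound constraints on m enter (all data here are synthetic, and Python is used in place of the paper's MATLAB):

      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(5)
      n_data, n_model = 20, 40                   # underdetermined, as in MRE settings
      G = rng.random((n_data, n_model))
      m_true = np.clip(rng.normal(0.5, 0.2, n_model), 0.0, 1.0)
      d = G @ m_true + 0.01 * rng.standard_normal(n_data)

      res = lsq_linear(G, d, bounds=(0.0, 1.0))  # enforce 0 <= m <= 1
      print("data misfit:", np.linalg.norm(G @ res.x - d))

    MRE differs in that it returns a full multivariate density for m (and its expected value) rather than a single bounded point estimate.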

  4. Using areas of known occupancy to identify sources of variation in detection probability of raptors: taking time lowers replication effort for surveys.

    PubMed

    Murn, Campbell; Holloway, Graham J

    2016-10-01

    Species occurring at low density can be difficult to detect and, if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture, Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033) and based on this the mean number of visits required to determine within 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that low detection probabilities of some species mean that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
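
    The replication figure follows from a one-line calculation: if each visit detects the species with probability p, absence can be inferred at 95% confidence once (1 − p)^n ≤ 0.05.

      import math

      p = 0.207                    # averaged detection probability from the abstract
      n = math.log(0.05) / math.log(1.0 - p)
      print(math.ceil(n))          # -> 13 visits, matching the reported figure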

  5. Hydrologic risk analysis in the Yangtze River basin through coupling Gaussian mixtures into copulas

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, W. W.; Huang, G. H.; Li, Y. P.; Huang, K.; Li, Z.

    2016-02-01

    In this study, a bivariate hydrologic risk framework is proposed through coupling Gaussian mixtures into copulas, leading to a coupled GMM-copula method. In the coupled GMM-copula method, the marginal distributions of flood peak, volume and duration are quantified through Gaussian mixture models and the joint probability distributions of flood peak-volume, peak-duration and volume-duration are established through copulas. The bivariate hydrologic risk is then derived based on the joint return period of flood variable pairs. The proposed method is applied to the risk analysis for the Yichang station on the main stream of the Yangtze River, China. The results indicate that (i) the bivariate risk for flood peak-volume would remain constant for flood volumes less than 1.0 × 10⁵ m³/s day, but present a significant decreasing trend for flood volumes larger than 1.7 × 10⁵ m³/s day; and (ii) the bivariate risk for flood peak-duration would not change significantly for flood durations less than 8 days, and then decrease significantly as the duration becomes larger. The probability density functions (pdfs) of the flood volume and duration conditional on flood peak can also be generated through the fitted copulas. The results indicate that the conditional pdfs of flood volume and duration follow bimodal distributions, with the occurrence frequency of the first vertex decreasing and that of the latter increasing as the flood peak increases. The conclusions obtained from the bivariate hydrologic analysis can provide decision support for flood control and mitigation.
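
    A minimal sketch of the GMM-marginal plus copula construction on synthetic data (a Gaussian copula stands in here; the paper's fitted copula family and data are not reproduced):

      import numpy as np
      from scipy import stats
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(6)
      peak = np.concatenate([rng.normal(30, 5, 300), rng.normal(55, 8, 200)])
      vol = 2.0 * peak + rng.normal(0, 8, peak.size)   # correlated flood volume

      def gmm_cdf(x, data, k=2):
          gm = GaussianMixture(k, random_state=0).fit(data.reshape(-1, 1))
          w, mu = gm.weights_, gm.means_.ravel()
          sd = np.sqrt(gm.covariances_.ravel())
          return sum(wi * stats.norm.cdf(x, mi, si) for wi, mi, si in zip(w, mu, sd))

      # probability integral transform through the GMM marginals
      u = np.clip(gmm_cdf(peak, peak), 1e-6, 1 - 1e-6)
      v = np.clip(gmm_cdf(vol, vol), 1e-6, 1 - 1e-6)
      rho = np.corrcoef(stats.norm.ppf(u), stats.norm.ppf(v))[0, 1]

      # joint "AND" exceedance at a design level, then its return period
      up, vp = gmm_cdf(60.0, peak), gmm_cdf(130.0, vol)
      cop = stats.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
      p_both = 1.0 - up - vp + cop.cdf(stats.norm.ppf([up, vp]))
      print("AND return period (in sampling units):", 1.0 / p_both)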

  6. Evolved dispersal strategies at range margins

    PubMed Central

    Dytham, Calvin

    2009-01-01

    Dispersal is a key component of a species's ecology and will be under different selection pressures in different parts of the range. For example, a long-distance dispersal strategy suitable for continuous habitat at the range core might not be favoured at the margin, where the habitat is sparse. Using a spatially explicit, individual-based, evolutionary simulation model, the dispersal strategies of an organism that has only one dispersal event in its lifetime, such as a plant or sessile animal, are considered. Within the model, removing habitat, increasing habitat turnover, increasing the cost of dispersal, reducing habitat quality or altering vital rates imposes range limits. In most cases, there is a clear change in the dispersal strategies across the range, although increasing death rate towards the margin has little impact on evolved dispersal strategy across the range. Habitat turnover, reduced birth rate and reduced habitat quality all increase evolved dispersal distances at the margin, while increased cost of dispersal and reduced habitat density lead to lower evolved dispersal distances at the margins. As climate change shifts suitable habitat poleward, species ranges will also start to shift, and it will be the dispersal capabilities of marginal populations, rather than core populations, that will influence the rate of range shifting. PMID:19324810

  7. End-anchored polymers in good solvents from the single chain limit to high anchoring densities.

    PubMed

    Whitmore, Mark D; Grest, Gary S; Douglas, Jack F; Kent, Michael S; Suo, Tongchuan

    2016-11-07

    An increasing number of applications utilize grafted polymer layers to alter the interfacial properties of solid substrates, motivating refinement in our theoretical understanding of such layers. To assess existing theoretical models of them, we have investigated end-anchored polymer layers over a wide range of grafting densities, σ, ranging from the single chain limit to high anchoring densities, chain lengths ranging over two orders of magnitude, and very good and marginally good solvent conditions. We compare Monte Carlo and molecular dynamics simulations, numerical self-consistent field calculations, and experimental measurements of the average layer thickness, h, with renormalization group theory, the Alexander-de Gennes mushroom theory, and the classical brush theory. Our simulations clearly indicate that appreciable inter-chain interactions exist at all simulated areal anchoring densities so that there is no mushroom regime in which the layer thickness is independent of σ. Moreover, we find that there is no high coverage regime in which h follows the predicted scaling, h ∼ Nσ^(1/3), for classical polymer brushes either. Given that no completely adequate analytic theory seems to exist that spans wide ranges of N and σ, we applied scaling arguments for h as a function of a suitably defined reduced anchoring density, defined in terms of the solution radius of gyration of the polymer chains and N. We find that such a scaling approach enables a smooth, unified description of h in very good solvents over the full range of anchoring density and chain lengths, although this type of data reduction does not apply to marginal solvent quality conditions.

  8. Correlations between pathologic subtypes/immunohistochemical implication and CT characteristics of lung adenocarcinoma ≤ 1 cm with ground-glass opacity.

    PubMed

    Wu, Fang; Cai, Zu-long; Tian, Shu-ping; Jin, Xin; Jing, Rui; Yang, Yue-qing; Li, Ying-na; Zhao, Shao-hong

    2015-04-01

    To discuss the correlation of pathologic subtypes and immunohistochemical implication with CT features of lung adenocarcinoma 1 cm or less in diameter with focal ground-glass opacity (fGGO). CT appearances of 59 patients who underwent curative resection of lung adenocarcinoma ≤ 1 cm with fGGO were analyzed in terms of lesion location, size, density, shape (round, oval, polygonal, irregular), margin (smooth, lobular, spiculated, lobular and spiculated), bubble-like sign, air bronchogram, pleural tag, and tumor-lung interface. Histopathologic subtypes were classified according to the International Association for the Study of Lung Cancer/American Thoracic Society/European Respiratory Society classification of lung adenocarcinoma. Common molecular markers in the immunohistochemical study included human epidermal growth factor receptor (HER)-1, HER-2, Ki-67, vascular endothelial growth factor (VEGF) and DNA topoisomerase 2A. Patients' age and lesions' size and density were compared with pathologic subtypes using analysis of variance or nonparametric Wilcoxon tests. Patients' gender, lesion location, shape and margin, bubble-like sign, air bronchogram, pleural tag, and tumor-lung interface were compared with histopathologic subtypes and immunohistochemical implication using the χ² test or Fisher's exact test. The patients' gender, age, lesion location, shape, air bronchogram, pleural tag, and tumor-lung interface were not significantly different among different histopathologic subtypes (P=0.194, 0.126, 0.609, 0.678, 0.091, 0.374, and 0.339, respectively), whereas the lesion size, density, bubble-like sign, and margin showed significant differences (P=0.028, 0.002, 0.003, 0.046, respectively). The expression of Ki-67 significantly differed among nodules with different shapes (P=0.015). Statistically significant differences also existed between tumor-lung interface and HER-1 expression (P=0.019) and between bubble sign and HER-2 expression (P=0.049). In lung adenocarcinoma ≤ 1 cm with fGGO, the bubble-like sign occurs more frequently in invasive pulmonary adenocarcinoma and less frequently in atypical adenomatous hyperplasia. In addition, preinvasive lesions (atypical adenomatous hyperplasia and adenocarcinoma in situ) more frequently demonstrate a smooth margin, while invasive lesions (minimally invasive adenocarcinoma and invasive pulmonary adenocarcinoma) more frequently demonstrate a lobular and spiculated margin. Some CT features are associated with immunohistochemical implication of lung adenocarcinoma ≤ 1 cm with fGGO.

  9. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
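
    The per-voxel Bayesian combination can be sketched compactly: multiply the intensity-based and location-based conditional densities and take the posterior mean. Gaussian forms and all numbers below are illustrative assumptions, not the paper's learned atlas distributions; HU stands in for electron density.

```python
import numpy as np

# Hedged sketch of the per-voxel Bayesian fusion described above.
hu = np.linspace(-1000.0, 1500.0, 2501)     # candidate density grid
dx = hu[1] - hu[0]

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

p_intensity = gaussian(hu, 300.0, 150.0)    # from T1/T2 intensities (assumed)
p_location = gaussian(hu, 150.0, 250.0)     # from atlas location (assumed)

posterior = p_intensity * p_location        # Bayesian combination
posterior /= posterior.sum() * dx           # normalize to a density

estimate = (hu * posterior).sum() * dx      # posterior mean
print(f"estimated density (HU proxy): {estimate:.1f}")
```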

  10. Polychaete community structure in the South Eastern Arabian Sea continental margin (200-1000 m)

    NASA Astrophysics Data System (ADS)

    Abdul Jaleel, K. U.; Anil Kumar, P. R.; Nousher Khan, K.; Correya, Neil S.; Jacob, Jini; Philip, Rosamma; Sanjeevan, V. N.; Damodaran, R.

    2014-11-01

    Macrofaunal polychaete communities (>500 μm) in the South Eastern Arabian Sea (SEAS) continental margin (200-1000 m) are described, based on three systematic surveys carried out in 9 transects (at ~200 m, 500 m and 1000 m) between 7°00′ and 14°30′N latitudes. A total of 7938 polychaetes belonging to 195 species were obtained in 136 grab samples collected at 27 sites. Three distinct assemblages were identified in the northern part of the SEAS margin (10-14°30′N), occupying the three sampled depth strata (shelf edge, upper and mid-slope), and two assemblages (shelf edge and slope) in the south (7-10°N). The highest density of polychaetes and dominance of a few species were observed at the shelf edge, where the Arabian Sea oxygen minimum zone (OMZ) impinged on the seafloor, particularly in the northern transects. The resident fauna in this region (Cossura coasta, Paraonis gracilis, Prionospio spp. and Tharyx spp.) were characteristically of smaller size, and well suited to thrive in the sandy sediments in OMZ settings. Densities were lowest along the most northerly transect (T9), where dissolved oxygen (DO) concentrations were extremely low (<0.15 ml l-1, i.e. <6.7 μmol l-1). Beyond the realm of influence of the OMZ (i.e. mid-slope, ~1000 m), the faunal density decreased while species diversity increased. The relative proportion of silt increased with depth, and the dominance of the aforementioned species decreased, giving way to forms such as Paraprionospio pinnata, Notomastus sp., Eunoe sp. and lumbrinerids. Relatively high species richness and diversity were observed in the sandy sediments of the southern sector (7-9°N), where the influence of the OMZ was less intense. The area was also characterized by certain species (e.g. Aionidella cirrobranchiata, Isolda pulchella) that were nearly absent in the northern region. The gradients in DO concentration across the core and lower boundary of the OMZ, along with bathymetric and latitudinal variation in sediment texture, were responsible for differences in polychaete size and community structure on the SEAS margin. Spatial and temporal variations were observed in the organic matter (OM) content of the sediment, but these were not reflected in the density, diversity or distribution pattern of the polychaetes.

  11. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
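
    The survey-bias behavior described above is easy to reproduce in miniature. The sketch below draws repeated random surveys from a synthetic patchy (log-normal) abundance field and estimates the probability of a "survey availability event" (index >125% or <75% of truth); the field parameters are illustrative, not fitted to surfclam or oyster data.

```python
import numpy as np

# Hedged sketch of the sampling-density experiment described above.
rng = np.random.default_rng(1)
field = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)  # patchy domain
true_mean = field.mean()

def p_availability_event(n_samples, n_surveys=10_000):
    idx = rng.integers(0, field.size, size=(n_surveys, n_samples))
    means = field[idx].mean(axis=1)
    return np.mean((means > 1.25 * true_mean) | (means < 0.75 * true_mean))

for n in (4, 8, 15, 30, 60):
    print(f"{n:3d} samples -> P(availability event) = {p_availability_event(n):.2f}")
```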

  12. Chapter 48: Geology and petroleum potential of the Eurasia Basin

    USGS Publications Warehouse

    Moore, Thomas E.; Pitman, Janet K.

    2011-01-01

    The Eurasia Basin petroleum province comprises the younger, eastern half of the Arctic Ocean, including the Cenozoic Eurasia Basin and the outboard part of the continental margin of northern Europe. For the USGS petroleum assessment (CARA), it was divided into four assessment units (AUs): the Lena Prodelta AU, consisting of the deep-marine part of the Lena Delta; the Nansen Basin Margin AU, comprising the passive margin sequence of the Eurasian plate; and the Amundsen Basin and Nansen Basin AUs, which encompass the abyssal plains north and south of the Gakkel Ridge spreading centre, respectively. The primary petroleum system thought to be present is sourced in c. 50–44 Ma (Early to Middle Eocene) condensed pelagic deposits that could be widespread in the province. Mean estimates of undiscovered, technically recoverable petroleum resources include <1 billion barrels of oil (BBO) and about 1.4 trillion cubic feet (TCF) of nonassociated gas in the Lena Prodelta AU, and <0.4 BBO and 3.4 TCF of nonassociated gas in the Nansen Basin Margin AU. The Nansen Basin and Amundsen Basin AUs were not quantitatively assessed because they have less than a 10% probability of containing at least one accumulation of 50 MMBOE (million barrels of oil equivalent).

  13. Hydrocarbon gas seeps of the convergent Hikurangi margin, North Island, New Zealand

    USGS Publications Warehouse

    Kvenvolden, K.A.; Pettinga, J.R.

    1989-01-01

    Two hydrocarbon gas seeps, located about 13 km apart, have distinctive molecular and isotopic compositions. These seeps occur within separate tectonic melange units of narrow, parallel-trending, structurally complex zones with incorporated upper Cretaceous and Palaeogene passive continental margin deposits, which are now compressively deformed and imbricated along the convergent Hikurangi margin of North Island, New Zealand. At Brookby Station within the Coastal High, the seeping hydrocarbon gas has a methane/ethane ratio of 48 and δ13C and δD values of methane of -45.7 and -188‰, respectively (relative to the PDB and SMOW standards). Within the complex core of the Elsthorpe Anticline at Campbell Station seep, gas has a methane/ethane ratio of about 12000, and the methane has δ13C and δD values of -37.4 and -170‰, respectively. The source of the gases cannot be positively identified, but the gases probably originate from the thermal decomposition of organic matter in tectonically disturbed upper Cretaceous and/or lower Tertiary sedimentary rocks of passive margin affinity and reach the surface by migration along thrust faults associated with tectonic melange. The geochemical differences between the two gases may result from differences in burial depths of similar source sediment. © 1989.

  14. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
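
    The sequence-assignment step can be illustrated with a toy calculation: given a per-position amino-acid probability matrix from template matching, score every alignment offset against the protein sequence and normalize with Bayes' rule. The matrix, sequence, and uniform prior below are all stand-ins, not RESOLVE's actual scoring.

```python
import numpy as np

# Hedged sketch of Bayesian sequence-to-map alignment. The probability
# matrix is random and the sequence hypothetical, for illustration only.
rng = np.random.default_rng(0)
AA = "ACDEFGHIKLMNPQRSTVWY"
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # hypothetical protein
seg_len = 8
prob = rng.dirichlet(np.ones(20), size=seg_len)  # P(aa | position), stand-in

offsets = range(len(sequence) - seg_len + 1)
log_like = np.array([
    sum(np.log(prob[i, AA.index(sequence[off + i])]) for i in range(seg_len))
    for off in offsets
])
post = np.exp(log_like - log_like.max())         # Bayes, uniform prior
post /= post.sum()
best = int(np.argmax(post))
print(f"best offset {best}, posterior probability {post[best]:.2f}")
```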

  15. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.

  16. The Havriliak-Negami relaxation and its relatives: the response, relaxation and probability density functions

    NASA Astrophysics Data System (ADS)

    Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.

    2018-04-01

    We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1+(iωτ0)^α]^(-β) with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their dual domain (in the sense of the inverse Laplace transform). All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing the reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions fα,β(t/τ0) go to the one-sided Lévy stable distributions as q tends to one. Moreover, applying the self-similarity property of the probability densities gα,β(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
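
    For readers who want to reproduce the frequency-domain pattern itself, a minimal numerical sketch follows; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Hedged sketch: evaluate the Havriliak-Negami frequency-domain pattern
# chi(omega) = 1 / (1 + (i*omega*tau0)**alpha)**beta for assumed
# parameter values.
def havriliak_negami(omega, tau0=1.0, alpha=0.5, beta=1.5):
    return 1.0 / (1.0 + (1j * omega * tau0) ** alpha) ** beta

omega = np.logspace(-3, 3, 7)
for w, c in zip(omega, havriliak_negami(omega)):
    print(f"omega={w:9.3f}  Re chi={c.real:+.4f}  Im chi={c.imag:+.4f}")
```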

  17. Turbidite event history--Methods and implications for Holocene paleoseismicity of the Cascadia subduction zone

    USGS Publications Warehouse

    Goldfinger, Chris; Nelson, C. Hans; Morey, Ann E.; Johnson, Joel E.; Patton, Jason R.; Karabanov, Eugene B.; Gutierrez-Pastor, Julia; Eriksson, Andrew T.; Gracia, Eulalia; Dunhill, Gita; Enkin, Randolph J.; Dallimore, Audrey; Vallier, Tracy; Kayen, Robert; Kayen, Robert

    2012-01-01

    Turbidite systems along the continental margin of Cascadia Basin from Vancouver Island, Canada, to Cape Mendocino, California, United States, have been investigated with swath bathymetry; newly collected and archive piston, gravity, kasten, and box cores; and accelerator mass spectrometry radiocarbon dates. The purpose of this study is to test the applicability of the Holocene turbidite record as a paleoseismic record for the Cascadia subduction zone. The Cascadia Basin is an ideal place to develop a turbidite paleoseismologic method and to record paleoearthquakes because (1) a single subduction-zone fault underlies the Cascadia submarine-canyon systems; (2) multiple tributary canyons and a variety of turbidite systems and sedimentary sources exist to use in tests of synchronous turbidite triggering; (3) the Cascadia trench is completely sediment filled, allowing channel systems to trend seaward across the abyssal plain, rather than merging in the trench; (4) the continental shelf is wide, favoring disconnection of Holocene river systems from their largely Pleistocene canyons; and (5) excellent stratigraphic datums, including the Mazama ash and distinguishable sedimentological and faunal changes near the Pleistocene-Holocene boundary, are present for correlating events and anchoring the temporal framework. Multiple tributaries to Cascadia Channel with 50- to 150-km spacing, and a wide variety of other turbidite systems with different sedimentary sources contain 13 post-Mazama-ash and 19 Holocene turbidites. Likely correlative sequences are found in Cascadia Channel, Juan de Fuca Channel off Washington, and Hydrate Ridge slope basin and Astoria Fan off northern and central Oregon. A probable correlative sequence of turbidites is also found in cores on Rogue Apron off southern Oregon. The Hydrate Ridge and Rogue Apron cores also include 12 and 22 interspersed thinner turbidite beds, respectively. We use 14C dates, relative-dating tests at channel confluences, and stratigraphic correlation of turbidites to determine whether turbidites deposited in separate channel systems are correlative - triggered by a common event. In most cases, these tests can separate earthquake-triggered turbidity currents from other possible sources. The 10,000-year turbidite record along the Cascadia margin passes several tests for synchronous triggering and correlates well with the shorter onshore paleoseismic record. The synchroneity of a 10,000-year turbidite-event record for 500 km along the northern half of the Cascadia subduction zone is best explained by paleoseismic triggering by great earthquakes. Similarly, we find a likely synchronous record in southern Cascadia, including correlated additional events along the southern margin. We examine the applicability of other regional triggers, such as storm waves, storm surges, hyperpycnal flows, and teletsunami, specifically for the Cascadia margin. The average age of the oldest turbidite emplacement event in the 10-0-ka series is 9,800±~210 cal yr B.P. and the youngest is 270±~120 cal yr B.P., indistinguishable from the A.D. 1700 (250 cal yr B.P.) Cascadia earthquake. The northern events define a great earthquake recurrence of ~500-530 years. The recurrence times and averages are supported by the thickness of hemipelagic sediment deposited between turbidite beds.
The southern Oregon and northern California margins represent at least three segments that include all of the northern ruptures, as well as ~22 thinner turbidites of restricted latitude range that are correlated between multiple sites. At least two northern California sites, Trinidad and Eel Canyon/pools, record additional turbidites, which may be a mix of earthquake and sedimentologically or storm-triggered events, particularly during the early Holocene when a close connection existed between these canyons and associated river systems. The combined stratigraphic correlations, hemipelagic analysis, and 14C framework suggest that the Cascadia margin has three rupture modes: (1) 19-20 full-length or nearly full length ruptures; (2) three or four ruptures comprising the southern 50-70 percent of the margin; and (3) 18-20 smaller southern-margin ruptures during the past 10 k.y., with the possibility of additional southern-margin events that are presently uncorrelated. The shorter rupture extents and thinner turbidites of the southern margin correspond well with spatial extents interpreted from the limited onshore paleoseismic record, supporting margin segmentation of southern Cascadia. The sequence of 41 events defines an average recurrence period for the southern Cascadia margin of ~240 years during the past 10 k.y. Time-independent probabilities for segmented ruptures range from 7-12 percent in 50 years for full or nearly full margin ruptures to ~21 percent in 50 years for a southern-margin rupture. Time-dependent probabilities are similar for northern margin events at ~7-12 percent and 37-42 percent in 50 years for the southern margin. Failure analysis suggests that by the year 2060, Cascadia will have exceeded ~27 percent of Holocene recurrence intervals for the northern margin and 85 percent of recurrence intervals for the southern margin. The long earthquake record established in Cascadia allows tests of recurrence models rarely possible elsewhere. Turbidite mass per event along the Cascadia margin reveals a consistent record for many of the Cascadia turbidites. We infer that larger turbidites likely represent larger earthquakes. Mass per event and magnitude estimates also correlate modestly with following time intervals for each event, suggesting that Cascadia full or nearly full margin ruptures weakly support a time-predictable model of recurrence. The long paleoseismic record also suggests a pattern of clustered earthquakes that includes four or five cycles of two to five earthquakes during the past 10 k.y., separated by unusually long intervals. We suggest that the pattern of long time intervals and longer ruptures for the northern and central margins may be a function of high sediment supply on the incoming plate, smoothing asperities, and potential barriers. The smaller southern Cascadia segments correspond to thinner incoming sediment sections and potentially greater interaction between lower-plate and upper-plate heterogeneities. The Cascadia Basin turbidite record establishes new paleoseismic techniques utilizing marine turbidite-event stratigraphy during sea-level highstands. These techniques can be applied in other specific settings worldwide, where an extensive fault traverses a continental margin that has several active turbidite systems.
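
    The time-independent probabilities quoted above are consistent with a simple Poisson recurrence model, P = 1 - exp(-Δt/T). A minimal check, assuming only the mean recurrence intervals given in the abstract (the paper's own probability calculations may differ in detail):

```python
import math

# Time-independent (Poisson) probability of at least one event in a
# window dt, given a mean recurrence interval T. Intervals below are
# those quoted in the abstract.
def poisson_prob(dt_years, recurrence_years):
    return 1.0 - math.exp(-dt_years / recurrence_years)

print(f"full-margin (T ~ 500-530 yr): {poisson_prob(50, 515):.0%} in 50 yr")
print(f"southern margin (T ~ 240 yr): {poisson_prob(50, 240):.0%} in 50 yr")
```

    This gives roughly 9 percent for full-margin ruptures and 19 percent for the southern margin, in line with the 7-12 percent and ~21 percent ranges quoted above.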

  18. The Spartan 1 Mission

    DTIC Science & Technology

    1989-07-11

    ...this dark matter to be measured. The special feature of the Spartan 1 instrument has been its ability to measure the density and temperature of the... required to create the potential well, because it exceeds by a large margin the mass we can account for as galaxies and gas. Some invisible ("dark") matter of unknown origin pervades the cluster. Measurements of the radial density and temperature gradients in the hot gas allow the distribution of...

  19. Initiation of Extension in South China Continental Margin during the Active-Passive Margin Transition: Thermochronological and Kinematic Constraints

    NASA Astrophysics Data System (ADS)

    Zuo, X.; Chan, L. S.

    2015-12-01

    The South China continental margin is characterized by a widespread magmatic belt, prominent NE-striking faults, and numerous rifted basins filled by Cretaceous-Eocene sediments. The geology records a transition from an active to a passive margin, which led to rapid modifications of the crustal stress configuration and reactivation of older faults in this area. Our zircon fission-track data in this region show two episodes of exhumation: the first, occurring during 170-120 Ma, affected local parts of the Nanling Range; the second, a more regional exhumation event, occurred during 115-70 Ma and included the Yunkai Terrane and the Nanling Range. Numerical geodynamic modeling was conducted to simulate the subduction between the paleo-Pacific plate and the South China Block. The modeling results could explain why exhumation of the granite-dominant Nanling Range occurred earlier than that of the gneiss-dominant Yunkai Terrane. In addition to the difference in rock types, the heat from Jurassic-Early Cretaceous magmatism in Nanling may have softened the upper crust, causing the area to exhume more readily than Yunkai. The numerical modeling results also indicate that (1) a high lithospheric geothermal gradient, a high slab dip angle, and a low convergence velocity favor the reversal of the crustal stress state from compression to extension in the upper continental plate; (2) late Mesozoic magmatism in South China was probably caused by slab roll-back; and (3) crustal extension could have occurred prior to the cessation of plate subduction. The inversion of the stress regime in the continental crust from compression to extension implies that the Late Cretaceous-early Paleogene red-bed basins in South China could have formed during the late stage of subduction, accounting for the occurrence of volcanic events in some sedimentary basins. We propose that rifting started as early as the Late Cretaceous, probably before the cessation of the subduction process.

  20. Mortality among residents of shelters, rooming houses, and hotels in Canada: 11 year follow-up study.

    PubMed

    Hwang, Stephen W; Wilkins, Russell; Tjepkema, Michael; O'Campo, Patricia J; Dunn, James R

    2009-10-26

    To examine mortality in a representative nationwide sample of homeless and marginally housed people living in shelters, rooming houses, and hotels. Follow-up study. Canada, 1991-2001. 15 100 homeless and marginally housed people enumerated in the 1991 census. Age-specific and age-standardised mortality rates, remaining life expectancies at age 25, and probabilities of survival from age 25 to 75. Data were compared with data from the poorest and richest income fifths, as well as with data for the entire cohort. Of the homeless and marginally housed people, 3280 died. Mortality rates among these people were substantially higher than rates in the poorest income fifth, with the highest rate ratios seen at younger ages. Among those who were homeless or marginally housed, the probability of survival to age 75 was 32% (95% confidence interval 30% to 34%) in men and 60% (56% to 63%) in women. Remaining life expectancy at age 25 was 42 years (42 to 43) and 52 years (50 to 53), respectively. Compared with the entire cohort, mortality rate ratios for men and women, respectively, were 11.5 (8.8 to 15.0) and 9.2 (5.5 to 15.2) for drug related deaths, 6.4 (5.3 to 7.7) and 8.2 (5.0 to 13.4) for alcohol related deaths, 4.8 (3.9 to 5.9) and 3.8 (2.7 to 5.4) for mental disorders, and 2.3 (1.8 to 3.1) and 5.6 (3.2 to 9.6) for suicide. For both sexes, the largest differences in mortality rates were for smoking related diseases, ischaemic heart disease, and respiratory diseases. Living in shelters, rooming houses, and hotels is associated with much higher mortality than expected on the basis of low income alone. Reducing the excessively high rates of premature mortality in this population would require interventions to address deaths related to smoking, alcohol, and drugs, and to mental disorders and suicide, among other causes.

  1. Living with marginal coral communities: Diversity and host-specificity in coral-associated barnacles in the northern coral distribution limit of the East China Sea.

    PubMed

    Chan, Benny K K; Xu, Guang; Kim, Hyun Kyong; Park, Jin-Ho; Kim, Won

    2018-01-01

    Corals and their associated fauna are extremely diverse in tropical waters and form major reefs. In the high-latitude temperate zone, corals living near their distribution limit are considered marginal communities because they are particularly sensitive to environmental and climatic changes. In this study, we examined the diversity and host usage of coral-associated barnacles on Jeju Island, Korea, the northern coral distribution limit in the East China Sea. Only three coral-associated barnacles, from two genera in two subfamilies, were collected. The pyrgomatinid barnacles Cantellius arcuatus and Cantellius cf. euspinulosum were found only on the corals Montipora millepora and Alveopora japonica, respectively. The megatrematinid barnacle Pyrgomina oulastreae, a relative generalist, was found on Psammocora spp. (both profundacella and albopicta) and Oulastrea crispata corals. The host usage of these three barnacles does not overlap. DNA barcode sequences of the C. arcuatus specimens collected in the present study matched those collected in Kochi in Japan, Taiwan, Malaysia and Papua New Guinea, suggesting that this species has a wide geographical distribution. C. arcuatus occupies a wider host range in Taiwanese waters, inhabiting Montipora spp. and Porites spp., which suggests that the host specificity of coral-associated barnacles varies with host availability. C. cf. euspinulosum probably has a very narrow distribution and host usage; its sequences on Jeju Island do not match any known sequences of Cantellius barnacles in the Indo-Pacific region. P. oulastreae probably prefers cold water, as it has been reported mainly from temperate regions. Coral-associated barnacles in marginal communities have considerably lower diversity than their subtropical and tropical counterparts. When host availability is limited, marginal coral-associated barnacles exhibit higher host specificity than those in subtropical and tropical reef systems.

  2. Crustal geometry of the northeastern Gulf of Aden passive margin: localization of the deformation inferred from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Tiberi, C.; Leroy, S.; d'Acremont, E.; Bellahsen, N.; Ebinger, C.; Al-Lazki, A.; Pointu, A.

    2007-03-01

    Here we use receiver function analysis to retrieve crustal thickness and crustal composition along the 35-My-old passive margin of the eastern Gulf of Aden. Our aim is to use results from the 3-D seismic array to map crustal stretching across and along the Aden margin in southern Oman. The array recorded local and teleseismic events between 2003 March and 2004 March. Seventy-eight events were used in our joint inversions for Vp/Vs ratio and depth. The major results are: (1) Crustal thickness decreases from the uplifted rift flank of the margin towards the Sheba mid-ocean ridge. We found a crustal thickness of about 35 km beneath the northern rift flank. This value decreases sharply to 26 km beneath the post-rift subsidence zone on the Salalah coastal plain. This 10 km of crustal thinning occurs across a horizontal distance of less than 30 km, showing a localization of the crustal thinning below the first known rifted block of the margin. (2) A second rift margin transect located about 50 km to the east shows no thinning from the coast to 50 km onshore. The lack of crustal thickness variation indicates that the maximum crustal stretching could be restricted to offshore regions. (3) The along-strike variations in crustal structure demonstrate the scale and longevity of the regular along-axis rift segmentation. (4) Extension is still observed north of the rifted domain, 70 km onshore from the coast, making the margin wider than first expected from geology. (5) The crust has a felsic to normal composition, with a probably strong effect of the sedimentary layer on the Vp/Vs ratio (which ranges between 1.67 and 1.91).

  3. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
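
    The weighting scheme can be sketched compactly: assign each simulated galaxy a weight from an assumed PDF in log number density, then compute weighted property distributions. The Gaussian PDF, its drift and scatter, and the toy mass-density relation below are illustrative stand-ins for the paper's calibrated functions; the authors' released code package should be preferred for real work.

```python
import numpy as np

# Hedged sketch of probability-weighted progenitor linking.
rng = np.random.default_rng(42)
n_gal = 50_000
log_n = rng.uniform(-5.0, -2.0, n_gal)           # log10 number density
log_mstar = 11.0 - 0.8 * (log_n + 3.5) + rng.normal(0, 0.2, n_gal)

target_log_n, drift, scatter = -3.5, 0.2, 0.35   # assumed evolution
weights = np.exp(-0.5 * ((log_n - (target_log_n + drift)) / scatter) ** 2)

mean_mass = np.average(log_mstar, weights=weights)
spread = np.sqrt(np.average((log_mstar - mean_mass) ** 2, weights=weights))
print(f"weighted progenitor log M* = {mean_mass:.2f} +/- {spread:.2f}")
```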

  4. Quantitative distribution of metazoan meiofauna in continental margin sediments of the Skagerrak (Northeastern North Sea)

    NASA Astrophysics Data System (ADS)

    De Bovée, F.; Hall, P. O. J.; Hulth, S.; Hulthe, G.; Landén, A.; Tengberg, A.

    1996-02-01

    A quantitative survey of metazoan meiofauna in continental-margin sediments of the Skagerrak was carried out using virtually undisturbed sediment samples collected with a multiple corer. Altogether 11 stations distributed along and across the Norwegian Trench were occupied during three cruises. Abundance ranged from 155 to 6846 ind·10 cm-2 and revealed a sharply decreasing trend with increasing water depth. The densities were high on the upper part of the Danish margin (6846 ind·10 cm-2 at 194 m depth) and low in the central part of the deep Skagerrak (155 ind·10 cm-2 at 637 m depth). Body lengths were also significantly shorter on the Danish margin than elsewhere in the Skagerrak, indicating a greater importance of juveniles in this area. We suggest that the high densities may be explained by a stimulated renewal of the fauna, possibly induced by an adequate food supply. The low abundances found in sediments from the deepest part of the Norwegian Trench cannot be attributed to any lack of oxygen. We suggest that the low meiofaunal abundances are caused by a decrease in the food supply (accentuated in this area by lower sedimentation rates) and/or by the very high concentrations of dissolved manganese in the pore water of these sediments. The metazoan meiofauna was largely dominated by nematodes. Comparison of the respiration rates of the nematode population with total benthic respiration (0.5 to 14%) suggests that the relative importance of the metazoan meiofauna decreased with water depth.

  5. Quantitative Segmentation of Fluorescence Microscopy Images of Heterogeneous Tissue: Application to the Detection of Residual Disease in Tumor Margins

    PubMed Central

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-01-01

    Purpose: To develop a robust tool for quantitative in situ pathology that allows visualization of heterogeneous tissue morphology and segmentation and quantification of image features. Materials and Methods: Tissue excised from a genetically engineered mouse model of sarcoma was imaged using a subcellular resolution microendoscope after topical application of a fluorescent anatomical contrast agent: acriflavine. An algorithm based on sparse component analysis (SCA) and the circle transform (CT) was developed for image segmentation and quantification of distinct tissue types. The accuracy of our approach was quantified through simulations of tumor and muscle images. Specifically, tumor, muscle, and tumor+muscle tissue images were simulated because these tissue types were most commonly observed in sarcoma margins. Simulations were based on tissue characteristics observed in pathology slides. The potential clinical utility of our approach was evaluated by imaging excised margins and the tumor bed in a cohort of mice after surgical resection of sarcoma. Results: Simulation experiments revealed that SCA+CT achieved the lowest errors for larger nuclear sizes and for higher contrast ratios (nuclei intensity/background intensity). For imaging of tumor margins, SCA+CT effectively isolated nuclei from tumor, muscle, adipose, and tumor+muscle tissue types. Differences in density were correctly identified with SCA+CT in a cohort of ex vivo and in vivo images, thus illustrating the diagnostic potential of our approach. Conclusion: The combination of a subcellular-resolution microendoscope, acriflavine staining, and SCA+CT can be used to accurately isolate nuclei and quantify their density in anatomical images of heterogeneous tissue. PMID:23824589

  6. Quantitative Segmentation of Fluorescence Microscopy Images of Heterogeneous Tissue: Application to the Detection of Residual Disease in Tumor Margins.

    PubMed

    Mueller, Jenna L; Harmany, Zachary T; Mito, Jeffrey K; Kennedy, Stephanie A; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G; Willett, Rebecca M; Brown, J Quincy; Ramanujam, Nimmi

    2013-01-01

    To develop a robust tool for quantitative in situ pathology that allows visualization of heterogeneous tissue morphology and segmentation and quantification of image features. Tissue excised from a genetically engineered mouse model of sarcoma was imaged using a subcellular resolution microendoscope after topical application of a fluorescent anatomical contrast agent: acriflavine. An algorithm based on sparse component analysis (SCA) and the circle transform (CT) was developed for image segmentation and quantification of distinct tissue types. The accuracy of our approach was quantified through simulations of tumor and muscle images. Specifically, tumor, muscle, and tumor+muscle tissue images were simulated because these tissue types were most commonly observed in sarcoma margins. Simulations were based on tissue characteristics observed in pathology slides. The potential clinical utility of our approach was evaluated by imaging excised margins and the tumor bed in a cohort of mice after surgical resection of sarcoma. Simulation experiments revealed that SCA+CT achieved the lowest errors for larger nuclear sizes and for higher contrast ratios (nuclei intensity/background intensity). For imaging of tumor margins, SCA+CT effectively isolated nuclei from tumor, muscle, adipose, and tumor+muscle tissue types. Differences in density were correctly identified with SCA+CT in a cohort of ex vivo and in vivo images, thus illustrating the diagnostic potential of our approach. The combination of a subcellular-resolution microendoscope, acriflavine staining, and SCA+CT can be used to accurately isolate nuclei and quantify their density in anatomical images of heterogeneous tissue.

  7. Tracking of plus-ends reveals microtubule functional diversity in different cell types

    NASA Astrophysics Data System (ADS)

    Shaebani, M. Reza; Pasula, Aravind; Ott, Albrecht; Santen, Ludger

    2016-07-01

    Many cellular processes are tightly connected to the dynamics of microtubules (MTs). While in neuronal axons MTs mainly regulate intracellular trafficking, in many other eukaryotic cells they participate in cytoskeleton reorganization, enabling the cell to adapt efficiently to changes in the environment. We show that the functional differences of MTs in different cell types and regions are reflected in the dynamic properties of MT tips. Using the plus-end tracking protein EB1 to monitor growing MT plus-ends, we show that MT dynamics and life cycle in axons of human neurons differ significantly from those of fibroblast cells. The density of plus-ends, as well as the rescue and catastrophe frequencies, increases toward the fibroblast cell margin, while the growth rate decreases. This results in a rather stable filamentous network structure and maintains the connection between nucleus and membrane. In contrast, plus-ends are uniformly distributed along the axons and exhibit diverse polymerization run times and spatially homogeneous rescue and catastrophe frequencies, leading to MT segments of various lengths. The probability distributions of the excursion length of polymerization and of the MT length both have nearly exponential tails, in agreement with the analytical predictions of a two-state model of MT dynamics.

  8. Hydrogeologic implications of increased septic-tank-soil-absorption system density, Ogden Valley, Weber County, Utah

    USGS Publications Warehouse

    Lowe, Mike; Miner, Michael L.; ,

    1990-01-01

    Ground water in Ogden Valley occurs in perched, confined, and unconfined aquifers in the valley fill to depths of 600 feet and more. The confined aquifer, which underlies only the western portion of the valley, is overlain by clayey silt lacustrine sediments probably deposited during the Bonneville Basin's Little Valley lake cycle sometime between 90,000 and 150,000 years ago. The top of this clayey silt confining layer is generally 25 to 60 feet below the ground surface. Unconfined conditions occur above and beyond the outer margin of the confining layer. The sediments overlying the confining layer are primarily Lake Bonneville deposits. Water samples from springs, streams, and wells around Pineview Reservoir, and from the reservoir itself, were collected and analyzed. These samples indicate that water quality in Ogden Valley is presently good. Average nitrate concentrations in the shallow unconfined aquifer increase toward the center of Ogden Valley. This trend was not observed in the confined aquifer. There is no evidence, however, of significant water-quality deterioration, even in the vicinity of Huntsville, a town that has been densely developed using septic-tank-soil-absorption systems for much of the time since it was founded in 1860.

  9. Joint time/frequency-domain inversion of reflection data for seabed geoacoustic profiles and uncertainties.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2008-03-01

    This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.
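
    The rotated-marginal device mentioned above (passing information between layer-packet inversions as one-dimensional distributions rotated into principal components) can be illustrated on synthetic samples; the covariance and parameter values below are invented.

```python
import numpy as np

# Hedged sketch: decorrelate posterior samples with an eigendecomposition
# of their covariance, so that independent 1-D marginals capture (part of)
# the multi-dimensional parameter correlation.
rng = np.random.default_rng(3)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])         # correlated posterior
samples = rng.multivariate_normal([1500.0, 1.8], cov, size=20_000)

mean = samples.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(samples.T))
rotated = (samples - mean) @ evecs                # principal-component axes

# correlations vanish in the rotated frame, so 1-D marginals suffice
print("original corr:", np.corrcoef(samples.T)[0, 1].round(3))
print("rotated corr: ", np.corrcoef(rotated.T)[0, 1].round(3))
```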

  10. Radiative transition of hydrogen-like ions in quantum plasma

    NASA Astrophysics Data System (ADS)

    Hu, Hongwei; Chen, Zhanbin; Chen, Wencong

    2016-12-01

    At fusion plasma electron temperatures and number densities of 1 × 10^3 - 1 × 10^7 K and 1 × 10^28 - 1 × 10^31 /m^3, respectively, the excited states and radiative transitions of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable than the Debye screening model for describing fusion plasmas. The relativistic correction to the bound-state energies of low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but transition probabilities remain of the same order of magnitude within the same number density regime.

  11. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by systems of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
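
    The essence of the PDF method can be shown on the simplest stochastic model with a known Fokker-Planck solution. The Ornstein-Uhlenbeck sketch below is a stand-in for the generator SODEs in the paper; all parameters are illustrative.

```python
import numpy as np

# Hedged sketch: for dx = -x/tau dt + s dW, the stationary PDF solving
# the Fokker-Planck equation is Gaussian with variance s**2 * tau / 2.
# Compare that exact result against a Monte Carlo estimate.
rng = np.random.default_rng(7)
tau, s, dt, n_steps, n_paths = 1.0, 0.5, 1e-3, 20_000, 2_000

x = np.zeros(n_paths)
for _ in range(n_steps):                          # Euler-Maruyama
    x += -x / tau * dt + s * np.sqrt(dt) * rng.standard_normal(n_paths)

var_theory = s**2 * tau / 2.0                     # stationary FP solution
print(f"Monte Carlo variance : {x.var():.4f}")
print(f"Fokker-Planck result : {var_theory:.4f}")
```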

  12. Observations of seismicity and ground motion in the northeast U.S. Atlantic margin from ocean bottom seismometer data

    USGS Publications Warehouse

    Flores, Claudia; ten Brink, Uri S.; McGuire, Jeffrey J.; Collins, John A.

    2017-01-01

    Earthquake data from two short-period ocean-bottom seismometer (OBS) networks deployed for over a year on the continental slope off New York and southern New England were used to evaluate seismicity and ground motions along the continental margin. Our OBS networks located only one earthquake of Mc∼1.5 near the shelf edge during six months of recording, suggesting that seismic activity (MLg>3.0) of the margin as far as 150–200 km offshore is probably successfully monitored by land stations without the need for OBS deployments. The spectral acceleration from two local earthquakes recorded by the OBS was found to be generally similar to the acceleration from these earthquakes recorded at several seismic stations on land and to hybrid empirical acceleration relationships for eastern North America. Therefore, the seismic attenuation used for eastern North America can be extended in this region at least to the continental slope. However, additional offshore studies are needed to verify these preliminary conclusions.

  13. Assessment of tsunami hazard to the U.S. Atlantic margin

    USGS Publications Warehouse

    ten Brink, Uri S.; Chaytor, Jason; Geist, Eric L.; Brothers, Daniel S.; Andrews, Brian D.

    2014-01-01

    Tsunamis caused by atmospheric disturbances and by coastal earthquakes may be more frequent than those generated by landslides, but their amplitudes are probably smaller. Among the possible far-field earthquake sources, only earthquakes located within the Gulf of Cadiz or west of the Tore-Madeira Rise are likely to affect the U.S. coast. It is questionable whether earthquakes on the Puerto Rico Trench are capable of producing a large enough tsunami that will affect the U.S. Atlantic coast. More information is needed to evaluate the seismic potential of the northern Cuba fold-and-thrust belt. The hazard from a volcano flank collapse in the Canary Islands is likely smaller than originally stated, and there is not enough information to evaluate the magnitude and frequency of flank collapse from the Azores Islands. Both deterministic and probabilistic methods to evaluate the tsunami hazard from the margin are available for application to the Atlantic margin, but their implementation requires more information than is currently available.

  14. Geochemical consequences of flow differentiation in a multiple injection dike (Trinity ophiolite, N. California)

    USGS Publications Warehouse

    Brouxel, M.

    1991-01-01

    A clinopyroxene-rich dike of the Trinity ophiolite sheeted-dike complex shows three different magmatic pulses, probably injected within a short period of time (no well-developed chilled margins), and important variations of the clinopyroxene and plagioclase percentages between its core (highly porphyritic) and margins (aphyric). This variation, interpreted as the result of flow differentiation (mechanical phenocryst redistribution), has important geochemical consequences. It produces increases in the FeO, MgO, CaO, Cr, and Ni contents from the margin to the core, together with increases in the clinopyroxene percentage, and decreases in the SiO2, Zr, Y, Nb, and REE contents together with a decrease in the percentage of the fine-grained groundmass toward the core of the dike. This mineralogical redistribution, which also affects the incompatible trace element ratios because of the difference in plagioclase and clinopyroxene mineral/liquid partition coefficients, illustrates the importance of fractionation processes outside of a magma chamber. © 1991.

  15. Estimating the Effect of Health Insurance on Personal Prescription Drug Importation

    PubMed Central

    Zullo, Andrew R.; Howe, Chanelle J.; Galárraga, Omar

    2016-01-01

    Personal prescription drug importation occurs in the United States because of the high cost of U.S. medicines and lower cost of foreign equivalents. Importation carries a risk of exposure to counterfeit (i.e., falsified, fraudulent), adulterated, and substandard drugs. Inadequate health insurance may increase the risk of importation. We use inverse probability weighted marginal structural models and data on 87,494 individuals from the 2011-2013 National Health Interview Survey to estimate the marginal association between no health insurance and importation within U.S. subpopulations. The marginal prevalence difference [95% confidence limits] for those without (prevalence = 0.031) versus those with health insurance was 0.016 [0.011, 0.021]. The prevalence difference was higher among persons who were Hispanic, born in Latin America, Russia, or Europe, traveled to developing countries, and did not use the Internet to fill prescriptions or to find health information. Health insurance coverage may effectively reduce importation, especially among particular subpopulations. PMID:26837427
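
    A minimal sketch of inverse-probability-weighted estimation of a marginal prevalence difference, on synthetic data rather than the NHIS records; variable names and coefficients are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch of an IPW marginal structural model: covariates x
# confound insurance status and importation in this toy data set.
rng = np.random.default_rng(11)
n = 50_000
x = rng.normal(size=(n, 2))                       # confounders
z1 = -1.5 + x @ np.array([0.8, -0.5])
uninsured = rng.binomial(1, 1.0 / (1.0 + np.exp(-z1)))
z2 = -4.0 + 0.6 * uninsured + x @ np.array([0.5, 0.3])
imported = rng.binomial(1, 1.0 / (1.0 + np.exp(-z2)))

# stabilized inverse-probability-of-exposure weights
ps = LogisticRegression().fit(x, uninsured).predict_proba(x)[:, 1]
w = np.where(uninsured == 1, uninsured.mean() / ps,
             (1 - uninsured.mean()) / (1 - ps))

def prev(a):
    mask = uninsured == a
    return np.average(imported[mask], weights=w[mask])

print(f"IPW marginal prevalence difference: {prev(1) - prev(0):.4f}")
```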

  16. Epidemics in interconnected small-world networks.

    PubMed

    Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong

    2015-01-01

    Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
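
    A discrete-time SIS simulation on two coupled Watts-Strogatz graphs reproduces the qualitative setup; the sizes, rates, and coupling below are illustrative, not the paper's values.

```python
import numpy as np
import networkx as nx

# Hedged sketch: SIS spreading on two interconnected small-world networks.
rng = np.random.default_rng(5)
g = nx.disjoint_union(nx.watts_strogatz_graph(500, 6, 0.1, seed=1),
                      nx.watts_strogatz_graph(500, 6, 0.1, seed=2))
pairs = rng.choice(500, size=(50, 2))             # random inter-network links
g.add_edges_from((int(a), int(500 + b)) for a, b in pairs)

beta, mu = 0.08, 0.2                              # infection / recovery rates
infected = set(rng.choice(1000, size=10, replace=False))
for _ in range(200):                              # discrete-time SIS updates
    nxt = set(infected)
    for i in infected:
        for j in g.neighbors(i):
            if j not in infected and rng.random() < beta:
                nxt.add(j)
        if rng.random() < mu:                     # recovery back to S
            nxt.discard(i)
    infected = nxt
print(f"steady-state infection density ~ {len(infected) / 1000:.2f}")
```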

  17. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  18. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Horn, J. E.; Harter, T.

    2011-06-01

    Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, and thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability that a well pumps water that is partly septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield depending on aquifer properties, lot and drainfield size. We show that a high septic system density leads to a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and for those that are harmful even at low concentrations.

  19. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

    The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered that could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
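
    The underlying idea can be checked with a small calculation: when the prior over code-phase cells is Gaussian, sweeping cells in order of decreasing prior probability minimizes the expected number of cells examined. The cell count and prior width below are assumptions; the ~41% figure above comes from the paper's specific system model.

```python
import numpy as np

# Hedged sketch: expected search length under a Gaussian a priori PDF
# over code-phase uncertainty cells, for two sweep orderings.
n_cells = 201
cells = np.arange(n_cells)
prior = np.exp(-0.5 * ((cells - n_cells // 2) / 20.0) ** 2)
prior /= prior.sum()

def mean_cells_searched(order):
    ranks = np.empty(n_cells)
    ranks[order] = np.arange(1, n_cells + 1)      # visit position per cell
    return (prior * ranks).sum()                  # expected search length

uniform = mean_cells_searched(np.arange(n_cells))
optimal = mean_cells_searched(np.argsort(prior)[::-1])
print(f"uniform sweep : {uniform:.1f} cells on average")
print(f"prior-ordered : {optimal:.1f} cells "
      f"({1 - optimal / uniform:.0%} reduction)")
```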

  20. RADC Multi-Dimensional Signal-Processing Research Program.

    DTIC Science & Technology

    1980-09-30

    [Contents fragment: Formulation; 3.2.2 Methods of Accelerating Convergence; 3.2.3 Application to Image Deblurring; 3.2.4 Extensions; 3.3 Convergence of Iterative Signal...] ...noise-driven linear filters permit development of the joint probability density function, or likelihood function, for the image. With an expression... spatial linear filter driven by white noise (see Fig. 1). If the probability density function for the white noise is known... [Figure caption: Fig. 1. Model for image...]

  1. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By examining the properties of this cost function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
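
    A toy version of the inversion helps fix ideas: simulate a two-state channel with known rates, then recover the opening rate by minimizing a cost between model-predicted and empirical open-state probability. This is far simpler than the paper's PDE-constrained optimization; all values are illustrative.

```python
import numpy as np

# Hedged sketch: two-state (closed/open) channel with rates k_open and
# k_close; the stationary open probability is k_open / (k_open + k_close).
rng = np.random.default_rng(2)
k_open_true, k_close, dt, n = 40.0, 60.0, 1e-4, 200_000

state, opens = 0, 0                                # 0 closed, 1 open
for _ in range(n):                                 # Markov-chain simulation
    if state == 0 and rng.random() < k_open_true * dt:
        state = 1
    elif state == 1 and rng.random() < k_close * dt:
        state = 0
    opens += state
p_open_data = opens / n

def cost(k_open):                                  # model vs "experiment"
    return (k_open / (k_open + k_close) - p_open_data) ** 2

grid = np.linspace(5.0, 100.0, 951)
k_hat = grid[np.argmin([cost(k) for k in grid])]
print(f"true k_open = {k_open_true}, recovered ~ {k_hat:.1f}")
```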

  2. The nature of crustal reflectivity at the southwest Iberian margin

    NASA Astrophysics Data System (ADS)

    Buffett, G. G.; Torne, M.; Carbonell, R.; Melchiorre, M.; Vergés, J.; Fernàndez, M.

    2017-11-01

    Reprocessing of multi-channel seismic reflection data acquired over the northern margin of the Gulf of Cádiz (SW Iberian margin) places new constraints on the upper crustal structure of the Guadalquivir-Portimão Bank. The data presented have been processed with optimized stacking and interval velocity models, a better approach to multiple attenuation, preserved amplitude information to derive the nature of seismic reflectivity, and accurate time-to-depth conversion after migration. The reprocessed data reveal a bright upper crustal reflector just underneath the Paleozoic basement that spatially coincides with the local positive free-air gravity high called the Gulf of Cádiz Gravity High. To investigate the nature of this reflector and to decipher whether it could be associated with pieces of mantle material emplaced at upper crustal levels, we calculated its reflection coefficient and compared it to a buried high-density ultramafic body (serpentinized peridotite) at the Gorringe Bank. Its reflection coefficient ratio with respect to the sea floor differs by only 4.6% with that calculated for the high-density ultramafic body of the Gorringe Bank, while it differs by 35.8% compared to a drilled Miocene limestone unconformity. This means that the Gulf of Cádiz reflector has a velocity and/or density contrast similar to the peridotite at the Gorringe Bank. However, considering the depth at which it is found (between 2.0 and 4.0 km) and the available geological information, it seems unlikely that the estimated shortening from the Oligocene to present is sufficient to emplace pieces of mantle material at these shallow levels. Therefore, and despite the similarity in its reflection coefficient with the peridotites of the Gorringe Bank, our preferred interpretation is that the upper crustal Gulf of Cádiz reflector represents the seismic response of high-density intracrustal magmatic intrusions that may partially contribute to the Gulf of Cádiz Gravity High.

  3. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10⁵¹ erg and a 1σ dispersion of a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm⁻³. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
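
    The quoted distribution parameters correspond to a Gaussian fit in log energy; a minimal sketch with synthetic numbers (not the paper's actual sample of 50 SNRs):

      import numpy as np

      rng = np.random.default_rng(0)
      E = rng.lognormal(mean=np.log(0.5e51), sigma=np.log(3.0), size=50)  # erg

      mu, sigma = np.log(E).mean(), np.log(E).std(ddof=1)
      print(f"peak of log-normal fit ~ {np.exp(mu):.2e} erg, "
            f"1-sigma dispersion factor ~ {np.exp(sigma):.1f}")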

  4. Probability density function of non-reactive solute concentration in heterogeneous porous formations

    Treesearch

    Alberto Bellin; Daniele Tonina

    2007-01-01

    Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to filling this gap is a probability distribution model for...

  5. Predictions of malaria vector distribution in Belize based on multispectral satellite data.

    PubMed

    Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J

    1996-03-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  6. Predictions of malaria vector distribution in Belize based on multispectral satellite data

    NASA Technical Reports Server (NTRS)

    Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.

    1996-01-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  7. Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation

    USGS Publications Warehouse

    Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.

    1998-01-01

    We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. Age-related increase in natal philopatry was positively related to higher breeding probability of older females. Declines in natal philopatry with year of banding corresponded negatively to a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.

  8. Fit of cast commercially pure titanium and Ti-6Al-4V alloy crowns before and after marginal refinement by electrical discharge machining.

    PubMed

    Contreras, Edwin Fernando Ruiz; Henriques, Guilherme Elias Pessanha; Giolo, Suely Ruiz; Nobilo, Mauro Antonio Arruda

    2002-11-01

    Titanium has been suggested as a replacement for alloys currently used in single-tooth restorations and fixed partial dentures. However, difficulties in casting have resulted in incomplete margins and discrepancies in marginal fit. This study evaluated and compared the marginal fit of crowns fabricated from a commercially pure titanium (CP Ti) and from Ti-6Al-4V alloy with crowns fabricated from a Pd-Ag alloy that served as a control. Evaluations were performed before and after marginal refinement by electrical discharge machining (EDM). Forty-five bovine teeth were prepared to receive complete cast crowns. Stone and copper-plated dies were obtained from impressions. Fifteen crowns were cast with each alloy (CP Ti, Ti-6Al-4V, and Pd-Ag). Marginal fit measurements (in micrometers) were recorded at 4 reference points on each casting with a traveling microscope. Marginal refinement with EDM was conducted on the titanium-based crowns, and measurements were repeated. Data were analyzed with the Kruskal-Wallis test, paired t test, and independent t test at a 1% probability level. The Kruskal-Wallis test showed significant differences among mean values of marginal fit for the as-cast CP Ti crowns (mean [SD], 83.9 [26.1] microm) and the other groups: Ti-6Al-4V (50.8 [17.2] microm) and Pd-Ag (45.2 [10.4] microm). After EDM marginal refinement, significant differences were detected among the Ti-6Al-4V crowns (24.5 [10.9] microm) and the other 2 groups: CP Ti (50.6 [20.0] microm) and Pd-Ag (not modified by EDM). Paired t test results indicated that marginal refinement with EDM effectively improved the fit of CP Ti crowns (from 83.9 to 50.6 microm) and Ti-6Al-4V crowns (from 50.8 to 24.5 microm). However, the difference in improvement between the two groups was not significant by t test. Within the limitations of this study, despite the superior results for Ti-6Al-4V, both groups of titanium-based crowns had clinically acceptable marginal fits. After EDM marginal refinement, the fit of cast CP Ti and Ti-6Al-4V crowns improved significantly.

  9. Marginal Iodine Deficiency Affects Dendritic Spine Development by Disturbing the Function of Rac1 Signaling Pathway on Cytoskeleton.

    PubMed

    Min, Hui; Dong, Jing; Wang, Yi; Wang, Yuan; Yu, Ye; Shan, Zhongyan; Xi, Qi; Teng, Weiping; Chen, Jie

    2017-01-01

    Iodine deficiency (ID)-induced thyroid hormone (TH) insufficiency during development leads to impairments of brain function, such as learning and memory. Marginal ID has been defined as a subtle insufficiency of TH, characterized by low thyroxine (T4) levels; whether marginal ID has adverse effects on the development of the hippocampus, and the underlying mechanisms, remained unclear. Thus, in the present study, we established Wistar rat models with an ID diet during pregnancy and lactation. The effects of marginal ID on long-term potentiation (LTP) were investigated in the hippocampal CA1 region. To study the development of dendritic spines in pyramidal cells, Golgi-Cox staining was conducted on postnatal day (PN) 7, PN14, PN21, and PN28. The activation of the Rac1 signaling pathway, which is essential for dendritic spine development through its regulation of the actin cytoskeleton, was also investigated. Our results showed that marginal ID slightly reduced the field excitatory postsynaptic potential (fEPSP) slope and the population spike (PS) amplitude. Besides, the density of dendritic spines during the critical period of rat postnatal development was mildly decreased, and we found no significant change of spine morphology in the marginal ID group. We also observed decreased activation of the Rac1 signaling pathway in pups subjected to maternal marginal ID. Our study may support the hypothesis that decreased T4 induced by marginal ID results in slight impairments of LTP and mild damage to dendritic spine development, which may be due to abnormal regulation of the cytoskeleton by the Rac1 signaling pathway.

  10. Deep structure of the continental margin and basin off Greater Kabylia, Algeria - New insights from wide-angle seismic data modeling and multichannel seismic interpretation

    NASA Astrophysics Data System (ADS)

    Aïdi, Chafik; Beslier, Marie-Odile; Yelles-Chaouche, Abdel Karim; Klingelhoefer, Frauke; Bracene, Rabah; Galve, Audrey; Bounif, Abdallah; Schenini, Laure; Hamai, Lamine; Schnurle, Philippe; Djellit, Hamou; Sage, Françoise; Charvis, Philippe; Déverchère, Jacques

    2018-03-01

    During the Algerian-French SPIRAL survey aimed at investigating the deep structure of the Algerian margin and basin, two coincident wide-angle and reflection seismic profiles were acquired in central Algeria, offshore Greater Kabylia, together with gravimetric, bathymetric and magnetic data. This 260 km-long offshore-onshore profile spans the Balearic basin, the central Algerian margin and the Greater Kabylia block up to the southward limit of the internal zones onshore. Results are obtained from modeling and interpretation of the combined data sets. The Algerian basin offshore Greater Kabylia is floored by a thin oceanic crust (~4 km) with P-wave velocities ranging between 5.2 and 6.8 km/s. In the northern Hannibal High region, the atypical 3-layer crustal structure is interpreted as volcanic products stacked over a thin crust similar to that bordering the margin and related to Miocene post-accretion volcanism. These results support a two-step back-arc opening of the west-Algerian basin, comprising oceanic crust accretion during the first southward stage, and a magmatic and probably tectonic reworking of this young oceanic basement during the second, westward, opening phase. The structure of the central Algerian margin is that of a narrow (~70 km), magma-poor rifted margin, with a wider zone of distal thinned continental crust than on the other margin segments. There is no evidence for mantle exhumation in the sharp ocean-continent transition, but transcurrent movements during the second opening phase may have changed its initial geometry. The Plio-Quaternary inversion of the margin related to ongoing convergence between Africa and Eurasia is expressed by a blind thrust system under the margin rising toward the surface at the slope toe, and by an isostatic disequilibrium resulting from opposite flexures of two plates decoupled at the continental slope. This disequilibrium is likely responsible for the peculiar asymmetrical shape of the crustal neck that may thus be a characteristic feature of inverted rifted margins.

  11. Curie Point Depth of the Iberian Peninsula and Surrounding Margins. A Thermal and Tectonic Perspective of its Evolution

    NASA Astrophysics Data System (ADS)

    Andrés, J.; Marzán, I.; Ayarza, P.; Martí, D.; Palomeras, I.; Torné, M.; Campbell, S.; Carbonell, R.

    2018-03-01

    In this work the thermal structure of the Iberian Peninsula is derived from magnetic data by calculating the depth to the bottom of the magnetized layer, assumed to be the Curie-point depth (CPD) isotherm, i.e., the depth at which magnetite becomes paramagnetic (580°C). Comparison of the CPD with crustal thickness maps, along with a heat flow map derived from the CPD, provides new insights on the lithospheric thermal regime. Within Iberia, the CPD isotherm lies at depths in the range of 17 to 29 km. This isotherm is shallow (<18 km) offshore, where the lithosphere is thinner. In continental Iberia, the NW Variscan domain presents a magnetic response that is most probably linked to thickening and later extension processes during the late Variscan Orogeny, which resulted in widespread crustal melting and emplacement of granites (in the Central Iberian Arc). The signature of the CPD at the Gibraltar Arc reveals a geometry consistent with the slab roll-back geodynamic model that shaped the western Mediterranean. In offshore areas, a broad extension of magnetized upper mantle is found. Serpentinization of the upper mantle, probably triggered in an extensional context, is proposed to account for the magnetic signal. The Atlantic margin presents up to 8 km of serpentinites, which, according to the identification of exhumed mantle, correlates with a hyperextended margin. The Mediterranean also presents generalized serpentinization, up to 6 km thick in the Algerian Basin. Furthermore, a heat flow map and a Moho temperature map derived from the CPD are presented.

  12. On-line prognosis of fatigue crack propagation based on Gaussian weight-mixture proposal particle filter.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo

    2018-01-01

    Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity. It is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool for prognostic problems that are affected by uncertainties. However, most studies adopt the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameters of the state equation. The proposed method is verified on a fatigue test of attachment lugs, which are important joint components in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
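
    The mixture-proposal idea can be sketched on a scalar crack-size state: each particle is proposed either from the transition prior or from a Gaussian centred on the latest measurement, and importance weights correct for the mixture density. The growth law, noise levels, and mixture weight below are placeholders, not the paper's identified model.

      import numpy as np

      rng = np.random.default_rng(0)
      n_p, w_mix, s_proc, s_meas = 500, 0.5, 0.02, 0.05

      def advance(a):
          """Deterministic part of a Paris-law-like growth step (assumed)."""
          return a + 1e-3 * (1.0 + a) ** 1.3

      a_true, particles = 1.0, rng.normal(1.0, 0.1, n_p)

      for k in range(30):
          a_true = advance(a_true) + rng.normal(0.0, s_proc)
          z = a_true + rng.normal(0.0, s_meas)      # on-line crack measurement
          prior_mean = advance(particles)
          pick = rng.random(n_p) < w_mix            # choose proposal component
          prop = np.where(pick, rng.normal(z, s_meas, n_p),
                          rng.normal(prior_mean, s_proc, n_p))
          lik = np.exp(-0.5 * ((z - prop) / s_meas) ** 2)
          trans = np.exp(-0.5 * ((prop - prior_mean) / s_proc) ** 2)
          q = (w_mix * np.exp(-0.5 * ((prop - z) / s_meas) ** 2) / s_meas
               + (1 - w_mix) * trans / s_proc)      # mixture proposal density
          w = lik * trans / np.maximum(q, 1e-300)   # importance weights
          w /= w.sum()
          particles = rng.choice(prop, size=n_p, p=w)   # multinomial resampling

      print(f"true crack size {a_true:.3f}, filter mean {particles.mean():.3f}")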

  13. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty-function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
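
    For reference, the kernel estimator in question has a simple form; the sketch below uses a Gaussian kernel with Silverman's rule-of-thumb scaling factor as a stand-in for the report's interactive and automatic selection procedures.

      import numpy as np

      def kde(x_eval, sample, h=None):
          """Gaussian kernel density estimate evaluated on a grid of points."""
          x_eval = np.asarray(x_eval, dtype=float)
          sample = np.asarray(sample, dtype=float)
          if h is None:                             # rule-of-thumb bandwidth
              h = 1.06 * sample.std(ddof=1) * len(sample) ** (-1 / 5)
          u = (x_eval[:, None] - sample[None, :]) / h
          return np.exp(-0.5 * u ** 2).sum(axis=1) / (
              len(sample) * h * np.sqrt(2 * np.pi))

      rng = np.random.default_rng(42)
      data = rng.normal(0.0, 1.0, size=200)
      xs = np.linspace(-4.0, 4.0, 9)
      print(np.round(kde(xs, data), 3))             # density estimate on the grid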

  14. A New Statistic for Evaluating Item Response Theory Models for Ordinal Data. CRESST Report 839

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2014-01-01

    We propose a new limited-information goodness of fit test statistic C[subscript 2] for ordinal IRT models. The construction of the new statistic lies formally between the M[subscript 2] statistic of Maydeu-Olivares and Joe (2006), which utilizes first and second order marginal probabilities, and the M*[subscript 2] statistic of Cai and Hansen…

  15. Inclusion of Radiation Environment Variability in Total Dose Hardness Assurance Methodology

    PubMed Central

    Xapsos, M.A.; Stauffer, C.; Phan, A.; McClure, S.S.; Ladbury, R.L.; Pellish, J.A.; Campola, M.J.; LaBel, K.A.

    2017-01-01

    Variability of the space radiation environment is investigated with regard to parts categorization for total dose hardness assurance methods. It is shown that it can have a significant impact. A modified approach is developed that uses current environment models more consistently and replaces the radiation design margin concept with one of failure probability during a mission. PMID:28804156
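
    The replacement of a fixed design margin by a mission failure probability amounts to comparing two distributions; the lognormal shapes and dose numbers below are invented for illustration only.

      import numpy as np

      rng = np.random.default_rng(11)
      n = 200_000
      dose_mission = rng.lognormal(np.log(30.0), 0.5, n)   # mission dose (krad)
      dose_fail = rng.lognormal(np.log(100.0), 0.3, n)     # part failure dose

      p_fail = np.mean(dose_mission > dose_fail)           # failure probability
      print(f"median margin ~{100.0 / 30.0:.1f}x, P(fail) ~ {p_fail:.4f}")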

  16. Tomographic measurement of joint photon statistics of the twin-beam quantum state

    PubMed

    Vasilyev; Choi; Kumar; D'Ariano

    2000-03-13

    We report the first measurement of the joint photon-number probability distribution for a two-mode quantum state created by a nondegenerate optical parametric amplifier. The measured distributions exhibit up to 1.9 dB of quantum correlation between the signal and idler photon numbers, whereas the marginal distributions are thermal as expected for parametric fluorescence.

  17. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    According to the characteristics of range images of coherent ladar and on the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF) while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by a block-matching operation and form a group. Pixels in the group are analyzed by probability statistics and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real range image of coherent ladar with 8 gray scales are denoised by this algorithm, and the results are compared with those of the median filter, the multi-template order mean filter, NLM, the median nonlocal mean filter with and without anatomical side information, and the unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in range images of coherent ladar are effectively suppressed by NLPS.
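
    The mean-versus-mode distinction is easy to demonstrate on a 1-D toy signal: pool the centre values of the best-matching blocks and take the modal histogram bin instead of a weighted mean. Everything below (1-D signal, parameter values) is a simplification of the 2-D range-image algorithm.

      import numpy as np

      def nlps_denoise(sig, half=2, n_similar=10, n_bins=16):
          sig = np.asarray(sig, dtype=float)
          out = sig.copy()
          for i in range(half, len(sig) - half):
              block = sig[i - half:i + half + 1]
              # Block matching: distance from this block to every other block.
              dists = [(np.sum((sig[j - half:j + half + 1] - block) ** 2), j)
                       for j in range(half, len(sig) - half)]
              group = [sig[j] for _, j in sorted(dists)[:n_similar]]
              hist, edges = np.histogram(group, bins=n_bins)
              k = np.argmax(hist)                   # bin of maximum probability
              out[i] = 0.5 * (edges[k] + edges[k + 1])  # modal value as estimate
          return out

      rng = np.random.default_rng(0)
      noisy = np.sin(np.linspace(0.0, 6.0, 200)) + rng.normal(0.0, 0.2, 200)
      print(np.round(nlps_denoise(noisy)[:8], 3))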

  18. Organic-rich sediments in ventilated deep-sea environments: Relationship to climate, sea level, and trophic changes

    NASA Astrophysics Data System (ADS)

    Bertrand, P.; Pedersen, T. F.; Schneider, R.; Shimmield, G.; Lallier-Verges, E.; Disnar, J. R.; Massias, D.; Villanueva, J.; Tribovillard, N.; Huc, A. Y.; Giraud, X.; Pierre, C.; VéNec-Peyré, M.-T.

    2003-02-01

    Sediments on the Namibian Margin in the SE Atlantic between water depths of ˜1000 and ˜3600 m are highly enriched in hydrocarbon-prone organic matter. Such sedimentation has occurred for more than 2 million years and is geographically distributed over hundreds of kilometers along the margin, so that the sediments of this region contain a huge concentrated stock of organic carbon. It is shown here that most of the variability in organic content is due to relative dilution by buried carbonates. This reflects both export productivity and diagenetic dissolution, not differences in either water column or bottom water anoxia and related enhanced preservation of organic matter. These observations offer a new mechanism for the formation of potential source rocks in a well-ventilated open ocean, in this case the South Atlantic. The organic richness is discussed in terms of a suite of probable controls including local wind-driven productivity (upwelling), trophic conditions, transfer efficiency, diagenetic processes, and climate-related sea level and deep circulation. The probability of past occurrences of such organic-rich facies in equivalent oceanographic settings at the edge of large oceanic basins should be carefully considered in deep offshore exploration.

  19. Stochastic transport models for mixing in variable-density turbulence

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  20. A simple probabilistic model of initiation of motion of poorly-sorted granular mixtures subjected to a turbulent flow

    NASA Astrophysics Data System (ADS)

    Ferreira, Rui M. L.; Ferrer-Boix, Carles; Hassan, Marwan

    2015-04-01

    Initiation of sediment motion is a classic problem of sediment and fluid mechanics that has been studied at a wide range of scales. By analysis at channel scale one means the investigation of a reach of a stream, sufficiently large to encompass a large number of sediment grains but sufficiently small not to experience important variations in key hydrodynamic variables. At this scale, and for poorly-sorted hydraulically rough granular beds, existing studies show a wide variation of the value of the critical Shields parameter. Such uncertainty constitutes a problem for engineering studies. To go beyond the Shields paradigm for the study of incipient motion at channel scale, this problem can be cast in probabilistic terms. An empirical probability of entrainment, which naturally accounts for size-selective transport, can be calculated at the scale of the bed reach, using a) the probability density functions (PDFs) of the flow velocities $f_u(u\mid x_n)$ over the bed reach, where $u$ is the flow velocity and $x_n$ is the location, b) the PDF of the variability of competent velocities for the entrainment of individual particles, $f_{u_p}(u_p)$, where $u_p$ is the competent velocity, and c) the concept of joint probability of entrainment and grain size. One must first divide the mixture into several classes $M$ and assign a corresponding frequency $p_M$. For each class, a conditional PDF of the competent velocity, $f_{u_p}(u_p\mid M)$, is obtained from the PDFs of the parameters that intervene in the model for the entrainment of a single particle:
    \[ \frac{u_p}{\sqrt{g(s-1)d_i}} = \Phi_u\left(\{C_k\},\{\varphi_k\},\psi,\frac{u_p d_i}{\nu^{(w)}}\right) \]
    where $\{C_k\}$ is a set of shape parameters that characterize the non-sphericity of the grain, $\{\varphi_k\}$ is a set of angles that describe the orientation of the particle axes and its positioning relative to its neighbours, $\psi$ is the skin friction angle of the particles, $u_p d_i/\nu^{(w)}$ is a particle Reynolds number, $d_i$ is the sieving diameter of the particle, $g$ is the acceleration of gravity and $\Phi_u$ is a general function. For the same class, the probability density function of the instantaneous turbulent velocities, $f_u(u\mid M)$, can be obtained from judicious laboratory or field work. From these probability densities, the empirical conditional probability of entrainment of class $M$ is
    \[ P(E\mid M) = \int_{-\infty}^{+\infty} P(u>u_p\mid M)\, f_{u_p}(u_p\mid M)\, du_p \]
    where $P(u>u_p\mid M) = \int_{u_p}^{+\infty} f_u(u\mid M)\, du$. Employing a frequentist interpretation of probability, in an actual bed reach subjected to a succession of $N$ (turbulent) flows, the above equation states that $N\,P(E\mid M)$ is the number of flows in which the grains of class $M$ are entrained. The joint probability of entrainment and class $M$ is given by the product $P(E\mid M)\,p_M$. Hence, the channel-scale empirical probability of entrainment is the marginal probability
    \[ P(E) = \sum_M P(E\mid M)\, p_M \]
    since the classes $M$ are mutually exclusive.
    Fractional bedload transport rates can be obtained from the probability of entrainment through
    \[ q_{s,M} = E_M\, \ell_{s,M} \]
    where $q_{s,M}$ is the bedload discharge in volume per unit width of size fraction $M$ and $\ell_{s,M}$ is the mean displacement length of class $M$; $E_M$ is the entrainment rate per unit bed area of that size fraction, calculated from the probability of entrainment as $E_M = P(E\mid M)\,p_M\,(1-\lambda)\,d/(2T)$, where $d$ is a characteristic diameter of grains on the bed surface, $\lambda$ is the bed porosity and $T$ is the integral time scale of the longitudinal velocity at the elevation of the crests of the roughness elements. Fractional transport rates were computed and compared with experimental data determined from bedload samples collected in a 12 m long, 40 cm wide channel under uniform flow conditions and sediment recirculation. The median diameter of the bulk bed mixture was 3.2 mm and the geometric standard deviation was 1.7. Shields parameters ranged from 0.027 to 0.067 while the boundary Reynolds number ranged between 220 and 376. Instantaneous velocities were measured with 2-component Laser Doppler Anemometry. The results of the probabilistic model exhibit generally good agreement with the laboratory data. However, the probability of entrainment of the smallest size fractions is systematically underestimated. This may be caused by phenomena that are absent from the model, for instance the increased magnitude of hydrodynamic actions following the displacement of a larger sheltering grain, and the fact that the collective entrainment of smaller grains following one large turbulent event is not accounted for. This work was partially funded by FEDER, program COMPETE, and by national funds through the Portuguese Foundation for Science and Technology (FCT), project RECI/ECM-HID/0371/2012.
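
    Once the two families of PDFs are specified, the entrainment probability reduces to one-dimensional quadrature. The sketch below assumes Gaussian shapes and invented class parameters purely to show the mechanics of the conditional and marginal probabilities.

      import numpy as np

      u = np.linspace(0.0, 3.0, 2001)               # velocity grid (m/s)
      du = u[1] - u[0]

      def gaussian(x, m, s):
          return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

      classes = {"fine": (0.3, 0.8), "medium": (0.5, 1.2), "coarse": (0.2, 1.6)}
      f_u = gaussian(u, 1.0, 0.3)                   # f_u(u|M), taken equal for all M
      ccdf = 1.0 - np.cumsum(f_u) * du              # P(u > u_p | M) on the grid

      P_E = 0.0
      for name, (p_M, up_mean) in classes.items():
          f_up = gaussian(u, up_mean, 0.2)          # f_{u_p}(u_p | M)
          P_E_M = np.sum(ccdf * f_up) * du          # conditional entrainment prob.
          P_E += P_E_M * p_M                        # marginal over the classes
          print(f"P(E|{name}) = {P_E_M:.3f}")
      print(f"P(E) = {P_E:.3f}")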

  1. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for to ensure infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.

  2. Public-Interest and Level-of-Evidence Considerations in Cold Fusion Public Policy

    NASA Astrophysics Data System (ADS)

    Grinshaw, Thomas

    2008-03-01

    Cold fusion (CF) protagonists and antagonists would no doubt agree that scientific processes have been challenged in the CF case. The public interest in CF turns on two questions: What are the potential benefits? What is the probability that CF is ``real''? Potential benefits have been agreed on since the CF announcement in 1989. The probability of CF reality may be assessed based on level of evidence (LoE): preponderance of evidence (PoE), clear and convincing evidence (CCE), and beyond a reasonable doubt (BRD). PoE, from civil law, indicates a probability of 50% or higher. BRD, from criminal law, has a probability approaching 90%. CCE, in between, thus has a 70-75% probability. CF experimental evidence, based on 1) initial affirmations, 2) the large number of corroborations since marginalization, and 3) particularly demonstrative experiments, reasonably indicates at least a PoE level of evidence for excess heat. A case can also be made for a CCE (but probably not for a BRD) LoE. In either the PoE or CCE scenario a clear need is demonstrated for a change in policy toward CF, given its potential benefits to humanity.

  3. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function of the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changes in the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function of the tension parameter is taken to be uniform, and the probability density functions of the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two reasons. First, it allows updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly. Second, it involves the construction of the likelihood function: in general, the likelihood is predefined using a known pdf; here, it is constructed in a new and different manner, using the considered system itself.
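
    On a parameter grid the Bayes step itself is compact; the mapping from tension to fundamental frequency below is a made-up placeholder standing in for the full voice-production model.

      import numpy as np

      q = np.linspace(0.5, 1.5, 301)                # tension parameter grid
      dq = q[1] - q[0]
      prior = np.ones_like(q) / (q[-1] - q[0])      # uniform prior pdf

      def f0_model(tension):                        # placeholder forward model
          return 100.0 + 80.0 * (tension - 1.0)     # f0 rises with tension (Hz)

      f0_obs, s_obs = 112.0, 4.0                    # measured f0 and its spread
      likelihood = np.exp(-0.5 * ((f0_obs - f0_model(q)) / s_obs) ** 2)

      posterior = prior * likelihood
      posterior /= np.sum(posterior) * dq           # normalized posterior pdf
      print(f"posterior mean tension: {np.sum(q * posterior) * dq:.3f}")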

  4. Oxygen isotope variations at the margin of a CAI records circulation within the solar nebula.

    PubMed

    Simon, Justin I; Hutcheon, Ian D; Simon, Steven B; Matzel, Jennifer E P; Ramon, Erick C; Weber, Peter K; Grossman, Lawrence; DePaolo, Donald J

    2011-03-04

    Micrometer-scale analyses of a calcium-, aluminum-rich inclusion (CAI) and the characteristic mineral bands mantling the CAI reveal that the outer parts of this primitive object have a large range of oxygen isotope compositions. The variations are systematic; the relative abundance of (16)O first decreases toward the CAI margin, approaching a planetary-like isotopic composition, then shifts to extremely (16)O-rich compositions through the surrounding rim. The variability implies that CAIs probably formed from several oxygen reservoirs. The observations support early and short-lived fluctuations of the environment in which CAIs formed, either because of transport of the CAIs themselves to distinct regions of the solar nebula or because of varying gas composition near the proto-Sun.

  5. Deep-sea biostratigraphy of prograding platform margins (Neogene, Bahamas): key evidence linked to depositional rhythm

    USGS Publications Warehouse

    Lidz, B.H.; McNeill, D.F.

    1995-01-01

    New foraminiferal evidence from two boreholes on the paleoshelf and slope of western Great Bahama Bank has wide-ranging implications for understanding formation and evolution of carbonate-platform margins. The new data, abundant well-preserved planktic foraminifera, were obtained by disaggregating samples from intercalated pelagic layers and selected parts of thick hemipelagic limestone. The new data define six units in one hole and seven in the other, bracket the biozones present and their ages, indicate different sedimentation rates, and show that within the limits of biostratigraphic resolution the biozones are correlative between the holes. Most importantly, the revised ages show that the paleoshelf borehole probably penetrated the late Miocene rather than middle Miocene. -from Authors

  6. Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows

    NASA Astrophysics Data System (ADS)

    Minier, Jean-Pierre; Profeta, Christophe

    2015-11-01

    This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up of particle position and velocity, Zp=(xp,Up), and is represented by its PDF p(t; yp,Vp), which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables in the particle state vector, for example the fluid velocity seen by particles, Zp=(xp,Up,Us), and consequently handles an extended PDF p(t; yp,Vp,Vs), which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, equivalently, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions for the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic PDF equations are mathematically ill posed. This is shown to be the consequence of the non-Markovian character of the stochastic process retained to describe the system and the use of an external colored noise. Furthermore, these developments bring out that well-posed PDF descriptions are essentially due to a proper choice of the variables selected to describe physical systems, and guidelines are formulated to emphasize the key role played by the notion of slow and fast variables.
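
    The central relation established in the article can be written compactly in the abstract's own notation (the kinetic PDF is recovered by integrating the extended PDF over the fluid velocity seen):

      % Kinetic description as the marginal of the dynamic one
      \[
        p(t; \mathbf{y}_p, \mathbf{V}_p)
          = \int p(t; \mathbf{y}_p, \mathbf{V}_p, \mathbf{V}_s)\, d\mathbf{V}_s
      \]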

  7. Effect of the Large Scale Environment on the Internal Dynamics of Early-Type Galaxies

    NASA Astrophysics Data System (ADS)

    Maubon, G.; Prugniel, Ph.

    We have studied the population-density relation in very sparse environments, from poor clusters to isolated galaxies, and we find that early-type galaxies with a young stellar population are preferentially found in the lowest-density environments. We show a marginal indication that this effect is due to an enhancement of star formation independent of the morphological segregation, but we failed to find any effect from the internal dynamics.

  8. Spatial and temporal variability of macroinvertebrates in spawning and non-spawning habitats during a salmon run in Southeast Alaska.

    PubMed

    Campbell, Emily Y; Merritt, Richard W; Cummins, Kenneth W; Benbow, M Eric

    2012-01-01

    Spawning salmon create patches of disturbance through redd digging which can reduce macroinvertebrate abundance and biomass in spawning habitat. We asked whether displaced invertebrates use non-spawning habitats as refugia in streams. Our study explored how the spatial and temporal distribution of macroinvertebrates changed during a pink salmon (Oncorhynchus gorbuscha) spawning run and compared macroinvertebrates in spawning (riffle) and non-spawning (refugia) habitats in an Alaskan stream. Potential refugia included: pools, stream margins and the hyporheic zone, and we also sampled invertebrate drift. We predicted that macroinvertebrates would decline in riffles and increase in drift and refugia habitats during salmon spawning. We observed a reduction in the density, biomass and taxonomic richness of macroinvertebrates in riffles during spawning. There was no change in pool and margin invertebrate communities, except insect biomass declined in pools during the spawning period. Macroinvertebrate density was greater in the hyporheic zone and macroinvertebrate density and richness increased in the drift during spawning. We observed significant invertebrate declines within spawning habitat; however in non-spawning habitat, there were less pronounced changes in invertebrate density and richness. The results observed may be due to spawning-related disturbances, insect phenology, or other variables. We propose that certain in-stream habitats could be important for the persistence of macroinvertebrates during salmon spawning in a Southeast Alaskan stream.

  9. Spatial and Temporal Variability of Macroinvertebrates in Spawning and Non-Spawning Habitats during a Salmon Run in Southeast Alaska

    PubMed Central

    Campbell, Emily Y.; Merritt, Richard W.; Cummins, Kenneth W.; Benbow, M. Eric

    2012-01-01

    Spawning salmon create patches of disturbance through redd digging which can reduce macroinvertebrate abundance and biomass in spawning habitat. We asked whether displaced invertebrates use non-spawning habitats as refugia in streams. Our study explored how the spatial and temporal distribution of macroinvertebrates changed during a pink salmon (Oncorhynchus gorbuscha) spawning run and compared macroinvertebrates in spawning (riffle) and non-spawning (refugia) habitats in an Alaskan stream. Potential refugia included: pools, stream margins and the hyporheic zone, and we also sampled invertebrate drift. We predicted that macroinvertebrates would decline in riffles and increase in drift and refugia habitats during salmon spawning. We observed a reduction in the density, biomass and taxonomic richness of macroinvertebrates in riffles during spawning. There was no change in pool and margin invertebrate communities, except insect biomass declined in pools during the spawning period. Macroinvertebrate density was greater in the hyporheic zone and macroinvertebrate density and richness increased in the drift during spawning. We observed significant invertebrate declines within spawning habitat; however in non-spawning habitat, there were less pronounced changes in invertebrate density and richness. The results observed may be due to spawning-related disturbances, insect phenology, or other variables. We propose that certain in-stream habitats could be important for the persistence of macroinvertebrates during salmon spawning in a Southeast Alaskan stream. PMID:22745724

  10. A new exact method for line radiative transfer

    NASA Astrophysics Data System (ADS)

    Elitzur, Moshe; Asensio Ramos, Andrés

    2006-01-01

    We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the `effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the CII 158-μm line but not by the 3P lines of OI.
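
    The flavor of the underlying algebra can be seen in the classical single-zone escape-probability closure for a two-level system, to which CEP adds zone-coupling terms; the rates below are arbitrary illustrative numbers.

      import numpy as np

      def beta(tau):
          """Escape probability for a uniform slab of line optical depth tau."""
          return (1.0 - np.exp(-tau)) / tau if tau > 0 else 1.0

      A = 1e-4                                      # Einstein A coefficient (1/s)
      g_u, g_l = 3.0, 1.0                           # statistical weights
      C_ul, T_kin, dE_over_k = 1e-6, 40.0, 5.0      # collision rate (1/s), K, K
      C_lu = C_ul * (g_u / g_l) * np.exp(-dE_over_k / T_kin)

      tau = 10.0
      # Statistical equilibrium with photon trapping:
      #   n_u (A * beta + C_ul) = n_l C_lu
      print(f"n_u/n_l = {C_lu / (A * beta(tau) + C_ul):.3f} (thick), "
            f"{C_lu / (A + C_ul):.3f} (optically thin)")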

  11. A paleolatitude reconstruction of the South Armenian Block (Lesser Caucasus) for the Late Cretaceous: Constraints on the Tethyan realm

    NASA Astrophysics Data System (ADS)

    Meijers, Maud J. M.; Smith, Brigitte; Kirscher, Uwe; Mensink, Marily; Sosson, Marc; Rolland, Yann; Grigoryan, Araik; Sahakyan, Lilit; Avagyan, Ara; Langereis, Cor; Müller, Carla

    2015-03-01

    The continental South Armenian Block - part of the Anatolide-Tauride South Armenian microplate - of Gondwana origin rifted from the African margin after the Triassic and collided with the Eurasian margin after the Late Cretaceous. During the Late Cretaceous, two northward dipping subduction zones were simultaneously active in the northern Neo-Tethys between the South Armenian Block in the south and the Eurasian margin in the north: oceanic subduction took place below the continental Eurasian margin and intra-oceanic subduction resulted in ophiolite obduction onto the South Armenian Block in the Late Cretaceous. The paleolatitude position of the South Armenian Block before its collision with Eurasia within paleogeographic reconstructions is poorly determined and limited to one study. This earlier study places the South Armenian Block at the African margin in the Early Jurassic. To reconstruct the paleolatitude history of the South Armenian Block, we sampled Upper Devonian-Permian and Cretaceous sedimentary rocks in Armenia. The sampled Paleozoic rocks have likely been remagnetized. Results from two out of three sites sampled in Upper Cretaceous strata pass fold tests and probably all three carry a primary paleomagnetic signal. The sampled sedimentary rocks were potentially affected by inclination shallowing. Therefore, two sites that consist of a large number of samples (> 100) were corrected for inclination shallowing using the elongation/inclination method. These are the first paleomagnetic data that quantify the South Armenian Block's position in the Tethys ocean between post-Triassic rifting from the African margin and post-Cretaceous collision with Eurasia. A locality sampled in Lower Campanian Eurasian margin sedimentary rocks and corrected for inclination shallowing, confirms that the corresponding paleolatitude falls on the Eurasian paleolatitude curve. The north-south distance between the South Armenian Block and the Eurasian margin just after Coniacian-Santonian ophiolite obduction was at most 1000 km.

  12. Natural constraints on exploring Antarctica's continental margin, existing geophysical and geological data basis, and proposed drilling program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, J.B.

    1987-05-01

    There have been a number of multichannel seismic reflection and seismic refraction surveys of the Antarctic continental shelf. While glacial erosion has left acoustic basement exposed on portions of the inner shelf, thick sedimentary sequences occur on the passive margin of east Antarctica. The thickness and age of these strata vary due to different breakup histories of the margin. Several sedimentary basins have been identified. Most are rift basins formed during the early stages of Antarctica's separation from other Gondwana continents and plateaus. The west Antarctic continental shelf is extensive, being approximately twice the size of the Gulf of Mexico shelf. It has been poorly surveyed to date, owing mainly to its perennial sea ice cover. Gradual subduction of the spreading center from south to north along the margin resulted in old active margin sequences being buried beneath passive margin sequences. The latter should increase in thickness from north to south along the margin although no data bear this out. Hydrocarbon potential on the northern portion of the west Antarctic margin is considered low due to a probable lack of reservoir rocks. Establishment of ice sheets on Antarctica caused destruction of land vegetation and greatly restricted siliciclastic sand-producing environments. So only sedimentary basins which contain pre-early Miocene deposits have good hydrocarbon prospectivity. The Antarctic continental shelf is the deepest in the world, averaging 500 m and in places being more than a kilometer deep. The shelf has been left rugged by glacial erosion and is therefore prone to sediment mass movement. Widespread sediment gravity flow deposits attest to this. The shelf is covered with sea ice most of the year and in a few areas throughout the year. Icebergs drift freely in the deep waters of the shelf; drift speeds of 1 to 2.5 km/year are not uncommon.

  13. Formation and evolution of magma-poor margins, an example of the West Iberia margin

    NASA Astrophysics Data System (ADS)

    Perez-Gussinye, Marta; Andres-Martinez, Miguel; Morgan, Jason P.; Ranero, Cesar R.; Reston, Tim

    2016-04-01

    The West Iberia-Newfoundland (WIM-NF) conjugate margins have been geophysically and geologically surveyed for the last 30 years and have arguably become a paradigm for magma-poor extensional margins. Here we present a coherent picture of the WIM-NF rift-to-drift evolution that emerges from these observations and numerical modeling, and point out important differences that may exist with other magma-poor margins worldwide. The WIM-NF is characterized by a continental crust that thins asymmetrically and a wide and symmetric continent-ocean transition (COT) interpreted to consist of exhumed and serpentinised mantle with magmatic products increasing oceanward. The architectural evolution of these margins is mainly dominated by cooling under very slow extension velocities (<~6 mm/yr half-rate) and a lower crust that most probably was not extremely weak at the start of rifting. These conditions lead to a system where initially deformation is distributed over a broad area and the upper crust, lower crust and lithosphere are decoupled. As extension progresses the upper crust, lower crust and mantle become tightly coupled and deformation localizes due to strengthening and cooling during rifting. Coupling leads to asymmetric asthenospheric uplift and weakening of the hangingwall of the active fault, where a new fault forms. This continued process leads to the formation of an array of sequential faults that dip and become younger oceanward. Here we show that these processes acting in concert: 1) reproduce the margin asymmetry observed at the WIM-NF, 2) explain the fault geometry evolution from planar, to listric, to detachment-like within one common Andersonian framework, 3) lead to the symmetric exhumation of mantle with little magmatism, and 4) explain the younging of the syn-rift towards the basin centre and imply that unconformities separating syn- and post-rift may be diachronous and younger towards the ocean. Finally, we show that different lower crustal rheologies lead to different patterns of extension and to an abrupt transition to oceanic crust, even at magma-poor margins.

  14. Performance Evaluation of LDPC Coding and Iterative Decoding System in BPM R/W Channel Affected by Head Field Gradient, Media SFD and Demagnetization Field

    NASA Astrophysics Data System (ADS)

    Nakamura, Yasuaki; Okamoto, Yoshihiro; Osawa, Hisashi; Aoi, Hajime; Muraoka, Hiroaki

    We evaluate the performance of the write-margin for the low-density parity-check (LDPC) coding and iterative decoding system in the bit-patterned media (BPM) R/W channel affected by the write-head field gradient, the media switching field distribution (SFD), the demagnetization field from adjacent islands and the island position deviation. It is clarified that the LDPC coding and iterative decoding system in the R/W channel using BPM at 3 Tbit/inch² has a write-margin of about 20%.

  15. Attenuated associations between increasing BMI and unfavorable lipid profiles in Chinese Buddhist vegetarians.

    PubMed

    Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun

    2013-01-01

    Obesity is related to hyperlipidemia and risk of cardiovascular disease. The health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivorous men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoproteins B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases of atherogenic lipids/lipoproteins and the probability of coronary heart disease.

  16. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
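
    The core observation is easy to reproduce: histogramming at least half a cycle of a sinusoid yields the arcsine-shaped PDF of a sine wave, whose shape a modulator can then manipulate. A minimal sketch:

      import numpy as np

      t = np.linspace(0.0, np.pi, 2000)             # half a carrier cycle
      samples = np.sin(t)

      hist, edges = np.histogram(samples, bins=20, density=True)
      print(np.round(hist, 2))                      # density piles up near the crest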

  17. A partial differential equation for pseudocontact shift.

    PubMed

    Charnock, G T P; Kuprov, Ilya

    2014-10-07

    It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.
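
    Whatever the exact source term, an elliptic equation of this type can be relaxed numerically once the source is tabulated on a grid; the Jacobi sweep below solves a generic Poisson-type problem with a placeholder source, not the actual PCS source built from the Hessian of the electron density.

      import numpy as np

      n = 64
      h = 1.0 / (n - 1)
      s = np.zeros((n, n))
      s[n // 2, n // 2] = 1.0 / h ** 2              # placeholder point-like source
      sigma = np.zeros_like(s)                      # field, zero Dirichlet BCs

      for _ in range(2000):                         # Jacobi relaxation sweeps
          sigma[1:-1, 1:-1] = 0.25 * (sigma[2:, 1:-1] + sigma[:-2, 1:-1]
                                      + sigma[1:-1, 2:] + sigma[1:-1, :-2]
                                      - h ** 2 * s[1:-1, 1:-1])
      print(f"field at centre: {sigma[n // 2, n // 2]:.4f}")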

  18. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  19. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

    It is important to study the effects of pedestrian crossing behaviors on traffic flow in order to address urban traffic congestion. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that accounts for the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability, and pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as vehicle density increases. When the vehicle braking probability is low, emergency braking becomes likely and causes large fluctuations in the saturated flow. The saturated flow decreases slightly as the pedestrian acceleration crossing probability increases. When the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitation of pedestrians when deciding whether to back off. The maximum flow is sensitive to the pedestrian generation probability and decreases rapidly as that probability increases; it is approximately zero when the probability exceeds 0.5. The simulations show that frequent crossing behavior strongly affects vehicle flow: as the pedestrian generation probability increases, the vehicle flow decreases and rapidly enters a heavily congested state.
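
    For readers unfamiliar with the baseline, a minimal NaSch sketch follows (vehicles only, on a periodic road, with no pedestrian conflict rules; all parameter values are illustrative).

    ```python
    # Minimal Nagel-Schreckenberg cellular automaton (baseline, vehicles only).
    import numpy as np

    rng = np.random.default_rng(0)
    L, N, VMAX, P_BRAKE, STEPS = 200, 40, 5, 0.3, 500

    pos = np.sort(rng.choice(L, size=N, replace=False)).astype(np.int64)
    vel = np.zeros(N, dtype=np.int64)
    flow = 0

    for _ in range(STEPS):
        gap = np.empty(N, dtype=np.int64)              # empty cells to car ahead
        gap[:-1] = pos[1:] - pos[:-1] - 1
        gap[-1] = pos[0] + L - pos[-1] - 1             # periodic closure
        vel = np.minimum(vel + 1, VMAX)                # 1. accelerate
        vel = np.minimum(vel, gap)                     # 2. brake to the gap
        slow = rng.random(N) < P_BRAKE                 # 3. random slowdown
        vel = np.where(slow, np.maximum(vel - 1, 0), vel)
        pos = pos + vel                                # 4. move (unwrapped)
        flow += int(vel.sum())

    print("mean flow (vehicles per cell per step):", flow / (L * STEPS))
    ```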

  20. The role of demographic compensation theory in incidental take assessments for endangered species

    USGS Publications Warehouse

    McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts

    2011-01-01

    Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions, as long as take is the unintended consequence of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not appreciably reduced. Demographic compensation requires some density-dependent limit on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline, or when there is no density dependence, the probability of quasi-extinction increases linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rate. We note, however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation should be evaluated when considering incidental take allowances, because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. For many endangered species, little is known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take, and implementing follow-up monitoring to assess the species' response, may be valuable for increasing knowledge and improving future decision making.
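
    A hedged sketch of the kind of comparison reported above: projecting a stochastic population with and without density-dependent survival under several take rates and recording the quasi-extinction frequency. All vital rates and thresholds below are invented for illustration; they are not the piping plover model's values.

    ```python
    # Toy population projection; every number here is hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    K, YEARS, REPS, QUASI = 2000.0, 50, 1000, 50.0

    def quasi_extinction_prob(take, density_dependent):
        hits = 0
        for _ in range(REPS):
            n = 500.0
            for _ in range(YEARS):
                births = rng.poisson(0.8 * n)          # illustrative fecundity
                # Survival declines with density when compensation is possible.
                s = 0.75 / (1.0 + n / K) if density_dependent else 0.55
                n = (n + births) * s * (1.0 - take)    # take removes a fraction
                if n < QUASI:
                    hits += 1
                    break
        return hits / REPS

    for take in (0.0, 0.02, 0.05):
        print(take,
              quasi_extinction_prob(take, density_dependent=True),
              quasi_extinction_prob(take, density_dependent=False))
    ```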

  1. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Under the Gaussian assumption for the probability-density functions, it appears difficult to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to, e.g., the underlying physics or chemistry that are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration lies between 0 and 1. Several methods have been proposed to bring inequality constraints into the Kalman-filter formalism. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variable range is distributed equally over the allowed part, as proposed by Shimada et al. (1998). A problem with this method, however, is that the probability that, e.g., the sea-ice concentration is exactly zero remains zero. The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is placed in a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while most of the benefits of the Kalman-filter formalism are retained. For the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional and large-scale systems are discussed.
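
    A minimal numerical sketch of the proposed prior, assuming a 1-D state with a non-negativity constraint and invented numbers: the mass of a Gaussian below zero is placed in a delta at zero, and a Gaussian likelihood update preserves that truncated-Gaussian-plus-delta structure.

    ```python
    # Grid-based Bayes update for a truncated Gaussian with a delta at zero.
    import numpy as np
    from scipy.stats import norm

    x = np.linspace(0.0, 2.0, 2001)         # allowed range (e.g. concentration)
    dx = x[1] - x[0]

    mu, sig = -0.2, 0.5                     # background Gaussian partly below 0
    w0 = norm.cdf(0.0, mu, sig)             # truncated mass -> delta weight at 0
    prior = norm.pdf(x, mu, sig)            # continuous part on x >= 0

    like = norm.pdf(1.0, x, 0.4)            # observation y = 1.0, error std 0.4

    post = prior * like                     # continuous part of the posterior
    post_w0 = w0 * norm.pdf(1.0, 0.0, 0.4)  # delta at zero, reweighted by Bayes
    Z = post.sum() * dx + post_w0           # normalize over both parts
    post, post_w0 = post / Z, post_w0 / Z

    print("posterior P(x = 0) =", post_w0)  # nonzero, unlike pdf truncation
    ```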

  2. Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps

    DOE Data Explorer

    Adam Brandt

    2015-11-15

    This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.

  3. Pleural pressure theory revisited: a role for capillary equilibrium

    PubMed Central

    Caruana-Gauci, Roberto; Manche, Alexander; Gauci, Marilyn; Chetcuti, Stanley; Bertolaccini, Luca

    2017-01-01

    Background: Theories elucidating pleural pressures should explain all observations, including the equal and opposite recoil of the chest wall and lungs, the less-than-expected pleural hydrostatic gradient and its variation at lobar margins, why pleural pressures are negative, and how pleural fluid circulation functions. Methods: A theoretical model describing equilibrium between buoyancy, hydrostatic forces, and capillary forces is proposed. The capillary equilibrium model described depends on control of pleural fluid volume and protein content, powered by an active pleural pump. Results: The interaction between buoyancy forces, hydrostatic pressure, and capillary pressure was calculated, and values for pleural thickness and pressure were determined using values for surface tension, contact angle, and pleural fluid and lung densities found in the literature. The modelling explains the differing hydrostatic vertical pleural pressure gradient at the lobar margins through buoyancy forces between the pleural fluid and the lung floating in it, in accordance with Archimedes' hydrostatic paradox. The capillary equilibrium model satisfies all salient requirements for a pleural pressure model, with negative pressures maximal at the apex, equal and opposite forces in the lung and chest wall, and circulatory pump action. Conclusions: This model predicts that pleural effusions cannot occur in emphysema unless concomitant heart failure increases lung density. It also explains how the non-confluence of the lung with the chest wall (e.g., at lobar margins) makes the pleural pressure more negative, and why pleural pressures would be higher after an upper lobectomy than after a lower lobectomy. Pathological changes in pleural fluid composition and lung density alter the equilibrium between capillarity and buoyancy hydrostatic pressure to promote pleural effusion formation. PMID:28523153

  4. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  5. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
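
    The linearity result follows from beta-binomial conjugacy; a short sketch with illustrative parameters verifies that the posterior mean, i.e. the MMSE estimate, is linear in the observed count.

    ```python
    # Posterior mean under a beta prior and binomial observations is linear in k.
    from scipy.stats import beta

    a, b, n = 3.0, 5.0, 20                  # illustrative prior parameters, trials

    for k in range(0, 21, 5):
        # Conjugacy: posterior is Beta(a + k, b + n - k), so the MMSE estimate
        # (a + k) / (a + b + n) is a linear function of the count k.
        post_mean = beta(a + k, b + n - k).mean()
        print(k, post_mean, (a + k) / (a + b + n))
    ```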

  6. Information Density and Syntactic Repetition.

    PubMed

    Temperley, David; Gildea, Daniel

    2015-11-01

    In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
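
    As a toy illustration of one of the measures above (not the authors' pipeline), maximum-likelihood bigram probabilities can be computed directly from counts:

    ```python
    # Tiny bigram model; real analyses use large parsed corpora.
    from collections import Counter

    tokens = "the cat and the dog and the bird".split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))

    def p_bigram(w1, w2):
        # P(w2 | w1) = count(w1 w2) / count(w1)
        return bigrams[(w1, w2)] / unigrams[w1]

    print(p_bigram("the", "cat"))  # 1/3: "the" occurs 3 times, once before "cat"
    ```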

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kastner, S.O.; Bhatia, A.K.

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to the "taboo" probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  8. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  9. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems.

    PubMed

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
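
    The paper solves the inverse problem; as background, a hedged sketch of the forward matrix construction (an Ulam-type discretization of the Frobenius-Perron operator for a known map) shows how a sequence of densities arises from a row-stochastic matrix.

    ```python
    # Ulam-type matrix approximation of the Frobenius-Perron operator.
    import numpy as np

    B, SAMPLES = 50, 200                       # partition cells, samples per cell
    edges = np.linspace(0.0, 1.0, B + 1)
    f = lambda x: 4.0 * x * (1.0 - x)          # example map on [0, 1] (logistic)

    P = np.zeros((B, B))
    for i in range(B):
        xs = np.linspace(edges[i], edges[i + 1], SAMPLES, endpoint=False)
        js = np.digitize(f(xs), edges) - 1
        for j in js:
            P[i, min(max(j, 0), B - 1)] += 1.0 / SAMPLES   # row-stochastic

    # A density, stored as a probability vector over cells, evolves as d @ P.
    d = np.full(B, 1.0 / B)
    for _ in range(100):
        d = d @ P
    print("approximate invariant density, first cells:", (d * B)[:5])
    ```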

  10. The risks and returns of stock investment in a financial market

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Mei, Dong-Cheng

    2013-03-01

    The risks and returns of stock investment are discussed by numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. By analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal investment risk, maximal average stock price returns, and strongest stability of returns for strong elasticity of demand of stocks (EDS), with the opposite results for weak EDS; (ii) increasing the initial position reduces the investment risk, strengthens the average stock price returns, and enhances the stability of returns. Finally, the probability density function of stock price returns, the probability density function of volatility, and the correlation function of stock price returns are compared with those reported in the literature, and good agreement is found.
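
    A hedged Monte Carlo sketch of a standard (undelayed) Heston model with invented parameters, showing how the PDF of returns can be estimated numerically in the spirit of the study's simulations.

    ```python
    # Full-truncation Euler scheme for a standard Heston model (illustrative).
    import numpy as np

    rng = np.random.default_rng(2)
    PATHS, STEPS, DT = 20_000, 250, 1.0 / 250
    kappa, theta, xi, rho, mu = 2.0, 0.04, 0.3, -0.5, 0.05

    v = np.full(PATHS, theta)                  # variance paths
    log_ret = np.zeros(PATHS)                  # log returns
    for _ in range(STEPS):
        z1 = rng.standard_normal(PATHS)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(PATHS)
        log_ret += (mu - 0.5 * v) * DT + np.sqrt(v * DT) * z1
        v = np.maximum(v + kappa * (theta - v) * DT + xi * np.sqrt(v * DT) * z2, 0.0)

    # Histogram estimate of the PDF of one-year log returns.
    pdf, edges = np.histogram(log_ret, bins=80, density=True)
    print("return mean and std:", log_ret.mean(), log_ret.std())
    ```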

  11. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of one-step replica symmetry breaking. The two random variables (the exchange interaction Jij and the random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, which may take positive or negative values. The thermodynamic properties, the three different phase diagrams, and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at zero and non-zero temperatures. The low-temperature negative entropy controversy, a result of the replica symmetry approach, is partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
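
    A minimal sketch of the disorder described above, with illustrative variances: correlated (Jij, hi) pairs drawn from a joint Gaussian with correlation coefficient rho.

    ```python
    # Sampling correlated exchange couplings and fields (illustrative values).
    import numpy as np

    rng = np.random.default_rng(3)
    N, rho = 100, 0.6
    sigma_J, sigma_h = 1.0 / np.sqrt(N), 0.5   # SK-type scaling for J

    cov = [[sigma_J**2, rho * sigma_J * sigma_h],
           [rho * sigma_J * sigma_h, sigma_h**2]]

    pairs = rng.multivariate_normal([0.0, 0.0], cov, size=N * (N - 1) // 2)
    print("empirical correlation:", np.corrcoef(pairs.T)[0, 1])
    ```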

  12. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed-pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squared errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
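
    The two-class simplification rests on a single matrix that simultaneously diagonalizes both class covariance matrices; a short sketch using a generalized eigendecomposition (with random stand-in matrices) follows.

    ```python
    # Simultaneous diagonalization of two covariance matrices via SciPy.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(4)
    A = rng.standard_normal((4, 4)); S1 = A @ A.T + 4 * np.eye(4)  # class 1 cov
    B = rng.standard_normal((4, 4)); S2 = B @ B.T + 4 * np.eye(4)  # class 2 cov

    vals, W = eigh(S1, S2)               # solves S1 w = lambda * S2 w
    print(np.round(W.T @ S1 @ W, 6))     # diagonal: the generalized eigenvalues
    print(np.round(W.T @ S2 @ W, 6))     # identity
    ```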

  13. Characterization of nonGaussian atmospheric turbulence for prediction of aircraft response statistics

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1977-01-01

    Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationary assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationary assumption were developed.

  14. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  15. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  16. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct-classification percentages of the nonparametric sign test, Wilcoxon's signed-rank test, and the K-class classifier with the performance of the Bayes classifier. The performance is determined for data that have Gaussian, Laplacian, and Rayleigh probability density functions. The correct-classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight, and sixteen samples. The K-class classifier performed very well relative to the other classifiers. Being a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even when they are not. The K-class classifier has the advantage over the Bayes classifier that it works well with non-Gaussian data without requiring the probability density function of the data. It should be noted that the data in this experiment were always unimodal.

  17. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier- Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  18. Spatial and temporal Brook Trout density dynamics: Implications for conservation, management, and monitoring

    USGS Publications Warehouse

    Wagner, Tyler; Jefferson T. Deweber,; Jason Detar,; Kristine, David; John A. Sweka,

    2014-01-01

    Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
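
    As a hedged aside, a constant-capture-probability removal estimator for three-pass data can be fit by maximum likelihood in a few lines (the study itself uses hierarchical Bayesian models; the catch numbers below are invented).

    ```python
    # Simple conditional-ML removal estimator for three-pass depletion data.
    import numpy as np
    from scipy.optimize import minimize_scalar

    catches = np.array([55, 30, 17])           # hypothetical pass-by-pass catches
    T = catches.sum()

    def neg_log_lik(p):
        # Probability a fish is first removed on pass k: p * (1 - p)^(k - 1).
        q = p * (1 - p) ** np.arange(3)
        # Multinomial likelihood conditional on capture in one of the passes.
        return -(catches * np.log(q / q.sum())).sum()

    res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
    p_hat = res.x
    N_hat = T / (1 - (1 - p_hat) ** 3)          # expand total catch to abundance
    print(f"capture probability {p_hat:.3f}, abundance {N_hat:.1f}")
    ```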

  19. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
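
    A short sketch of evaluating the density in question and its confidence limits with SciPy's noncentral chi-square distribution (degrees of freedom and noncentrality are illustrative, not the paper's values).

    ```python
    # Noncentral chi-square density and confidence limits (illustrative numbers).
    import numpy as np
    from scipy.stats import ncx2

    k, lam = 32, 10.0            # degrees of freedom and noncentrality parameter
    x = np.linspace(1.0, 80.0, 5)
    print(ncx2.pdf(x, k, lam))                 # density values
    print(ncx2.ppf([0.025, 0.975], k, lam))    # 95% confidence limits
    ```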

  20. Structure and degree of magmatism of North and South Atlantic rifted margins

    NASA Astrophysics Data System (ADS)

    Faleide, Jan Inge; Breivik, Asbjørn J.; Blaich, Olav A.; Tsikalas, Filippos; Planke, Sverre; Mansour Abdelmalak, Mohamed; Mjelde, Rolf; Myklebust, Reidun

    2014-05-01

    The structure and evolution of conjugate rifted margins in the South and North Atlantic have been studied mainly based on seismic reflection and refraction profiles, complemented by potential field data and plate reconstructions. All margins exhibit distinct along-margin structural and magmatic changes reflecting both structural inheritance extending back to a complex pre-breakup geological history and the final breakup processes. The sedimentary basins at the conjugate margins developed as a result of multiple phases of rifting, associated with complex time-dependent thermal structure of the lithosphere. A series of conjugate crustal transects reveal tectonomagmatic asymmetry, both along-strike and across the conjugate margin systems. The continent-ocean transitional domain along the magma-dominated margin segments is characterized by a large volume of flood basalts and high-velocity/high-density lower crust emplaced during and after continental breakup. Both the volume and duration of excess magmatism varies. The extrusive and intrusive complexes make it difficult to pin down a COB to be used in plate reconstructions. The continent-ocean transition is usually well defined as a rapid increase of P-wave velocities at mid- to lower crustal levels. The transition is further constrained by comparing the mean P-wave velocity to the thickness of the crystalline crust. By this comparison we can also address the magmatic processes associated with breakup, whether they are convection dominated or temperature dominated. In the NE Atlantic there is a strong correlation between magma productivity and early plate spreading rate, suggesting a common cause. A model for the breakup-related magmatism should be able to explain this correlation, but also the magma production peak at breakup, the along-margin magmatic segmentation, and the active mantle upwelling. It is likely that mantle plumes (Iceland in the NE Atlantic, Tristan da Cunha in the South Atlantic) may have influenced the volume of magmatism but they did not necessarily alter the process of rifted margin formation, implying that parts of the margins may have much in common with more magma-poor margins. Conjugate margin segments from the North and South Atlantic will be compared and discussed with particular focus on the tectonomagmatic processes associated with continental breakup.
