Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
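As an illustration of this idea (a minimal sketch under simplifying assumptions, not the paper's actual statistic), one can estimate by Monte Carlo how often a draw from the claimed density lands where the density is at or below the smallest density value attained by the observed data; the function and sampler names below are hypothetical.

```python
import numpy as np

def low_density_pvalue(pdf, sampler, data, n_mc=100_000, seed=0):
    """Sketch of a low-probability-region test: estimate, under the claimed
    density, the chance that a single draw has density no larger than the
    smallest density attained by the observed draws, then apply a crude
    Bonferroni-style correction for the number of observations."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    f_min = pdf(data).min()            # smallest density among the data
    mc = sampler(n_mc, rng)            # Monte Carlo draws from the claimed density
    q = np.mean(pdf(mc) <= f_min)      # P_f( f(X) <= f_min )
    return min(1.0, len(data) * q)     # a tiny value casts doubt on the density

# usage sketch with a standard normal as the claimed density
# from scipy.stats import norm
# p = low_density_pvalue(norm.pdf, lambda m, r: r.standard_normal(m), my_data)
```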
ERIC Educational Resources Information Center
Balasooriya, Uditha; Li, Jackie; Low, Chan Kee
2012-01-01
For any density function (or probability function), there always corresponds a "cumulative distribution function" (cdf). It is a well-known mathematical fact that the cdf is more general than the density function, in the sense that for a given distribution the former may exist without the existence of the latter. Nevertheless, while the…
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns as well as their cumulative gain probability and probability density functions of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources such as the emerging terrestrial system, High Density Fixed Service, with the Ka-band receiving antenna systems in Goldstone Station of the Deep Space Network.
Goal-Oriented Probability Density Function Methods for Uncertainty Quantification
2015-12-11
approximations or data-driven approaches. We investigated the accuracy of analytical techniques based on Kubo-Van Kampen operator cumulant expansions for Langevin equations driven by fractional Brownian motion and other noises.
Electrofishing capture probability of smallmouth bass in streams
Dauwalter, D.C.; Fisher, W.L.
2007-01-01
Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
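The adjustment step described above amounts to dividing the catch by the cumulative capture probability across passes. A minimal sketch under the assumption of independent passes follows; the study's actual capture probabilities come from the fitted logistic regression, so the numbers below are purely illustrative.

```python
def abundance_estimate(n_captured, pass_capture_probs):
    """Adjust the raw catch by the cumulative capture probability:
    the chance of being captured on at least one of the electrofishing
    passes is 1 - prod(1 - p_j), assuming passes act independently."""
    p_miss = 1.0
    for p in pass_capture_probs:
        p_miss *= (1.0 - p)
    p_cum = 1.0 - p_miss
    return n_captured / p_cum

# e.g. three passes with declining capture probability for large fish
# n_hat = abundance_estimate(42, [0.45, 0.35, 0.30])
```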
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
NASA Astrophysics Data System (ADS)
Wellons, Sarah; Torrey, Paul
2017-06-01
Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift z_f probabilities of being progenitors/descendants of a galaxy population at another redshift z_0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
Computing thermal Wigner densities with the phase integration method.
Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Survival potential of Phytophthora infestans sporangia in relation to meteorological factors
USDA-ARS?s Scientific Manuscript database
Assessment of meteorological factors coupled with sporangia survival curves may enhance effective management of potato late blight, caused by Phytophthora infestans. We utilized a non-parametric density estimation approach to evaluate the cumulative probability of occurrence of temperature and relat...
A Seakeeping Performance and Affordability Tradeoff Study for the Coast Guard Offshore Patrol Cutter
2016-06-01
[Report excerpts: a seakeeping index polar plot for Sea State 4, with all headings relative to the wave motion and velocity given in meters per second; Figure 15, probability and cumulative density functions of annual sea state occurrences in the open ocean, North Pacific; ...criteria at a given sea state. Probability distribution functions are available that describe the likelihood that an operational area will experience...]
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
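A minimal Monte Carlo sketch of the quantity being approximated (not the paper's algorithm, which replaces such brute-force sampling with pre-computed approximations): sample the three Gaussian components with possibly unequal standard deviations and summarize the magnitude.

```python
import numpy as np

def tcm_delta_v_statistics(sigmas, n=200_000, quantiles=(0.5, 0.9, 0.99), seed=0):
    """Monte Carlo statistics of |Delta v| for a zero-mean Gaussian vector
    with per-axis standard deviations `sigmas`: mean, standard deviation,
    and points of the inverse cumulative distribution function."""
    rng = np.random.default_rng(seed)
    components = rng.standard_normal((n, 3)) * np.asarray(sigmas)
    magnitude = np.linalg.norm(components, axis=1)
    return (magnitude.mean(),
            magnitude.std(ddof=1),
            np.quantile(magnitude, quantiles))

# e.g. unequal axis dispersions of 1.0, 2.0 and 0.5 m/s
# mean_dv, std_dv, dv_quantiles = tcm_delta_v_statistics([1.0, 2.0, 0.5])
```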
Back in the saddle: large-deviation statistics of the cosmic log-density field
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.
2016-08-01
We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.
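Schematically (following standard large-deviation results rather than the paper's specific derivation, so notation and prefactors are assumptions), the one-cell PDF follows from the cumulant generating function by an inverse Laplace transform evaluated at its saddle point:

```latex
% cumulant generating function and its Legendre transform (rate function)
\varphi(\lambda) \;=\; \sum_{p \ge 1} \frac{\lambda^{p}}{p!}\,\langle \rho^{p}\rangle_{c},
\qquad
\Psi(\rho) \;=\; \sup_{\lambda}\,\bigl[\lambda\rho - \varphi(\lambda)\bigr],
% saddle-point (Laplace) approximation of the density PDF
\mathcal{P}(\rho) \;=\; \int_{-i\infty}^{+i\infty} \frac{\mathrm{d}\lambda}{2\pi i}\;
  e^{-\lambda\rho + \varphi(\lambda)}
\;\approx\; \sqrt{\frac{\Psi''(\rho)}{2\pi}}\; e^{-\Psi(\rho)} .
```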
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
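One way to see how a trial CDF can be scored with single order statistics (a simplified sketch, not the paper's sample-size-invariant universal scoring function): if the trial CDF were the true one, the sorted transformed data would behave like uniform order statistics, whose k-th value follows a Beta(k, n-k+1) law.

```python
import numpy as np
from scipy import stats

def order_statistic_score(trial_cdf, sample):
    """Quasi-log-likelihood of a trial CDF: evaluate the sorted
    CDF-transformed sample against the Beta(k, n-k+1) densities obeyed
    by uniform order statistics; atypical fluctuations lower the score."""
    u = np.sort(trial_cdf(np.asarray(sample)))
    n = len(u)
    k = np.arange(1, n + 1)
    return stats.beta.logpdf(u, k, n - k + 1).sum()
```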
Spectrum sensing based on cumulative power spectral density
NASA Astrophysics Data System (ADS)
Nasser, A.; Mansour, A.; Yao, K. C.; Abdallah, H.; Charara, H.
2017-12-01
This paper presents new spectrum sensing algorithms based on the cumulative power spectral density (CPSD). The proposed detectors examine the CPSD of the received signal to make a decision on the absence/presence of the primary user (PU) signal. Those detectors require the whiteness of the noise in the band of interest. The false alarm and detection probabilities are derived analytically and simulated under Gaussian and Rayleigh fading channels. Our proposed detectors present better performance than the energy detector (ED) or the cyclostationary detector (CSD). Moreover, in the presence of noise uncertainty (NU), they are shown to provide more robustness than ED, with less performance loss. To remove the dependence on the NU, we modified our algorithms to be independent of the noise variance.
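A hedged sketch of one way such a detector can be realized (the paper's exact statistic and threshold setting are not reproduced here): under the whiteness assumption, the normalized cumulative PSD of noise alone grows linearly with frequency, so a primary-user signal shows up as a deviation from that straight line.

```python
import numpy as np

def cpsd_detect(x, threshold):
    """Sketch of a cumulative-PSD detector: compare the normalized
    cumulative periodogram of the received samples with the linear ramp
    expected for white noise; exceeding the threshold declares PU present."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    cpsd = np.cumsum(psd) / psd.sum()              # normalized cumulative PSD
    ramp = np.arange(1, cpsd.size + 1) / cpsd.size  # white-noise reference
    statistic = np.max(np.abs(cpsd - ramp))         # largest deviation
    return statistic > threshold, statistic
```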
NASA Technical Reports Server (NTRS)
Lanzi, R. James; Vincent, Brett T.
1993-01-01
The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of this data with observed sounding rocket re-entry body damage characteristics to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket reentry analysis between the known damage level/flight environment relationships and the predicted flight environment.
Modeling pore corrosion in normally open gold-plated copper connectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Battaile, Corbett Chandler; Moffat, Harry K.; Sun, Amy Cha-Tien
2008-09-01
The goal of this study is to model the electrical response of gold plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (essentially a simplified version of the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the macroscopic electrical resistance of the aged surface as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to complete the numerical model. Comparisons are made to the experimentally observed number density of corrosion sites, the size distribution of corrosion product blooms, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area along with a probability for bloom-growth extinction proportional to the corrosion product bloom volume. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the electrical contact resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms has been weighted more heavily.
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can be best characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
NASA Astrophysics Data System (ADS)
Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe
2016-08-01
In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be rapidly reached allowing to get sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ -cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
Statistical properties of two sine waves in Gaussian noise.
NASA Technical Reports Server (NTRS)
Esposito, R.; Wilson, L. R.
1973-01-01
A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seem to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and the instantaneous value, the moments of these distributions, and the relative cumulative density function (cdf).
Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d + 1)$-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.
Higher-order cumulants and spectral kurtosis for early detection of subterranean termites
NASA Astrophysics Data System (ADS)
de la Rosa, Juan José González; Moreno Muñoz, Antonio
2008-02-01
This paper deals with termite detection in non-favorable SNR scenarios via signal processing using higher-order statistics. The results could be extrapolated to all impulse-like insect emissions; the situation involves non-destructive termite detection. Fourth-order cumulants in time and frequency domains enhance the detection and complete the characterization of termite emissions, which are non-Gaussian in essence. Sliding higher-order cumulants pinpoint distinctive time instants, even for low-amplitude impulses, complementing the sliding variance, which only reveals power excesses in the signal. The spectral kurtosis reveals non-Gaussian characteristics (the peakedness of the probability density function) associated with these non-stationary measurements, especially in the near-ultrasound frequency band. Contrasted estimators have been used to compute the higher-order statistics. The novel findings are shown via graphical examples.
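A minimal sketch of the two ingredients described above (window lengths and estimators are placeholders, not the contrasted estimators used in the paper): a sliding fourth-order statistic and a spectral kurtosis computed from short-time spectra.

```python
import numpy as np
from scipy.stats import kurtosis

def sliding_kurtosis(x, win=1024, hop=256):
    """Sliding excess kurtosis: large values flag impulsive, non-Gaussian
    bursts even when the sliding variance barely changes."""
    return np.array([kurtosis(x[i:i + win])
                     for i in range(0, len(x) - win + 1, hop)])

def spectral_kurtosis(x, win=1024, hop=256):
    """Spectral kurtosis: excess kurtosis of the short-time spectral
    magnitudes along time, one value per frequency bin."""
    frames = np.array([np.abs(np.fft.rfft(x[i:i + win] * np.hanning(win)))
                       for i in range(0, len(x) - win + 1, hop)])
    return kurtosis(frames, axis=0)
```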
NASA Astrophysics Data System (ADS)
Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan
2017-10-01
Cumulative pulses detection with an appropriate cumulative pulses number and threshold has the ability to improve the detection performance of a pulsed laser ranging system with a GM-APD. In this paper, based on Poisson statistics and the multi-pulse cumulative process, the cumulative detection probabilities and their influence factors are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting the range accuracy and precision are discussed. The results show that cumulative pulses detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative pulses detection deteriorates quickly. Range accuracy and precision are further important parameters for evaluating detection performance; the echo intensity and pulse width are their main influence factors, with higher accuracy and precision obtained for stronger echoes and narrower echo pulses. For a 5-ns echo pulse width and an echo intensity larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
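A hedged sketch of the Poisson-based cumulative detection calculation (the symbols and the thresholding rule are illustrative assumptions, not the paper's exact model): each laser pulse triggers the detector in the target bin with probability 1 - exp(-(n_s + n_b)), and a target is declared when at least k of N accumulated pulses trigger in that bin.

```python
import numpy as np
from scipy.stats import binom

def cumulative_detection_probability(n_signal, n_noise, n_pulses, k_threshold):
    """Probability of declaring a target when at least `k_threshold` of
    `n_pulses` pulses produce an avalanche in the target time bin;
    the per-pulse trigger probability follows Poisson statistics."""
    p_single = 1.0 - np.exp(-(n_signal + n_noise))
    return binom.sf(k_threshold - 1, n_pulses, p_single)

def cumulative_false_alarm_probability(n_noise, n_pulses, k_threshold):
    """Same accumulation rule applied with noise photons only."""
    return cumulative_detection_probability(0.0, n_noise, n_pulses, k_threshold)
```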
de la Fuente, Jaime; Garrett, C Gaelyn; Ossoff, Robert; Vinson, Kim; Francis, David O; Gelbard, Alexander
2017-11-01
To examine the distribution of clinic and operative pathology in a tertiary care laryngology practice. Probability density and cumulative distribution analyses (Pareto analysis) were used to rank order laryngeal conditions seen in an outpatient tertiary care laryngology practice and those requiring surgical intervention during a 3-year period. Among 3783 new clinic consultations and 1380 operative procedures, voice disorders were the most common primary diagnostic category seen in clinic (n = 3223), followed by airway (n = 374) and swallowing (n = 186) disorders. Within the voice strata, the most common primary ICD-9 code used was dysphonia (41%), followed by unilateral vocal fold paralysis (UVFP) (9%) and cough (7%). Among new voice patients, 45% were found to have a structural abnormality. The most common surgical indications were laryngotracheal stenosis (37%), followed by recurrent respiratory papillomatosis (18%) and UVFP (17%). Nearly 55% of patients presenting to a tertiary referral laryngology practice did not have an identifiable structural abnormality in the larynx on direct or indirect examination. The distribution of ICD-9 codes requiring surgical intervention was disparate from that seen in clinic. Application of the Pareto principle may improve resource allocation in laryngology, but these initial results require confirmation across multiple institutions.
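The Pareto-style rank ordering used here is straightforward to reproduce; a minimal sketch with hypothetical diagnosis counts (the dictionary values below are made up for illustration, not taken from the study):

```python
def pareto_table(counts):
    """Rank diagnoses by frequency and report each one's share and the
    cumulative share of all visits (Pareto analysis)."""
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(c for _, c in ranked)
    cumulative, rows = 0, []
    for name, c in ranked:
        cumulative += c
        rows.append((name, c, c / total, cumulative / total))
    return rows

# e.g. pareto_table({"dysphonia": 1320, "UVFP": 290, "cough": 225, "other": 1388})
```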
Derivation of Hunt equation for suspension distribution using Shannon entropy theory
NASA Astrophysics Data System (ADS)
Kundu, Snehasis
2017-12-01
In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed which includes several other specific CDF models reported in the literature. This general form of cumulative distribution function also helps to derive the Rouse equation. The entropy based approach helps to estimate model parameters using suspension data of sediment concentration, which shows the advantage of using entropy theory. Finally, model parameters in the entropy based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.
Decision analysis with cumulative prospect theory.
Bayoumi, A M; Redelmeier, D A
2000-01-01
Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
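To make the probability-transformation step concrete, here is a minimal sketch using the common Tversky-Kahneman weighting function; the paper's specific transformation and parameters are not reproduced, so the gamma value and the ranking convention below are illustrative assumptions.

```python
import numpy as np

def weight(p, gamma=0.61):
    """Illustrative inverse-S probability weighting function."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def decision_weights(outcome_probs, gamma=0.61):
    """Cumulative prospect theory decision weights for outcomes ordered
    from best to worst: differences of weighted cumulative probabilities
    replace the raw probabilities in the expected-value calculation."""
    cumulative = np.cumsum(outcome_probs)
    w = weight(cumulative, gamma)
    return np.diff(np.concatenate(([0.0], w)))

# e.g. decision_weights([0.2, 0.5, 0.3]) still sums to 1 but reweights the tails
```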
NASA Technical Reports Server (NTRS)
Hess, Paul C.; Parmentier, E. M.
1995-01-01
Crystallization of the lunar magma ocean creates a chemically stratified Moon consisting of an anorthositic crust and magma ocean cumulates overlying the primitive lunar interior. Within the magma ocean cumulates the last liquids to crystallize form dense, ilmenite-rich cumulates that contain high concentrations of incompatible radioactive elements. The underlying olivine-orthopyroxene cumulates are also stratified with later crystallized, denser, more Fe-rich compositions at the top. This paper explores the chemical and thermal consequences of an internal evolution model accounting for the possible role of these sources of chemical buoyancy. Rayleigh-Taylor instability causes the dense ilmenite-rich cumulate layer and underlying Fe-rich cumulates to sink toward the center of the Moon, forming a dense lunar core. After this overturn, radioactive heating within the ilmenite-rich cumulate core heats the overlying mantle, causing it to melt. In this model, the source region for high-TiO2 mare basalts is a convectively mixed layer above the core-mantle boundary which would contain small and variable amounts of admixed ilmenite and KREEP. This deep high-pressure melting, as required for mare basalts, occurs after a reasonable time interval to explain the onset of mare basalt volcanism if the content of radioactive elements in the core and the chemical density gradients above the core are sufficiently high but within a range of values that might have been present in the Moon. Regardless of details implied by particular model parameters, gravitational overturn driven by the high density of magma ocean Fe-rich cumulates should concentrate high-TiO2 mare basalt sources, and probably a significant fraction of radioactive heating, toward the center of the Moon. This will have important implications for both the thermal evolution of the Moon and for mare basalt genesis.
N -tag probability law of the symmetric exclusion process
NASA Astrophysics Data System (ADS)
Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb
2018-06-01
The symmetric exclusion process (SEP), in which particles hop symmetrically on a discrete line with hard-core constraints, is a paradigmatic model of subdiffusion in confined systems. This anomalous behavior is a direct consequence of strong spatial correlations induced by the requirement that the particles cannot overtake each other. Even if this fact has been recognized qualitatively for a long time, up to now there has been no full quantitative determination of these correlations. Here we study the joint probability distribution of an arbitrary number of tagged particles in the SEP. We determine analytically its large-time limit for an arbitrary density of particles, and its full dynamics in the high-density limit. In this limit, we obtain the time-dependent large deviation function of the problem and unveil a universal scaling form shared by the cumulants.
Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.
2011-01-01
Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.
A review of contemporary methods for the presentation of scientific uncertainty.
Makinson, K A; Hamby, D M; Edwards, J A
2012-12-01
Graphic methods for displaying uncertainty are often the most concise and informative way to communicate abstract concepts. Presentation methods currently in use for the display and interpretation of scientific uncertainty are reviewed. Numerous subjective and objective uncertainty display methods are presented, including qualitative assessments, node and arrow diagrams, standard statistical methods, box-and-whisker plots, robustness and opportunity functions, contribution indexes, probability density functions, cumulative distribution functions, and graphical likelihood functions.
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
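The transformation step described above is essentially inverse-CDF sampling; a minimal sketch follows (the exponential example is illustrative and not taken from the report).

```python
import numpy as np

def random_load_history(inverse_cdf, n, seed=0):
    """Generate a discrete random load history with a prescribed
    distribution by pushing uniform random numbers through the
    inverse cumulative distribution function."""
    rng = np.random.default_rng(seed)
    return inverse_cdf(rng.random(n))

# e.g. an exponential load history with unit mean:
# loads = random_load_history(lambda u: -np.log(1.0 - u), 1000)
```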
Singh, Deependra; Pitkäniemi, Janne; Malila, Nea; Anttila, Ahti
2016-09-01
Mammography has been found effective as the primary screening test for breast cancer. We estimated the cumulative probability of false positive screening test results with respect to symptom history reported at screen. A historical prospective cohort study was done using individual screening data from 413,611 women aged 50-69 years with 2,627,256 invitations for mammography screening between 1992 and 2012 in Finland. Symptoms (lump, retraction, and secretion) were reported at 56,805 visits, and 48,873 visits resulted in a false positive mammography result. Generalized linear models were used to estimate the probability of at least one false positive test and true positive at screening visits. The estimates were compared among women with and without symptoms history. The estimated cumulative probabilities were 18 and 6 % for false positive and true positive results, respectively. In women with a history of a lump, the cumulative probabilities of false positive test and true positive were 45 and 16 %, respectively, compared to 17 and 5 % with no reported lump. In women with a history of any given symptom, the cumulative probabilities of false positive test and true positive were 38 and 13 %, respectively. Likewise, women with a history of a 'lump and retraction' had the cumulative false positive probability of 56 %. The study showed higher cumulative risk of false positive tests and more cancers detected in women who reported symptoms compared to women who did not report symptoms at screen. The risk varies substantially, depending on symptom types and characteristics. Information on breast symptoms influences the balance of absolute benefits and harms of screening.
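For intuition on how per-visit results compound into the cumulative probabilities reported above, a simplified sketch assuming a constant, independent per-visit false positive risk (the study itself estimated these probabilities with generalized linear models, so the 2% figure below is hypothetical):

```python
def cumulative_risk(per_visit_probability, n_visits):
    """Cumulative probability of at least one positive result over
    repeated screening visits, assuming independence across visits."""
    return 1.0 - (1.0 - per_visit_probability) ** n_visits

# e.g. a hypothetical 2% per-visit false positive risk over 10 biennial screens
# cumulative_risk(0.02, 10)  # ~0.18, in the same ballpark as the 18% above
```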
McCauley, Erin J
2017-12-01
To estimate the cumulative probability (c) of arrest by age 28 years in the United States by disability status, race/ethnicity, and gender. I estimated cumulative probabilities through birth cohort life tables with data from the National Longitudinal Survey of Youth, 1997. Estimates demonstrated that those with disabilities have a higher cumulative probability of arrest (c = 42.65) than those without (c = 29.68). The risk was disproportionately spread across races/ethnicities, with Blacks with disabilities experiencing the highest cumulative probability of arrest (c = 55.17) and Whites without disabilities experiencing the lowest (c = 27.55). Racial/ethnic differences existed by gender as well. There was a similar distribution of disability types across race/ethnicity, suggesting that the racial/ethnic differences in arrest may stem from racial/ethnic inequalities as opposed to differential distribution of disability types. The experience of arrest for those with disabilities was higher than expected. Police officers should understand how disabilities may affect compliance and other behaviors, and likewise how implicit bias and structural racism may affect reactions and actions of officers and the systems they work within in ways that create inequities.
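The birth cohort life-table calculation reduces to compounding age-specific conditional probabilities of a first arrest; a minimal sketch with hypothetical hazards (the 5% annual figure is illustrative, not an estimate from the study):

```python
def cumulative_probability_of_event(age_specific_hazards):
    """Life-table cumulative probability of experiencing the event by the
    last age considered: one minus the product of the per-age probabilities
    of remaining event-free."""
    survival = 1.0
    for hazard in age_specific_hazards:
        survival *= (1.0 - hazard)
    return 1.0 - survival

# e.g. eleven ages (18 through 28) with a hypothetical 5% annual hazard
# cumulative_probability_of_event([0.05] * 11)  # ~0.43
```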
Transport of polar and non-polar solvents through a carbon nanotube
NASA Astrophysics Data System (ADS)
Chopra, Manish; Phatak, Rohan; Choudhury, N.
2013-02-01
Transport of water through narrow pores is important in chemistry, biology and material science. In this work, we employ atomistic molecular dynamics (MD) simulations to carry out a comparative study of the transport of a polar and a non-polar solvent through a carbon nanotube (CNT). The flow of water as well as methane through the nanotube is estimated in terms of number of translocation events and is compared. Transport events occurred in bursts of unidirectional translocation pulses in both the cases. Probability density and cumulative probability distribution functions are obtained for the translocated particles and particles coming out from same side with respect to the time they spent in the nano channel.
Probability of stress-corrosion fracture under random loading
NASA Technical Reports Server (NTRS)
Yang, J. N.
1974-01-01
The mathematical formulation is based on the cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.
Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels
NASA Astrophysics Data System (ADS)
Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan
2017-12-01
This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with the amplify-and-forward relaying scheme. The RF channel undergoes Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. The mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. According to the end-to-end statistical characteristics, new analytical expressions of the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer's G function. Evaluations and simulations of the system performance are provided, and the aperture averaging effect is discussed as well.
Rapidity window dependences of higher order cumulants and diffusion master equation
NASA Astrophysics Data System (ADS)
Kitazawa, Masakiyo
2015-10-01
We study the rapidity window dependences of higher order cumulants of conserved charges observed in relativistic heavy ion collisions. The time evolution and the rapidity window dependence of the non-Gaussian fluctuations are described by the diffusion master equation. Analytic formulas for the time evolution of cumulants in a rapidity window are obtained for arbitrary initial conditions. We discuss that the rapidity window dependences of the non-Gaussian cumulants have characteristic structures reflecting the non-equilibrium property of fluctuations, which can be observed in relativistic heavy ion collisions with the present detectors. It is argued that various information on the thermal and transport properties of the hot medium can be revealed experimentally by the study of the rapidity window dependences, especially by the combined use of the higher order cumulants. Formulas of higher order cumulants for a probability distribution composed of sub-probabilities, which are useful for various studies of non-Gaussian cumulants, are also presented.
NASA Astrophysics Data System (ADS)
Liu, Y.; Weisberg, R. H.
2017-12-01
The Lagrangian separation distance between the endpoints of simulated and observed drifter trajectories is often used to assess the performance of numerical particle trajectory models. However, the separation distance fails to indicate relative model performance in weak and strong current regions, such as a continental shelf and its adjacent deep ocean. A skill score is proposed based on the cumulative Lagrangian separation distances normalized by the associated cumulative trajectory lengths. The new metric correctly indicates the relative performance of the Global HYCOM in simulating the strong currents of the Gulf of Mexico Loop Current and the weaker currents of the West Florida Shelf in the eastern Gulf of Mexico. In contrast, the Lagrangian separation distance alone gives a misleading result. Also, the observed drifter position series can be used to reinitialize the trajectory model and evaluate its performance along the observed trajectory, not just at the drifter end position. The proposed dimensionless skill score is particularly useful when the number of drifter trajectories is limited and neither a conventional Eulerian-based velocity nor a Lagrangian-based probability density function may be estimated.
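A hedged sketch of the proposed skill score (the exact normalization and no-skill threshold follow one common convention and are assumptions here): cumulative separation distance divided by cumulative observed trajectory length, mapped so that 1 is perfect and values at or beyond the tolerance give zero skill.

```python
import numpy as np

def trajectory_skill_score(separations, observed_lengths, tolerance=1.0):
    """Dimensionless Lagrangian skill score: c is the cumulative separation
    between simulated and observed drifter positions normalized by the
    cumulative length of the observed trajectory; s = 1 - c / tolerance,
    floored at zero when the model shows no skill."""
    c = np.sum(separations) / np.sum(observed_lengths)
    return max(1.0 - c / tolerance, 0.0)

# e.g. separations and along-track segment lengths (km) at successive fixes
# s = trajectory_skill_score([2.0, 5.0, 9.0], [10.0, 12.0, 11.0])
```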
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2014 CFR
2014-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
40 CFR Appendix C to Part 191 - Guidance for Implementation of Subpart B
Code of Federal Regulations, 2011 CFR
2011-07-01
... that the remaining probability distribution of cumulative releases would not be significantly changed... with § 191.13 into a “complementary cumulative distribution function” that indicates the probability of... distribution function for each disposal system considered. The Agency assumes that a disposal system can be...
Probability of stress-corrosion fracture under random loading.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
Resonances in the cumulative reaction probability for a model electronically nonadiabatic reaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, J.; Bowman, J.M.
1996-05-01
The cumulative reaction probability, flux-flux correlation function, and rate constant are calculated for a model, two-state, electronically nonadiabatic reaction, given by Shin and Light [S. Shin and J. C. Light, J. Chem. Phys. 101, 2836 (1994)]. We apply straightforward generalizations of the flux matrix/absorbing boundary condition approach of Miller and co-workers to obtain these quantities. The upper adiabatic electronic potential supports bound states, and these manifest themselves as "recrossing" resonances in the cumulative reaction probability, at total energies above the barrier to reaction on the lower adiabatic potential. At energies below the barrier, the cumulative reaction probability for the coupled system is shifted to higher energies relative to the one obtained for the ground state potential. This is due to the effect of an additional effective barrier caused by the nuclear kinetic operator acting on the ground state, adiabatic electronic wave function, as discussed earlier by Shin and Light. Calculations are reported for five sets of electronically nonadiabatic coupling parameters. © 1996 American Institute of Physics.
A new approach to the problem of bulk-mediated surface diffusion.
Berezhkovskii, Alexander M; Dagdug, Leonardo; Bezrukov, Sergey M
2015-08-28
This paper is devoted to bulk-mediated surface diffusion of a particle which can diffuse both on a flat surface and in the bulk layer above the surface. It is assumed that the particle is on the surface initially (at t = 0) and at time t, while in between it may escape from the surface and come back any number of times. We propose a new approach to the problem, which reduces its solution to that of a two-state problem of the particle transitions between the surface and the bulk layer, focusing on the cumulative residence times spent by the particle in the two states. These times are random variables, the sum of which is equal to the total observation time t. The advantage of the proposed approach is that it allows for a simple exact analytical solution for the double Laplace transform of the conditional probability density of the cumulative residence time spent on the surface by the particle observed for time t. This solution is used to find the Laplace transform of the particle mean square displacement and to analyze the peculiarities of its time behavior over the entire range of time. We also establish a relation between the double Laplace transform of the conditional probability density and the Fourier-Laplace transform of the particle propagator over the surface. The proposed approach treats the cases of both finite and infinite bulk layer thicknesses (where bulk-mediated surface diffusion is normal and anomalous at asymptotically long times, respectively) on equal footing.
NASA Astrophysics Data System (ADS)
Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.
2017-10-01
We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p +Pb and Pb + Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.
Cumulative probability of neodymium: YAG laser posterior capsulotomy after phacoemulsification.
Ando, Hiroshi; Ando, Nobuyo; Oshika, Tetsuro
2003-11-01
To retrospectively analyze the cumulative probability of neodymium:YAG (Nd:YAG) laser posterior capsulotomy after phacoemulsification and to evaluate the risk factors. Ando Eye Clinic, Kanagawa, Japan. In 3997 eyes that had phacoemulsification with an intact continuous curvilinear capsulorhexis, the cumulative probability of posterior capsulotomy was computed by Kaplan-Meier survival analysis and risk factors were analyzed using the Cox proportional hazards regression model. The variables tested were sex; age; type of cataract; preoperative best corrected visual acuity (BCVA); presence of diabetes mellitus, diabetic retinopathy, or retinitis pigmentosa; type of intraocular lens (IOL); and the year the operation was performed. The IOLs were categorized as 3-piece poly(methyl methacrylate) (PMMA), 1-piece PMMA, 3-piece silicone, and acrylic foldable. The cumulative probability of capsulotomy after cataract surgery was 1.95%, 18.50%, and 32.70% at 1, 3, and 5 years, respectively. Positive risk factors included a better preoperative BCVA (P =.0005; risk ratio [RR], 1.7; 95% confidence interval [CI], 1.3-2.5) and the presence of retinitis pigmentosa (P<.0001; RR, 6.6; 95% CI, 3.7-11.6). Women had a significantly greater probability of Nd:YAG laser posterior capsulotomy (P =.016; RR, 1.4; 95% CI, 1.1-1.8). The type of IOL was significantly related to the probability of Nd:YAG laser capsulotomy, with the foldable acrylic IOL having a significantly lower probability of capsulotomy. The 1-piece PMMA IOL had a significantly higher risk than 3-piece PMMA and 3-piece silicone IOLs. The probability of Nd:YAG laser capsulotomy was higher in women, in eyes with a better preoperative BCVA, and in patients with retinitis pigmentosa. The foldable acrylic IOL had a significantly lower probability of capsulotomy.
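A minimal sketch of the Kaplan-Meier step underlying the reported cumulative probabilities (the Cox regression for risk factors is not reproduced); times are follow-up durations per eye and events mark whether a capsulotomy occurred before censoring.

```python
import numpy as np

def kaplan_meier_cumulative_incidence(times, events):
    """Kaplan-Meier estimate of the cumulative probability of Nd:YAG
    capsulotomy: at each event time, multiply the running survival by
    (1 - events / number at risk), and report 1 - survival."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = len(times)
    survival, out_times, out_cum = 1.0, [], []
    for t in np.unique(times):
        mask = times == t
        d = events[mask].sum()                 # capsulotomies at time t
        if d > 0:
            survival *= 1.0 - d / at_risk
            out_times.append(t)
            out_cum.append(1.0 - survival)     # cumulative probability
        at_risk -= mask.sum()                  # drop events and censored eyes
    return np.array(out_times), np.array(out_cum)
```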
Brownlow, Janeese A; Zitnik, Gerard A; McLean, Carmen P; Gehrman, Philip R
2018-05-08
There is increasing recognition that traumatic stress encountered throughout life, including those prior to military service, can put individuals at increased risk for developing Posttraumatic Stress Disorder (PTSD). The purpose of this study was to examine the association of both traumatic stress encountered during deployment, and traumatic stress over one's lifetime on probable PTSD diagnosis. Probable PTSD diagnosis was compared between military personnel deployed in Operation Iraqi Freedom/Operation Enduring Freedom (OIF/OEF; N = 21,499) and those who have recently enlisted (N = 55,814), using data obtained from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Probable PTSD diagnosis was assessed using the PTSD Checklist. The effect of exposure to multiple types (i.e. diversity) of traumatic stress and the total quantity (i.e. cumulative) of traumatic stress on probable PTSD diagnosis was also compared. Military personnel who had been deployed experienced higher rates of PTSD symptoms than new soldiers. Diversity of lifetime traumatic stress predicted probable PTSD diagnosis in both groups, whereas cumulative lifetime traumatic stress only predicted probable PTSD for those who had been deployed. For deployed soldiers, having been exposed to various types of traumatic stress during deployment predicted probable PTSD diagnosis, but cumulative deployment-related traumatic stress did not. Similarly, the total quantity of traumatic stress (i.e. cumulative lifetime traumatic stress) did not predict probable PTSD diagnosis among new soldiers. Together, traumatic stress over one's lifetime is a predictor of probable PTSD for veterans, as much as traumatic stress encountered during war. Clinicians treating military personnel with PTSD should be aware of the impact of traumatic stress beyond what occurs during war. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Boyce, L.
1992-01-01
A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.
Design for cyclic loading endurance of composites
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.
1993-01-01
The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing-type structures is described. The code performs a complete probabilistic analysis for composites, taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of cumulative distribution functions (CDFs) and probability density functions (PDFs) of the fatigue life of a wing-type composite structure under different hygrothermal environments subjected to random pressure loading. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.
Launch pad lightning protection effectiveness
NASA Technical Reports Server (NTRS)
Stahmann, James R.
1991-01-01
Using the striking distance theory that lightning leaders will strike the nearest grounded point on their last jump to earth corresponding to the striking distance, the probability of striking a point on a structure in the presence of other points can be estimated. The lightning strokes are divided into deciles having an average peak current and striking distance. The striking distances are used as radii from the points to generate windows of approach through which the leader must pass to reach a designated point. The projections of the windows on a horizontal plane as they are rotated through all possible angles of approach define an area that can be multiplied by the decile stroke density to arrive at the probability of strokes with the window average striking distance. The sum of all decile probabilities gives the cumulative probability for all strokes. The techniques can be applied to NASA-Kennedy launch pad structures to estimate the lightning protection effectiveness for the crane, gaseous oxygen vent arm, and other points. Streamers from sharp points on the structure provide protection for surfaces having large radii of curvature. The effects of nearby structures can also be estimated.
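The cumulative probability described here is an aggregation of per-decile contributions: each decile's window area, obtained from its average striking distance, is multiplied by that decile's stroke density, and the products are summed over all deciles. The short sketch below shows only that final aggregation step; the flash density and window areas are illustrative assumptions, not values from the report, and the geometric construction of the windows is not reproduced.

```python
# Schematic of the decile summation described above (illustrative numbers only;
# the window areas would come from the geometric window construction).
ground_flash_density = 8.0e-6      # strokes per m^2 per year (assumed)

# One entry per decile: fraction of strokes in the decile and the window area
# (m^2) computed for that decile's average striking distance.
deciles = [
    {"fraction": 0.10, "window_area_m2": area}
    for area in (120.0, 260.0, 410.0, 570.0, 750.0,
                 950.0, 1180.0, 1450.0, 1800.0, 2300.0)
]

# Expected strikes to the designated point: sum over deciles of
# (decile stroke density) x (window area for that decile).
strikes_per_year = sum(
    d["fraction"] * ground_flash_density * d["window_area_m2"] for d in deciles
)
print(f"expected strikes to the point: {strikes_per_year:.2e} per year")
```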
Martian Cratering 7: The Role of Impact Gardening
NASA Astrophysics Data System (ADS)
Hartmann, William K.; Anguita, Jorge; de la Casa, Miguel A.; Berman, Daniel C.; Ryan, Eileen V.
2001-01-01
Viking-era researchers concluded that impact craters of diameter D<50 m were absent on Mars, and thus impact gardening was considered negligible in establishing decameter-scale surface properties. This paper documents martian crater populations down to diameter D˜11 m and probably less on Mars, requiring a certain degree of impact gardening. Applying lunar data, we calculate cumulative gardening depth as a function of total cratering. Stratigraphic units exposed since Noachian times would have experienced tens to hundreds of meters of gardening. Early Amazonian/late Hesperian sites, such as the first three landing sites, experienced cumulative gardening on the order of 3-14 m, a conclusion that may conflict with some landing site interpretations. Martian surfaces with less than a percent or so of lunar mare crater densities have negligible impact gardening because of a probable cutoff of hypervelocity impact cratering below D˜1 m, due to Mars' atmosphere. Unlike lunar regolith, martian regolith has been affected, and fines removed, by many processes. Deflation may have been a factor in leaving widespread boulder fields and associated dune fields, observed by the first three landers. Ancient regolith provided a porous medium for water storage, subsurface transport, and massive permafrost formation. Older regolith was probably cemented by evaporites and permafrost, may contain interbedded sediments and lavas, and may have been brecciated by later impacts. Growing evidence suggests recent water mobility, and the existence of duricrust at Viking and Pathfinder sites demonstrates the cementing process. These results affect lander/rover searches for intact ancient deposits. The upper tens of meters of exposed Noachian units cannot survive today in a pristine state. Intact Noachian deposits might best be found in cliffside strata, or in recently exhumed regions. The hematite-rich areas found in Terra Meridiani by the Mars Global Surveyor are probably examples of the latter.
NASA Astrophysics Data System (ADS)
Kim, Hannah; Hong, Helen
2014-03-01
We propose an automatic method for nipple detection on 3D automated breast ultrasound (3D ABUS) images using a coronal slab-average-projection and a cumulative probability map. First, to identify coronal images that show a clear distinction between the nipple-areola region and the skin, the skewness of each coronal image is measured and the negatively skewed images are selected. Then, a coronal slab-average-projection image is reformatted from the selected images. Second, to localize the nipple-areola region, an elliptical ROI covering the nipple-areola region is detected using the Hough ellipse transform in the coronal slab-average-projection image. Finally, to separate the nipple from the areola region, 3D Otsu's thresholding is applied to the elliptical ROI and a cumulative probability map in the elliptical ROI is generated by assigning high probability to low-intensity regions. Falsely detected small components are eliminated using morphological opening and the center point of the detected nipple region is calculated. Experimental results show that our method provides a 94.4% nipple detection rate.
CUMPOIS- CUMULATIVE POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The Cumulative Poisson distribution program, CUMPOIS, is one of two programs which make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), can be used independently of one another. CUMPOIS determines the approximate cumulative binomial distribution, evaluates the cumulative distribution function (cdf) for gamma distributions with integer shape parameters, and evaluates the cdf for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. CUMPOIS calculates the probability that n or fewer events (i.e., cumulative) will occur within any unit when the expected number of events is given as lambda. Normally, this probability is calculated by a direct summation, from i=0 to n, of terms involving the exponential function, lambda, and inverse factorials. This approach, however, eventually fails due to underflow for sufficiently large values of n. Additionally, when the exponential term is moved outside of the summation for simplification purposes, there is a risk that the terms remaining within the summation, and the summation itself, will overflow for certain values of i and lambda. CUMPOIS eliminates these possibilities by multiplying an additional exponential factor into the summation terms and the partial sum whenever overflow/underflow situations threaten. The reciprocal of this term is then multiplied into the completed sum giving the cumulative probability. The CUMPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting lambda and n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMPOIS was developed in 1988.
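The overflow/underflow-safe summation can be sketched in a few lines. The Python below is a minimal analogue of the idea described above, not the CUMPOIS source; it accumulates each term in log space rather than applying CUMPOIS's on-demand rescaling factor.

```python
import math

def cumulative_poisson(n, lam):
    """Probability of observing n or fewer events when the expected count is lam.

    Each term exp(-lam) * lam**i / i! is carried in log space, so neither the
    individual terms nor the partial sum can overflow for large n or lam.
    """
    total = 0.0
    log_term = -lam                     # log of the i = 0 term
    for i in range(n + 1):
        total += math.exp(log_term)
        log_term += math.log(lam) - math.log(i + 1)   # next term's log
    return total

# Example: P(N <= 3) for lam = 2.5 is about 0.758.
print(cumulative_poisson(3, 2.5))
```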
Stochastic Models of Emerging Infectious Disease Transmission on Adaptive Random Networks
Pipatsart, Navavat; Triampo, Wannapong
2017-01-01
We presented adaptive random network models to describe human behavioral change during epidemics and performed stochastic simulations of SIR (susceptible-infectious-recovered) epidemic models on adaptive random networks. The interplay between infectious disease dynamics and network adaptation dynamics was investigated in regard to the disease transmission and the cumulative number of infection cases. We found that the cumulative number of cases decreased with increasing network adaptation probability but increased with increasing disease transmission probability. It was found that the topological changes of the adaptive random networks were able to reduce the cumulative number of infections and also to delay the epidemic peak. Our results also suggest the existence of a critical value for the ratio of disease transmission and adaptation probabilities below which the epidemic cannot occur. PMID:29075314
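The sketch below illustrates the kind of model described above: a stochastic SIR simulation on a random network in which susceptible nodes can rewire links away from infectious neighbours with an adaptation probability. It is a minimal illustration with hypothetical parameter names and update rules, not the authors' code.

```python
import random

def sir_adaptive(n=500, avg_deg=6, p_trans=0.05, p_adapt=0.02,
                 p_recover=0.1, steps=500, seed=1):
    """Stochastic SIR on an adaptive random network (a sketch, not the paper's model).

    At each step every S-I edge transmits with probability p_trans, or the
    susceptible end rewires the link away from its infectious neighbour with
    probability p_adapt; infectious nodes recover with probability p_recover.
    Returns the cumulative number of ever-infected nodes.
    """
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    while sum(len(s) for s in adj.values()) < n * avg_deg:   # random initial edges
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b); adj[b].add(a)
    state = ["S"] * n
    state[rng.randrange(n)] = "I"
    cumulative = 1
    for _ in range(steps):
        infectious = [i for i in range(n) if state[i] == "I"]
        if not infectious:
            break
        for i in infectious:
            for j in list(adj[i]):
                if state[j] != "S":
                    continue
                if rng.random() < p_trans:          # transmission along the S-I edge
                    state[j] = "I"
                    cumulative += 1
                elif rng.random() < p_adapt:        # behavioural adaptation: rewire
                    adj[i].discard(j); adj[j].discard(i)
                    k = rng.randrange(n)
                    if k != j and state[k] != "I":
                        adj[j].add(k); adj[k].add(j)
            if rng.random() < p_recover:
                state[i] = "R"
    return cumulative

print(sir_adaptive())
```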
Cumulative Poisson Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.; Nolty, Robert
1990-01-01
Overflow and underflow in sums prevented. Cumulative Poisson Distribution Program, CUMPOIS, one of two computer programs that make calculations involving cumulative Poisson distributions. Both programs, CUMPOIS (NPO-17714) and NEWTPOIS (NPO-17715), used independently of one another. CUMPOIS determines cumulative Poisson distribution, used to evaluate cumulative distribution function (cdf) for gamma distributions with integer shape parameters and cdf for chi-square (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Written in C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Rourke, Patrick Francis
The purpose of this report is to provide the reader with an understanding of how a Monte Carlo neutron transport code was written, developed, and evolved to calculate the probability distribution functions (PDFs) and their moments for the neutron number at a final time as well as the cumulative fission number, along with introducing several basic Monte Carlo concepts.
Estimating risk and rate levels, ratios and differences in case-control studies.
King, Gary; Zeng, Langche
2002-05-30
Classic (or 'cumulative') case-control sampling designs do not admit inferences about quantities of interest other than risk ratios, and then only by making the rare events assumption. Probabilities, risk differences and other quantities cannot be computed without knowledge of the population incidence fraction. Similarly, density (or 'risk set') case-control sampling designs do not allow inferences about quantities other than the rate ratio. Rates, rate differences, cumulative rates, risks, and other quantities cannot be estimated unless auxiliary information about the underlying cohort such as the number of controls in each full risk set is available. Most scholars who have considered the issue recommend reporting more than just risk and rate ratios, but auxiliary population information needed to do this is not usually available. We address this problem by developing methods that allow valid inferences about all relevant quantities of interest from either type of case-control study when completely ignorant of or only partially knowledgeable about relevant auxiliary population information.
Hydrodynamic Flow Fluctuations in √sNN = 5.02 TeV PbPb Collisions
NASA Astrophysics Data System (ADS)
Castle, James R.
The collective, anisotropic expansion of the medium created in ultrarelativistic heavy-ion collisions, known as flow, is characterized through a Fourier expansion of the final-state azimuthal particle density. In the Fourier expansion, flow harmonic coefficients vn correspond to shape components in the final-state particle density, which are a consequence of similar spatial anisotropies in the initial-state transverse energy density of a collision. Flow harmonic fluctuations are studied for PbPb collisions at √sNN = 5.02 TeV using the CMS detector at the CERN LHC. Flow harmonic probability distributions p(vn) are obtained using particles with 0.3 < pT < 3.0 GeV/c and |η| < 1.0 by removing finite-multiplicity resolution effects from the observed azimuthal particle density through an unfolding procedure. Cumulant elliptic flow harmonics (n = 2) are determined from the moments of the unfolded p(v2) distributions and used to construct observables in 5% wide centrality bins up to 60% that relate to the initial-state spatial anisotropy. Hydrodynamic models predict that fluctuations in the initial-state transverse energy density will lead to a non-Gaussian component in the elliptic flow probability distributions that manifests as a negative skewness. A statistically significant negative skewness is observed for all centrality bins as evidenced by a splitting between the higher-order cumulant elliptic flow harmonics. The unfolded p(v2) distributions are transformed assuming a linear relationship between the initial-state spatial anisotropy and final-state flow and are fitted with elliptic power law and Bessel-Gaussian parametrizations to infer information on the nature of initial-state fluctuations. The elliptic power law parametrization is found to provide a more accurate description of the fluctuations than the Bessel-Gaussian parametrization. In addition, the event-shape engineering technique, where events are further divided into classes based on an observed ellipticity, is used to study fluctuation-driven differences in the initial-state spatial anisotropy for a given collision centrality that would otherwise be destroyed by event-averaging techniques. Correlations between the first and second moments of p(vn) distributions and event ellipticity are measured for harmonic orders n = 2-4 by coupling event-shape engineering to the unfolding technique.
Multivariate η-μ fading distribution with arbitrary correlation model
NASA Astrophysics Data System (ADS)
Ghareeb, Ibrahim; Atiani, Amani
2018-03-01
An extensive analysis for the multivariate η-μ distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed η-μ power random variables are derived. Also, this paper provides an exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow, frequency-nonselective, arbitrarily correlated and not necessarily identically distributed η-μ fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with a post-detection diversity reception system over arbitrarily correlated η-μ fading channels with not necessarily identical fading parameters is determined by using the MGF-based approach. The effect of fading correlation between diversity branches, fading severity parameters and diversity level is studied.
Subramanian, Sundarraman
2008-01-01
This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
Analysis of computed tomography density of liver before and after amiodarone administration.
Matsuda, Masazumi; Otaka, Aoi; Tozawa, Tomoki; Asano, Tomoyuki; Ishiyama, Koichi; Hashimoto, Manabu
2018-05-01
To evaluate changes in the CT density of the liver before and after amiodarone administration. Twenty-five patients underwent non-enhanced CT including the liver before and after amiodarone administration. We set regions of interest (ROIs) at liver segment S8, the spleen, and the paraspinal muscle, calculated the average CT density in these ROIs, and then compared CT density between the liver and the other organs. Statistical differences between the CT density of the liver and the various ratios before and after administration were determined, along with correlations between the cumulative dose of amiodarone and liver density after administration, density change of the liver, and the various ratios after administration. Liver density, liver-to-spleen ratio, and liver-to-paraspinal muscle ratio differed significantly before and after amiodarone administration. No significant correlations were found between cumulative doses of amiodarone and liver density after administration, density change of the liver, or the various ratios after administration. CT density of the liver after amiodarone administration was significantly higher than that before administration. No correlations were identified between the cumulative dose of amiodarone and either liver density after administration or density change of the liver. Amiodarone usage should be checked when radiologists identify high density of the liver on CT.
NASA Astrophysics Data System (ADS)
Reimberg, Paulo; Bernardeau, Francis
2018-01-01
We present a formalism based on the large deviation principle (LDP) applied to cosmological density fields, and more specifically to the arbitrary functional of density profiles, and we apply it to the derivation of the cumulant generating function and one-point probability distribution function (PDF) of the aperture mass (Map ), a common observable for cosmic shear observations. We show that the LDP can indeed be used in practice for a much larger family of observables than previously envisioned, such as those built from continuous and nonlinear functionals of density profiles. Taking advantage of this formalism, we can extend previous results, which were based on crude definitions of the aperture mass, with top-hat windows and the use of the reduced shear approximation (replacing the reduced shear with the shear itself). We were precisely able to quantify how this latter approximation affects the Map statistical properties. In particular, we derive the corrective term for the skewness of the Map and reconstruct its one-point PDF.
An empirical analysis of the Ebola outbreak in West Africa
NASA Astrophysics Data System (ADS)
Khaleque, Abdul; Sen, Parongama
2017-02-01
The data for the Ebola outbreak that occurred in 2014-2016 in three countries of West Africa are analysed within a common framework. The analysis is made using the results of an agent-based Susceptible-Infected-Removed (SIR) model on a Euclidean network, where nodes at a distance l are connected with probability P(l) ∝ l^{-δ}, δ determining the range of the interaction, in addition to nearest neighbors. The cumulative (total) density of the infected population here follows a closed functional form whose parameters depend on δ and the infection probability q. This form is seen to fit well with the data. Using the best fitting parameters, the time at which the peak is reached is estimated and is shown to be consistent with the data. We also show that in the Euclidean model, one can choose δ and q values which reproduce the data for the three countries qualitatively. These choices are correlated with population density, control schemes and other factors. Comparing the real data and the results from the model, one can also estimate the size of the actual population susceptible to the disease. After rescaling the real data, a reasonably good quantitative agreement with the simulation results is obtained.
Long-term consistency in spatial patterns of primate seed dispersal.
Heymann, Eckhard W; Culot, Laurence; Knogge, Christoph; Noriega Piña, Tony Enrique; Tirado Herrera, Emérita R; Klapproth, Matthias; Zinner, Dietmar
2017-03-01
Seed dispersal is a key ecological process in tropical forests, with effects on various levels ranging from plant reproductive success to the carbon storage potential of tropical rainforests. On a local and landscape scale, spatial patterns of seed dispersal create the template for the recruitment process and thus influence the population dynamics of plant species. The strength of this influence will depend on the long-term consistency of spatial patterns of seed dispersal. We examined the long-term consistency of spatial patterns of seed dispersal with spatially explicit data on seed dispersal by two neotropical primate species, Leontocebus nigrifrons and Saguinus mystax (Callitrichidae), collected during four independent studies between 1994 and 2013. Using distributions of dispersal probability over distances independent of plant species, cumulative dispersal distances, and kernel density estimates, we show that spatial patterns of seed dispersal are highly consistent over time. For a specific plant species, the legume Parkia panurensis , the convergence of cumulative distributions at a distance of 300 m, and the high probability of dispersal within 100 m from source trees coincide with the dimension of the spatial-genetic structure on the embryo/juvenile (300 m) and adult stage (100 m), respectively, of this plant species. Our results are the first demonstration of long-term consistency of spatial patterns of seed dispersal created by tropical frugivores. Such consistency may translate into idiosyncratic patterns of regeneration.
Newton/Poisson-Distribution Program
NASA Technical Reports Server (NTRS)
Bowerman, Paul N.; Scheuer, Ernest M.
1990-01-01
NEWTPOIS, one of two computer programs making calculations involving cumulative Poisson distributions. NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714) used independently of one another. NEWTPOIS determines Poisson parameter for given cumulative probability, from which one obtains percentiles for gamma distributions with integer shape parameters and percentiles for chi-square (χ²) distributions with even degrees of freedom. Used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. Program written in C.
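The inverse problem that NEWTPOIS addresses, finding the Poisson mean that yields a given cumulative probability, can be sketched with a Newton iteration. The Python below is only an illustration of that idea, not the NEWTPOIS source; it uses the identity that the derivative of the Poisson cdf with respect to the mean equals minus the probability mass at n.

```python
import math

def poisson_cdf(n, lam):
    """P(N <= n) for a Poisson variable with mean lam, summed in log space."""
    total, log_term = 0.0, -lam
    for i in range(n + 1):
        total += math.exp(log_term)
        log_term += math.log(lam) - math.log(i + 1)
    return total

def poisson_mean_for_cdf(n, p, tol=1e-10, max_iter=100):
    """Newton iteration for the mean lam such that P(N <= n; lam) = p.

    d/dlam P(N <= n; lam) = -pmf(n; lam), so each step is lam <- lam + (cdf - p)/pmf.
    """
    lam = n + 1.0                      # starting near the answer keeps Newton stable
    for _ in range(max_iter):
        pmf = math.exp(-lam + n * math.log(lam) - math.lgamma(n + 1))
        step = (poisson_cdf(n, lam) - p) / pmf
        lam = max(lam + step, 1e-12)   # keep the iterate positive
        if abs(step) < tol:
            break
    return lam

# Example: the mean lam with P(N <= 5; lam) = 0.90 is roughly 3.15.
print(poisson_mean_for_cdf(5, 0.90))
```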
Divergence of perturbation theory in large scale structures
NASA Astrophysics Data System (ADS)
Pajer, Enrico; van der Woude, Drian
2018-05-01
We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.
Seasonal and cumulative loblolly pine development under two stand density and fertility levels
James D. Haywood
1992-01-01
An 8-year-old loblolly pine (Pinus taeda L.) stand was subjected to two cultural treatments for examination of seasonal and cumulative pine development. In the first treatment, pine density was either reduced by removal cutting to about 302 trees per acre, at a 12- by 12-ft spacing, or left uncut with an original density of 1,210 trees per acre at a 6- by 6-...
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
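A minimal sketch of the recommended fitting strategy, non-linear least squares on the cumulative distribution of retention times, is given below. The data, the choice of a lognormal model, and the parameter names are illustrative assumptions, not values or code from the study; maximum likelihood on the interval-censored counts would be the other estimation route discussed above.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

# Made-up interval-censored retention data: counts of propagules recovered in
# each sampling interval (hours).
interval_edges = np.array([0, 1, 2, 4, 8, 12, 24], dtype=float)
counts = np.array([5, 40, 160, 180, 90, 25], dtype=float)

# Empirical cumulative probability of retrieval at each interval upper bound.
cum_prob = np.cumsum(counts) / counts.sum()
upper_bounds = interval_edges[1:]

def lognorm_cdf(t, sigma, scale):
    """Lognormal cdf parametrized as in scipy.stats.lognorm (loc fixed at 0)."""
    return stats.lognorm.cdf(t, s=sigma, scale=scale)

# Non-linear least squares fit of the model cdf to the empirical cumulative probabilities.
(sigma_hat, scale_hat), _ = curve_fit(lognorm_cdf, upper_bounds, cum_prob, p0=[1.0, 4.0])
print(f"fitted lognormal: sigma = {sigma_hat:.2f}, median = {scale_hat:.2f} h")
```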
Probability and Statistics in Sensor Performance Modeling
2010-12-01
Describes the Environmental Awareness for Sensor and Emitter Employment (EASEE) software, numerical issues in its implementation, and statistical analyses for measuring sensor performance, including cumulative and complementary cumulative distribution functions, in a decision-support tool (DST) context.
Ruggeri, Annalisa; Labopin, Myriam; Sormani, Maria Pia; Sanz, Guillermo; Sanz, Jaime; Volt, Fernanda; Michel, Gerard; Locatelli, Franco; Diaz De Heredia, Cristina; O'Brien, Tracey; Arcese, William; Iori, Anna Paola; Querol, Sergi; Kogler, Gesine; Lecchi, Lucilla; Pouthier, Fabienne; Garnier, Federico; Navarrete, Cristina; Baudoux, Etienne; Fernandes, Juliana; Kenzey, Chantal; Eapen, Mary; Gluckman, Eliane; Rocha, Vanderson; Saccardi, Riccardo
2014-09-01
Umbilical cord blood transplant recipients are exposed to an increased risk of graft failure, a complication leading to a higher rate of transplant-related mortality. The decision and timing to offer a second transplant after graft failure is challenging. With the aim of addressing this issue, we analyzed engraftment kinetics and outcomes of 1268 patients (73% children) with acute leukemia (64% acute lymphoblastic leukemia, 36% acute myeloid leukemia) in remission who underwent single-unit umbilical cord blood transplantation after a myeloablative conditioning regimen. The median follow-up was 31 months. The overall survival rate at 3 years was 47%; the 100-day cumulative incidence of transplant-related mortality was 16%. Longer time to engraftment was associated with increased transplant-related mortality and shorter overall survival. The cumulative incidence of neutrophil engraftment at day 60 was 86%, while the median time to achieve engraftment was 24 days. Probability density analysis showed that the likelihood of engraftment after umbilical cord blood transplantation increased after day 10, peaked on day 21 and slowly decreased to 21% by day 31. Beyond day 31, the probability of engraftment dropped rapidly, and the residual probability of engrafting after day 42 was 5%. Graft failure was reported in 166 patients, and 66 of them received a second graft (allogeneic, n=45). Rescue actions, such as the search for another graft, should be considered starting after day 21. A diagnosis of graft failure can be established in patients who have not achieved neutrophil recovery by day 42. Moreover, subsequent transplants should not be postponed after day 42. Copyright© Ferrata Storti Foundation.
Cumulants, free cumulants and half-shuffles
Ebrahimi-Fard, Kurusch; Patras, Frédéric
2015-01-01
Free cumulants were introduced as the proper analogue of classical cumulants in the theory of free probability. There is a mix of similarities and differences, when one considers the two families of cumulants. Whereas the combinatorics of classical cumulants is well expressed in terms of set partitions, that of free cumulants is described and often introduced in terms of non-crossing set partitions. The formal series approach to classical and free cumulants also largely differs. The purpose of this study is to put forward a different approach to these phenomena. Namely, we show that cumulants, whether classical or free, can be understood in terms of the algebra and combinatorics underlying commutative as well as non-commutative (half-)shuffles and (half-) unshuffles. As a corollary, cumulants and free cumulants can be characterized through linear fixed point equations. We study the exponential solutions of these linear fixed point equations, which display well the commutative, respectively non-commutative, character of classical and free cumulants. PMID:27547078
Reibnegger, Gilbert; Caluba, Hans-Christian; Ithaler, Daniel; Manhal, Simone; Neges, Heide Maria; Smolle, Josef
2011-08-01
Admission to medical studies in Austria since academic year 2005-2006 has been regulated by admission tests. At the Medical University of Graz, an admission test focusing on secondary-school-level knowledge in natural sciences has been used for this purpose. The impact of this important change on dropout rates of female versus male students and older versus younger students is reported. All 2,860 students admitted to the human medicine diploma program at the Medical University of Graz from academic years 2002-2003 to 2008-2009 were included. Nonparametric and semiparametric survival analysis techniques were employed to compare cumulative probability of dropout between demographic groups. Cumulative probability of dropout was significantly reduced in students selected by active admission procedure versus those admitted openly (P < .0001). Relative hazard ratio of selected versus openly admitted students was only 0.145 (95% CI, 0.106-0.198). Among openly admitted students, but not for selected ones, the cumulative probabilities for dropout were higher for females (P < .0001) and for older students (P < .0001). Generally, dropout hazard is highest during the second year of study. The introduction of admission testing significantly decreased the cumulative probability for dropout. In openly admitted students a significantly higher risk for dropout was found in female students and in older students, whereas no such effects can be detected after admission testing. Future research should focus on the sex dependence, with the aim of improving success rates among female applicants on the admission tests.
Steiner, Markus J.; Lopez, Laureen M.; Grimes, David A.; Cheng, Linan; Shelton, Jim; Trussell, James; Farley, Timothy M.M.; Dorflinger, Laneta
2013-01-01
Background Sino-implant (II) is a subdermal contraceptive implant manufactured in China. This two-rod levonorgestrel-releasing implant has the same amount of active ingredient (150 mg levonorgestrel) and mechanism of action as the widely available contraceptive implant Jadelle. We examined randomized controlled trials of Sino-implant (II) for effectiveness and side effects. Study design We searched electronic databases for studies of Sino-implant (II), and then restricted our review to randomized controlled trials. The primary outcome of this review was pregnancy. Results Four randomized trials with a total of 15,943 women assigned to Sino-implant (II) had first-year probabilities of pregnancy ranging from 0.0% to 0.1%. Cumulative probabilities of pregnancy during the four years of the product's approved duration of use were 0.9% and 1.06% in the two trials that presented data for four-year use. Five-year cumulative probabilities of pregnancy ranged from 0.7% to 2.1%. In one trial, the cumulative probability of pregnancy more than doubled during the fifth year (from 0.9% to 2.1%), which may be why the implant is approved for four years of use in China. Five-year cumulative probabilities of discontinuation due to menstrual problems ranged from 12.5% to 15.5% for Sino-implant (II). Conclusions Sino-implant (II) is one of the most effective contraceptives available today. These available clinical data, combined with independent laboratory testing and the knowledge that 7 million women have used this method since 1994, support the safety and effectiveness of Sino-implant (II). The lower cost of Sino-implant (II) compared with other subdermal implants could improve access to implants in resource-constrained settings. PMID:20159174
About the cumulants of periodic signals
NASA Astrophysics Data System (ADS)
Barrau, Axel; El Badaoui, Mohammed
2018-01-01
This note studies cumulants of time series. Although these functions originate in probability theory, they are commonly used as features of deterministic signals, and their classical properties are examined here in this modified framework. We show that the additivity of cumulants, which is ensured in the case of independent random variables, requires a different hypothesis in this setting. Practical applications are proposed, in particular an analysis of the failure of the JADE algorithm to separate some specific periodic signals.
Exact probability distribution function for the volatility of cumulative production
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Klümper, Andreas
2018-04-01
In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the future research on forecasting of the production process.
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
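For reference, the closed-form expressions quoted above can be evaluated directly once the group effect β has been estimated from a cumulative link model; the Python below simply transcribes those formulas (the function name and the example value of β are illustrative, not from the paper).

```python
from math import exp
from statistics import NormalDist

def ordinal_superiority(beta, link):
    """Ordinal superiority measure for group effect beta, using the formulas
    quoted above: probit exact, log-log exact, logit approximate."""
    if link == "probit":
        return NormalDist().cdf(beta / 2.0)
    if link == "loglog":
        return exp(beta) / (1.0 + exp(beta))
    if link == "logit":
        return exp(beta / 2.0) / (1.0 + exp(beta / 2.0))   # approximation
    raise ValueError(f"unknown link: {link}")

# Example: a fitted group effect of beta = 0.8 on the latent scale.
for link in ("probit", "loglog", "logit"):
    print(link, round(ordinal_superiority(0.8, link), 3))
```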
Topology of two-dimensional turbulent flows of dust and gas
NASA Astrophysics Data System (ADS)
Mitra, Dhrubaditya; Perlekar, Prasad
2018-04-01
We perform direct numerical simulations (DNS) of passive heavy inertial particles (dust) in homogeneous and isotropic two-dimensional turbulent flows (gas) for a range of Stokes number, St<1 . We solve for the particles using both a Lagrangian and an Eulerian approach (with a shock-capturing scheme). In the latter, the particles are described by a dust-density field and a dust-velocity field. We find the following: the dust-density field in our Eulerian simulations has the same correlation dimension d2 as obtained from the clustering of particles in the Lagrangian simulations for St<1 ; the cumulative probability distribution function of the dust density coarse grained over a scale r , in the inertial range, has a left tail with a power-law falloff indicating the presence of voids; the energy spectrum of the dust velocity has a power-law range with an exponent that is the same as the gas-velocity spectrum except at very high Fourier modes; the compressibility of the dust-velocity field is proportional to St2. We quantify the topological properties of the dust velocity and the gas velocity through their gradient matrices, called A and B , respectively. Our DNS confirms that the statistics of topological properties of B are the same in Eulerian and Lagrangian frames only if the Eulerian data are weighed by the dust density. We use this correspondence to study the statistics of topological properties of A in the Lagrangian frame from our Eulerian simulations by calculating density-weighted probability distribution functions. We further find that in the Lagrangian frame, the mean value of the trace of A is negative and its magnitude increases with St approximately as exp(-C /St) with a constant C ≈0.1 . The statistical distribution of different topological structures that appear in the dust flow is different in Eulerian and Lagrangian (density-weighted Eulerian) cases, particularly for St close to unity. In both of these cases, for small St the topological structures have close to zero divergence and are either vortical (elliptic) or strain dominated (hyperbolic, saddle). As St increases, the contribution to negative divergence comes mostly from saddles and the contribution to positive divergence comes from both vortices and saddles. Compared to the Eulerian case, the Lagrangian (density-weighted Eulerian) case has less outward spirals and more converging saddles. Inward spirals are the least probable topological structures in both cases.
Space Shuttle Launch Probability Analysis: Understanding History so We Can Predict the Future
NASA Technical Reports Server (NTRS)
Cates, Grant R.
2014-01-01
The Space Shuttle was launched 135 times and nearly half of those launches required 2 or more launch attempts. The Space Shuttle launch countdown historical data of 250 launch attempts provides a wealth of data that is important to analyze for strictly historical purposes as well as for use in predicting future launch vehicle launch countdown performance. This paper provides a statistical analysis of all Space Shuttle launch attempts including the empirical probability of launch on any given attempt and the cumulative probability of launch relative to the planned launch date at the start of the initial launch countdown. This information can be used to facilitate launch probability predictions of future launch vehicles such as NASA's Space Shuttle derived SLS. Understanding the cumulative probability of launch is particularly important for missions to Mars since the launch opportunities are relatively short in duration and one must wait for 2 years before a subsequent attempt can begin.
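As a back-of-the-envelope illustration of the cumulative-launch idea, the sketch below uses the per-attempt launch probability implied by the totals quoted above and assumes independent, identically distributed attempts; this simplification is not the paper's statistical analysis.

```python
# Empirical per-attempt launch probability from the totals in the abstract:
# 135 launches in 250 launch attempts.
launches, attempts = 135, 250
p_attempt = launches / attempts      # ~0.54 chance of launching on a given attempt

# Cumulative probability of having launched within k attempts, assuming
# independent attempts with the same per-attempt probability.
for k in range(1, 6):
    p_by_k = 1.0 - (1.0 - p_attempt) ** k
    print(f"P(launched within {k} attempts) = {p_by_k:.3f}")
```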
Probabilistic Simulation of Stress Concentration in Composite Laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, D. G.
1994-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors (SCF's) in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, whereas the finite element analysis is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate SCF's, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the SCF's in three different composite laminates. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the SCF's are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Monte Carlo simulation of wave sensing with a short pulse radar
NASA Technical Reports Server (NTRS)
Levine, D. M.; Davisson, L. D.; Kutz, R. L.
1977-01-01
A Monte Carlo simulation is used to study the ocean wave sensing potential of a radar which scatters short pulses at small off-nadir angles. In the simulation, realizations of a random surface are created commensurate with an assigned probability density and power spectrum. Then the signal scattered back to the radar is computed for each realization using a physical optics analysis which takes wavefront curvature and finite radar-to-surface distance into account. In the case of a Pierson-Moskowitz spectrum and a normally distributed surface, reasonable assumptions for a fully developed sea, it has been found that the cumulative distribution of time intervals between peaks in the scattered power provides a measure of surface roughness. This observation is supported by experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-02-01
The following are described: the proposed action; existing environment; probable impacts, direct and indirect; probable cumulative and long-term environmental impacts; accidents; coordination with federal, state, and local agencies; and alternatives. (MHR)
Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.
Venturi, D; Karniadakis, G E
2014-06-08
Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.
NASA Technical Reports Server (NTRS)
Nagpal, Vinod K.; Tong, Michael; Murthy, P. L. N.; Mital, Subodh
1998-01-01
An integrated probabilistic approach has been developed to assess composites for high-temperature applications. This approach was used to determine the thermal and mechanical properties, and their probabilistic distributions, of a 5-harness 0/90 Sylramic fiber/CVI-SiC/MI-SiC woven Ceramic Matrix Composite (CMC) at high temperatures. The purpose of developing this approach was to generate quantitative probabilistic information on this CMC to help complete the evaluation of its potential application for the HSCT combustor liner. This approach quantified the influences of uncertainties inherent in constituent properties, called primitive variables, on selected key response variables of the CMC at 2200 F. The quantitative information is presented in the form of cumulative distribution functions (CDFs), probability density functions (PDFs), and sensitivities of the response to the primitive variables. Results indicate that the scatters in response variables were reduced by 30-50% when the uncertainties in the primitive variables that showed the most influence were reduced by 50%.
NASA Astrophysics Data System (ADS)
Fredj, Erick; Kohut, Josh; Roarty, Hugh; Lai, Jian-Wu
2017-04-01
The Lagrangian separation distance between the endpoints of simulated and observed drifter trajectories is often used to assess the performance of numerical particle trajectory models. However, the separation distance fails to indicate relative model performance in weak and strong current regions, such as over continental shelves and the adjacent deep ocean. A skill score described in detail by Liu and Weisberg (2011) was applied to estimate the cumulative Lagrangian separation distances normalized by the associated cumulative trajectory lengths. In contrast, the Lagrangian separation distance alone gives a misleading result. The proposed dimensionless skill score is particularly useful when the number of drifter trajectories is limited and neither a conventional Eulerian-based velocity nor a Lagrangian-based probability density function may be estimated. The skill score is used to assess the performance of the Taiwan Ocean Radar Observing System (TOROS). TOROS consists of 17 SeaSonde-type radars around Taiwan Island. The currents off Taiwan are significantly influenced by the nearby Kuroshio current. The main stream of the Kuroshio flows along the east coast of Taiwan to the north throughout the year. Sometimes its branch current also bypasses the south end of Taiwan and goes north along the west coast of Taiwan. The Kuroshio is also prone to seasonal change in its speed of flow, current capacity, distribution width, and depth. Evaluation of the performance of the Taiwanese national HF-radar network using Lagrangian drifter records demonstrated the high quality and robustness of TOROS HF-radar data using a purely trajectory-based non-dimensional index. Reference: Yonggang Liu and Robert H. Weisberg, "Evaluation of trajectory modeling in different dynamic regions using normalized cumulative Lagrangian separation", Journal of Geophysical Research, Vol. 116, C09013, doi:10.1029/2010JC006837, 2011.
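The trajectory-based index described above can be sketched in a few lines: the cumulative separation distance is divided by the cumulative length of the observed trajectory, and the result is converted to a skill with a tolerance threshold. The Python below follows that general form; the exact thresholding convention and the toy trajectories are assumptions for illustration, with full definitions in Liu and Weisberg (2011).

```python
import numpy as np

def normalized_cumulative_separation(obs, sim):
    """Normalized cumulative Lagrangian separation for one drifter.

    obs, sim: (T, 2) arrays of observed and simulated positions (e.g. km) at
    matching times. Returns s = sum_i d_i / sum_i l_i, where d_i is the
    separation at time i and l_i is the cumulative observed trajectory length
    up to time i.
    """
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    d = np.linalg.norm(sim - obs, axis=1)                 # separation at each time
    seg = np.linalg.norm(np.diff(obs, axis=0), axis=1)    # observed segment lengths
    l = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative trajectory length
    return d[1:].sum() / l[1:].sum()

def skill_score(obs, sim, tolerance=1.0):
    """Dimensionless skill: 1 - s/tolerance, floored at zero (assumed convention)."""
    s = normalized_cumulative_separation(obs, sim)
    return max(0.0, 1.0 - s / tolerance)

# Toy example: a simulated trajectory drifting slightly off the observed one.
t = np.linspace(0.0, 1.0, 11)
obs = np.column_stack([10.0 * t, np.zeros_like(t)])       # observed: straight line
sim = np.column_stack([10.0 * t, 0.5 * t])                # simulated: small offset
print(skill_score(obs, sim))                              # close to 1 = good agreement
```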
NASA Astrophysics Data System (ADS)
Zorila, Alexandru; Stratan, Aurel; Nemes, George
2018-01-01
We compare the ISO-recommended (the standard) data-reduction algorithm used to determine the surface laser-induced damage threshold of optical materials by the S-on-1 test with two newly suggested algorithms, both named "cumulative" algorithms/methods, a regular one and a limit-case one, intended to perform in some respects better than the standard one. To avoid additional errors due to real experiments, a simulated test is performed, named the reverse approach. This approach simulates the real damage experiments, by generating artificial test-data of damaged and non-damaged sites, based on an assumed, known damage threshold fluence of the target and on a given probability distribution function to induce the damage. In this work, a database of 12 sets of test-data containing both damaged and non-damaged sites was generated by using four different reverse techniques and by assuming three specific damage probability distribution functions. The same value for the threshold fluence was assumed, and a Gaussian fluence distribution on each irradiated site was considered, as usual for the S-on-1 test. Each of the test-data was independently processed by the standard and by the two cumulative data-reduction algorithms, the resulting fitted probability distributions were compared with the initially assumed probability distribution functions, and the quantities used to compare these algorithms were determined. These quantities characterize the accuracy and the precision in determining the damage threshold and the goodness of fit of the damage probability curves. The results indicate that the accuracy in determining the absolute damage threshold is best for the ISO-recommended method, the precision is best for the limit-case of the cumulative method, and the goodness of fit estimator (adjusted R-squared) is almost the same for all three algorithms.
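The "reverse approach" described above can be sketched in a few lines: artificial damaged/non-damaged test data are generated from an assumed threshold fluence and an assumed per-shot damage probability function, and are then handed to a data-reduction step. The Python below is only a schematic illustration under made-up parameters; it implements neither the ISO procedure nor the authors' cumulative algorithms, and the simple linear extrapolation at the end is a stand-in for a real data-reduction method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test parameters (not taken from the paper).
F_TH = 2.0                              # assumed "true" threshold fluence, J/cm^2
SHOTS_PER_SITE = 10                     # S in the S-on-1 test
FLUENCES = np.linspace(1.0, 4.0, 13)    # nominal peak fluence per site group
SITES_PER_FLUENCE = 20

def p_damage_per_shot(fluence, f_th=F_TH, width=5.0):
    """Assumed per-shot damage probability: zero below threshold, then a slow ramp."""
    return np.clip((fluence - f_th) / width, 0.0, 1.0)

def simulate_site(fluence):
    """A site counts as damaged if any of the S shots causes damage."""
    p = p_damage_per_shot(fluence)
    return rng.random(SHOTS_PER_SITE).min() < p

# Reverse approach: build an artificial damaged/non-damaged data set ...
damage_fraction = np.array([
    np.mean([simulate_site(f) for _ in range(SITES_PER_FLUENCE)])
    for f in FLUENCES
])

# ... then feed it to a data-reduction step. Here, a crude stand-in: linear fit
# of damage probability vs fluence over the partial-damage region, extrapolated
# to zero damage probability.
mask = (damage_fraction > 0) & (damage_fraction < 1)
slope, intercept = np.polyfit(FLUENCES[mask], damage_fraction[mask], 1)
print("estimated threshold:", -intercept / slope, "assumed threshold:", F_TH)
```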
Pregnancy after tubal sterilization with silicone rubber band and spring clip application.
Peterson, H B; Xia, Z; Wilcox, L S; Tylor, L R; Trussell, J
2001-02-01
To determine risk factors for pregnancy after tubal sterilization with silicone rubber bands or spring clips. A total of 3329 women sterilized using silicone rubber bands and 1595 women sterilized using spring clips were followed for up to 14 years as part of a prospective cohort study conducted in medical centers in nine US cities. We assessed the risk of pregnancy by cumulative life-table probabilities and proportional hazards analysis. The risk of pregnancy for women who had silicone rubber band application differed by location of band application and study site. The 10-year cumulative probabilities of pregnancy varied from a low of 0.0 per 1000 procedures at one study site to a high of 42.5 per 1000 procedures in the four combined sites in which fewer than 100 procedures per site were performed. The risk of pregnancy for women who had spring clip application varied by location of clip application, study site, race or ethnicity, tubal disease, and history of abdominal or pelvic surgery. The probabilities across study sites ranged from 7.1 per 1000 procedures at 10 years to 78.0 per 1000 procedures at 5 years (follow-up was limited to 5 years at that site). The 10-year cumulative probability of pregnancy after silicone rubber band and spring clip application is low but varies substantially by both clinical and demographic characteristics.
Probabilistic Simulation of Progressive Fracture in Bolted-Joint Composite Laminates
NASA Technical Reports Server (NTRS)
Minnetyan, L.; Singhal, S. N.; Chamis, C. C.
1996-01-01
This report describes computational methods to probabilistically simulate fracture in bolted composite structures. An innovative approach that is independent of stress intensity factors and fracture toughness was used to simulate progressive fracture. The effect of design variable uncertainties on structural damage was also quantified. A fast probability integrator assessed the scatter in the composite structure response before and after damage. Then the sensitivity of the response to design variables was computed. General-purpose methods, which are applicable to bolted joints in all types of structures and in all fracture processes-from damage initiation to unstable propagation and global structure collapse-were used. These methods were demonstrated for a bolted joint of a polymer matrix composite panel under edge loads. The effects of the fabrication process were included in the simulation of damage in the bolted panel. Results showed that the most effective way to reduce end displacement at fracture is to control both the load and the ply thickness. The cumulative probability for longitudinal stress in all plies was most sensitive to the load; in the 0 deg. plies it was very sensitive to ply thickness. The cumulative probability for transverse stress was most sensitive to the matrix coefficient of thermal expansion. In addition, fiber volume ratio and fiber transverse modulus both contributed significantly to the cumulative probability for the transverse stresses in all the plies.
Crystal fractionation in the SNC meteorites: Implications for sample selection
NASA Technical Reports Server (NTRS)
Treiman, Allan H.
1988-01-01
Almost all rock types in the SNC meteorites are cumulates, products of magma differentiation by crystal fractionation (addition or removal of crystals). If the SNC meteorites are from the surface or near subsurface of Mars, then most of the igneous units on Mars are differentiated. Basaltic units probably experienced minor to moderate differentiation, but ultrabasic units probably experienced extreme differentiation. Products of this differentiation may include Fe-rich gabbro, pyroxenite, peridotite (and thus serpentine), and possibly massive sulfides. The SNC meteorites include ten lithologies (three in EETA79001), eight of which are crystal cumulates. The other two lithologies, EETA79001 A and B, are subophitic basalts.
Bonofiglio, Federico; Beyersmann, Jan; Schumacher, Martin; Koller, Michael; Schwarzer, Guido
2016-09-01
Meta-analysis of a survival endpoint is typically based on the pooling of hazard ratios (HRs). If competing risks occur, the HRs may no longer translate into changes in survival probability. The cumulative incidence functions (CIFs), the expected proportions of cause-specific events over time, re-connect the cause-specific hazards (CSHs) to the probability of each event type. We use CIF ratios to measure the treatment effect on each event type. To retrieve information on aggregated, typically poorly reported, competing risks data, we assume constant CSHs. Next, we develop methods to pool CIF ratios across studies. The procedure computes pooled HRs alongside and checks the influence of follow-up time on the analysis. We apply the method to a medical example, showing that follow-up duration is relevant both for pooled cause-specific HRs and for CIF ratios. Moreover, if the all-cause hazard and follow-up time are large enough, CIF ratios may reveal additional information about the effect of treatment on the cumulative probability of each event type. Finally, to improve the usefulness of such analyses, better reporting of competing risks data is needed. Copyright © 2015 John Wiley & Sons, Ltd.
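Under the constant cause-specific-hazard assumption used above, the CIF of each event type has a simple closed form, which makes the dependence of CIF ratios on follow-up time explicit. The identity below uses our own notation (α for hazards), not the paper's, and is a standard result rather than the authors' pooling method.

```latex
% Constant cause-specific hazards \alpha_1,\dots,\alpha_K; all-cause hazard \alpha = \sum_j \alpha_j.
\[
  F_k(t) = \int_0^t \alpha_k\, e^{-\alpha s}\, \mathrm{d}s
         = \frac{\alpha_k}{\alpha}\bigl(1 - e^{-\alpha t}\bigr),
  \qquad
  \mathrm{CIFR}_k(t) = \frac{F_k^{\mathrm{T}}(t)}{F_k^{\mathrm{C}}(t)}
  = \frac{\alpha_k^{\mathrm{T}}/\alpha^{\mathrm{T}}\,\bigl(1-e^{-\alpha^{\mathrm{T}} t}\bigr)}
         {\alpha_k^{\mathrm{C}}/\alpha^{\mathrm{C}}\,\bigl(1-e^{-\alpha^{\mathrm{C}} t}\bigr)}.
\]
% As t -> 0 the ratio tends to the cause-specific hazard ratio
% \alpha_k^{T}/\alpha_k^{C}; as t -> infinity it tends to the ratio of
% event-type proportions (\alpha_k^{T}/\alpha^{T})/(\alpha_k^{C}/\alpha^{C}),
% which is why follow-up duration matters for both summaries.
```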
Comparative analysis through probability distributions of a data set
NASA Astrophysics Data System (ADS)
Cristea, Gabriel; Constantinescu, Dan Mihai
2018-02-01
In practice, probability distributions are applied in such diverse fields as risk analysis, reliability engineering, chemical engineering, hydrology, image processing, physics, market research, business and economic research, customer support, medicine, sociology, demography, etc. This article highlights important aspects of fitting probability distributions to data and applying the analysis results to make informed decisions. There are a number of statistical methods available which can help us to select the best fitting model. Some of the graphs display both input data and fitted distributions at the same time, as probability density and cumulative distribution. The goodness of fit tests can be used to determine whether a certain distribution is a good fit. The main idea is to measure the "distance" between the data and the tested distribution and to compare that distance to some threshold value. Calculating the goodness of fit statistics also enables us to order the fitted distributions according to how well they fit the data. This particular feature is very helpful for comparing the fitted models. The paper presents a comparison of the most commonly used goodness of fit tests: Kolmogorov-Smirnov, Anderson-Darling, and Chi-Squared. A large set of data is analyzed and conclusions are drawn by visualizing the data, comparing multiple fitted distributions and selecting the best model. These graphs should be viewed as an addition to the goodness of fit tests.
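As a concrete illustration of ranking candidate distributions by a goodness-of-fit statistic (a minimal sketch, assuming SciPy is available; the data set and candidate families are illustrative, not those of the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=3.0, size=500)   # stand-in for the analyzed data set

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "norm": stats.norm}

results = []
for name, dist in candidates.items():
    params = dist.fit(data)                        # maximum-likelihood parameter fit
    ks_stat, ks_p = stats.kstest(data, name, args=params)
    results.append((ks_stat, name))

# Rank the fitted models by the Kolmogorov-Smirnov distance (smaller fits better).
for ks_stat, name in sorted(results):
    print(f"{name:8s}  KS statistic = {ks_stat:.4f}")
```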
Statistics of Advective Stretching in Three-dimensional Incompressible Flows
NASA Astrophysics Data System (ADS)
Subramanian, Natarajan; Kellogg, Louise H.; Turcotte, Donald L.
2009-09-01
We present a method to quantify kinematic stretching in incompressible, unsteady, isoviscous, three-dimensional flows. We extend the method of Kellogg and Turcotte (J. Geophys. Res. 95:421-432, 1990) to compute the axial stretching/thinning experienced by infinitesimal ellipsoidal strain markers in arbitrary three-dimensional incompressible flows and discuss the differences between our method and the computation of the Finite Time Lyapunov Exponent (FTLE). We use the cellular flow model developed in Solomon and Mezic (Nature 425:376-380, 2003) to study the statistics of stretching in a three-dimensional unsteady cellular flow. We find that the probability density function of the logarithm of normalised cumulative stretching (log S) for a globally chaotic flow, with spatially heterogeneous stretching behavior, is not Gaussian and that the coefficient of variation of the Gaussian distribution does not decrease with time as t^{-1/2}. However, it is observed that stretching becomes exponential (log S ~ t) and the probability density function of log S becomes Gaussian when the time dependence of the flow and its three-dimensionality are increased to make the stretching behaviour of the flow more spatially uniform. We term these behaviors weak and strong chaotic mixing, respectively. We find that for strongly chaotic mixing, the coefficient of variation of the Gaussian distribution decreases with time as t^{-1/2}. This behavior is consistent with a random multiplicative stretching process.
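The closing statement that strongly chaotic mixing behaves like a random multiplicative stretching process can be checked with a toy Monte Carlo experiment (a sketch with arbitrary parameters, not the cellular-flow model itself): if log S is a sum of independent increments, its coefficient of variation should decay roughly as t^{-1/2}.

```python
import numpy as np

rng = np.random.default_rng(1)
n_markers, n_steps = 10_000, 200

# Random multiplicative stretching: each step multiplies the cumulative stretch S
# by an independent positive factor, so log S performs a random walk with drift.
log_factors = rng.normal(loc=0.05, scale=0.2, size=(n_markers, n_steps))
log_S = np.cumsum(log_factors, axis=1)

t = np.arange(1, n_steps + 1)
mean_logS = log_S.mean(axis=0)            # grows linearly in t (exponential stretching)
cv_logS = log_S.std(axis=0) / mean_logS   # coefficient of variation of log S

# For a random multiplicative process the CV should fall off roughly as t**-0.5.
slope = np.polyfit(np.log(t[20:]), np.log(cv_logS[20:]), 1)[0]
print(f"fitted decay exponent of CV(log S): {slope:.2f} (expect about -0.5)")
```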
Corwin, Dennis L.; Yemoto, Kevin; Clary, Wes; Banuelos, Gary; Skaggs, Todd H.; Lesch, Scott M.
2017-01-01
Though more costly than petroleum-based fuels and a minor component of overall military fuel sources, biofuels are nonetheless strategically valuable to the military because of intentional reliance on multiple, reliable, secure fuel sources. Significant reduction in oilseed biofuel cost occurs when grown on marginally productive saline-sodic soils plentiful in California’s San Joaquin Valley (SJV). The objective is to evaluate the feasibility of oilseed production on marginal soils in the SJV to support a 115 ML yr−1 biofuel conversion facility. The feasibility evaluation involves: (1) development of an Ida Gold mustard oilseed yield model for marginal soils; (2) identification of marginally productive soils; (3) development of a spatial database of edaphic factors influencing oilseed yield; and (4) performance of Monte Carlo simulations showing potential biofuel production on marginally productive SJV soils. The model indicates oilseed yield is related to boron, salinity, leaching fraction, and water content at field capacity. Monte Carlo simulations for the entire SJV fit a shifted gamma probability density function: Q = 68.986 + Gamma(6.134, 5.285), where Q is biofuel production in ML yr−1. The shifted gamma cumulative distribution function indicates a 0.15–0.17 probability of meeting the target biofuel-production level of 115 ML yr−1, making adequate biofuel production unlikely. PMID:29036925
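Taking the reported shifted gamma fit at face value, the probability of meeting the 115 ML yr−1 target can be recomputed directly from the distribution's survival function (a sketch assuming SciPy; the parameters are those quoted in the abstract):

```python
from scipy import stats

shift, shape, scale = 68.986, 6.134, 5.285   # Q = shift + Gamma(shape, scale), in ML/yr
target = 115.0                               # biofuel-production target, ML/yr

# P(Q >= target); the abstract quotes roughly 0.15-0.17 for this exceedance probability.
p_meet = stats.gamma.sf(target - shift, a=shape, scale=scale)
print(f"probability of meeting {target:.0f} ML/yr: {p_meet:.2f}")
```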
Elastic Backbone Defines a New Transition in the Percolation Model
NASA Astrophysics Data System (ADS)
Sampaio Filho, Cesar I. N.; Andrade, José S.; Herrmann, Hans J.; Moreira, André A.
2018-04-01
The elastic backbone is the set of all shortest paths. We found a new phase transition at p_eb, above the classical percolation threshold, at which the elastic backbone becomes dense. At this transition in 2D, its fractal dimension is 1.750 ± 0.003, and one obtains a novel set of critical exponents β_eb = 0.50 ± 0.02, γ_eb = 1.97 ± 0.05, and ν_eb = 2.00 ± 0.02, fulfilling consistent critical scaling laws. Interestingly, however, the hyperscaling relation is violated. Using Binder's cumulant, we determine, with high precision, the critical probabilities p_eb for the triangular and tilted square lattice for site and bond percolation. This transition describes a sudden rigidification as a function of density when stretching a damaged tissue.
Variability of daily UV index in Jokioinen, Finland, in 1995-2015
NASA Astrophysics Data System (ADS)
Heikkilä, A.; Uusitalo, K.; Kärhä, P.; Vaskuri, A.; Lakkala, K.; Koskela, T.
2017-02-01
The UV Index is a measure of UV radiation harmful to human skin, developed and used to promote sun awareness and protection. Monitoring programs conducted around the world have produced a number of long-term time series of UV irradiance. One of the longest time series of solar spectral UV irradiance in Europe has been obtained from the continuous measurements of the Brewer #107 spectrophotometer in Jokioinen (lat. 60°44'N, lon. 23°30'E), Finland, over the years 1995-2015. We have used descriptive statistics and estimates of cumulative distribution functions, quantiles and probability density functions in the analysis of the time series of daily UV Index maxima. Seasonal differences in the estimated distributions and in the trends of the estimated quantiles are found.
Probabilistic simulation of stress concentration in composite laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, L.
1993-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The probabilistic composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, while probabilistic finite element analysis is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate stress concentration factors, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the stress concentration factors in composite laminates made from three different composite systems. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the stress concentration factors are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Barcelona de Mendoza, Veronica; Harville, Emily W; Savage, Jane; Giarratano, Gloria
2018-03-01
Both intimate partner violence and neighborhood crime have been associated with worse mental health outcomes, but less is known about cumulative effects. This association was studied in a sample of pregnant women who were enrolled in a study of disaster exposure, prenatal care, and mental and physical health outcomes between 2010 and 2012. Women were interviewed about their exposure to intimate partner violence and perceptions of neighborhood safety, crime, and disorder. Main study outcomes included symptoms of poor mental health, including depression, pregnancy-specific anxiety (PA), and posttraumatic stress disorder (PTSD). Logistic regression was used to examine predictors of mental health with adjustment for confounders. Women who experienced high levels of intimate partner violence and perceived neighborhood violence had increased odds of probable depression in individual models. Weighted high cumulative (intimate partner and neighborhood) experiences of violence were also associated with increased odds of having probable depression when compared with those with low violence. Weighted high cumulative violence was also associated with increased odds of PTSD. This study provides additional evidence that cumulative exposure to violence is associated with poorer mental health in pregnant women.
Structural bias in the sentencing of felony defendants.
Sutton, John R
2013-09-01
As incarceration rates have risen in the US, so has the overrepresentation of African Americans and Latinos among prison inmates. Whether and to what degree these disparities are due to bias in the criminal courts remains a contentious issue. This article pursues two lines of argument toward a structural account of bias in the criminal law, focusing on (1) cumulative disadvantages that may accrue over successive stages of the criminal justice process, and (2) the contexts of racial disadvantage in which courts are embedded. These arguments are tested using case-level data on male defendants charged with felony crimes in urban US counties in 2000. Multilevel binary and ordinal logit models are used to estimate contextual effects on pretrial detention, guilty pleas, and sentence severity, and cumulative effects are estimated as conditional probabilities that are allowed to vary by race across all three outcomes. Results yield strong, but qualified, evidence of cumulative disadvantage accruing to black and Latino defendants, but do not support the contextual hypotheses. When the cumulative effects of bias are taken into account, the estimated probability of the average African American or Latino felon going to prison is 26% higher than that of the average Anglo. Copyright © 2013 Elsevier Inc. All rights reserved.
An efficient distribution method for nonlinear transport problems in stochastic porous media
NASA Astrophysics Data System (ADS)
Ibrahima, F.; Tchelepi, H.; Meyer, D. W.
2015-12-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational costs (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, rarely available in other approaches, yet crucial information such as the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.
Forward modeling of gravity data using geostatistically generated subsurface density variations
Phelps, Geoffrey
2016-01-01
Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
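The forward-modeling step described above, summing the gravity contribution of every density cell at an observation point, can be sketched with a simple point-mass approximation for each cell (not the author's code; a prism formula would be used for near-surface cells in practice, and the geometry and density contrasts below are hypothetical):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_anomaly_mgal(obs_xyz, cell_centers, cell_volumes, density_contrast):
    """Vertical gravity anomaly (mGal) at one observation point, treating each
    subsurface cell as a point mass at its center (z positive downward)."""
    d = cell_centers - obs_xyz
    r = np.linalg.norm(d, axis=1)
    gz = G * np.sum(density_contrast * cell_volumes * d[:, 2] / r**3)
    return gz * 1e5  # convert m/s^2 to mGal

# Hypothetical 2 x 2 x 2 block of 100 m cells with geostatistically assigned
# density contrasts (kg/m^3): one stochastic realization of the subsurface.
rng = np.random.default_rng(2)
x, y, z = np.meshgrid([50.0, 150.0], [50.0, 150.0], [100.0, 200.0], indexing="ij")
centers = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
volumes = np.full(len(centers), 100.0**3)
drho = rng.normal(loc=200.0, scale=50.0, size=len(centers))

print(gravity_anomaly_mgal(np.array([100.0, 100.0, 0.0]), centers, volumes, drho), "mGal")
```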
Distributed Immune Systems for Wireless Network Information Assurance
2010-04-26
ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the...using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio ...the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability
Petropoulou, Anna D; Porcher, Raphael; Herr, Andrée-Laure; Devergie, Agnès; Brentano, Thomas Funck; Ribaud, Patricia; Pinto, Fernando O; Rocha, Vanderson; Peffault de Latour, Régis; Orcel, Philippe; Socié, Gérard; Robin, Marie
2010-06-15
Bone complications after hematopoietic stem-cell transplantation (HSCT) are relatively frequent. Evaluation of biomarkers of bone turnover and dual energy x-ray absorptiometry (DEXA) are not known in this context. We prospectively evaluated bone mineral density, biomarkers of bone turnover, and the cumulative incidence of bone complications after allogeneic HSCT. One hundred forty-six patients were included. Bone mineral density was measured by DEXA at 2 months and 1 year post-HSCT. The markers of bone turnover were serum C-telopeptide (C-TP), 5 tartrate-resistant acid phosphatase (bone resorption), and osteocalcin (bone formation) determined pre-HSCT and 2 months and 1 year thereafter. Potential associations between osteoporosis at 2 months, osteoporotic fracture or avascular necrosis, and individual patient characteristics and biologic markers were tested. C-TP was high before and 2 months after transplant. At 2 months, DEXA detected osteoporosis in more than half the patients tested. Male sex, median age less than or equal to 15 years, and abnormal C-TP before HSCT were risk factors significantly associated with osteoporosis. Three-year cumulative incidences of fractures and avascular necrosis were 8% and 11%, respectively. Children were at higher risk of fracture, whereas corticosteroid treatment duration was a significant risk factor for developing a clinical bone complication post-HSCT. Bone complications and osteoporosis are frequent after HSCT. Bone biologic markers and DEXA showed that subclinical bone abnormalities appeared early post-HSCT. The risk factors, age, gender, and C-TP, easily available at the time of transplantation, were identified. Bisphosphonates should probably be given to patients with those risk factors.
NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is, the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n), it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
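The inversion NEWTPOIS performs, recovering lambda from a known cumulative probability for n occurrences, can be sketched in a few lines (not the NEWTPOIS source, which is in C; this Python version adds a bisection safeguard around the Newton step and uses the identity d/dlambda P(X <= n; lambda) = -e^{-lambda} lambda^n / n!):

```python
import math

def poisson_cdf(n, lam):
    """P(X <= n) for X ~ Poisson(lam), summed term by term."""
    term = total = math.exp(-lam)
    for k in range(1, n + 1):
        term *= lam / k
        total += term
    return total

def poisson_lambda(n, p, tol=1e-12):
    """Solve P(X <= n; lambda) = p for lambda with Newton's method,
    safeguarded by bisection on a bracket (the CDF is decreasing in lambda)."""
    lo, hi = 0.0, 1.0
    while poisson_cdf(n, hi) > p:          # grow the bracket until CDF(hi) <= p
        hi *= 2.0
    lam = 0.5 * (lo + hi)
    for _ in range(200):
        f = poisson_cdf(n, lam) - p
        if abs(f) < tol:
            break
        if f > 0:                          # CDF still too high -> lambda must grow
            lo = lam
        else:
            hi = lam
        pmf = math.exp(-lam) * lam**n / math.factorial(n)
        newton = lam + f / pmf             # Newton step: lam - f / (dCDF/dlam)
        lam = newton if lo < newton < hi else 0.5 * (lo + hi)
    return lam

lam = poisson_lambda(n=5, p=0.9)
print(lam, poisson_cdf(5, lam))            # recovered CDF should be ~0.9
```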
Jansen, Femke; Krebber, Anna M H; Coupé, Veerle M H; Cuijpers, Pim; de Bree, Remco; Becker-Commissaris, Annemarie; Smit, Egbert F; van Straten, Annemieke; Eeckhout, Guus M; Beekman, Aartjan T F; Leemans, C René; Verdonck-de Leeuw, Irma M
2017-01-20
Purpose A stepped care (SC) program in which an effective yet least resource-intensive treatment is delivered to patients first and followed, when necessary, by more resource-intensive treatments was found to be effective in improving distress levels of patients with head and neck cancer or lung cancer. Information on the value of this program for its cost is now called for. Therefore, this study aimed to assess the cost-utility of the SC program compared with care-as-usual (CAU) in patients with head and neck cancer or lung cancer who have psychological distress. Patients and Methods In total, 156 patients were randomly assigned to SC or CAU. Intervention costs, direct medical costs, direct nonmedical costs, productivity losses, and health-related quality-of-life data during the intervention or control period and 12 months of follow-up were calculated by using Trimbos and Institute of Medical Technology Assessment Cost Questionnaire for Psychiatry, Productivity and Disease Questionnaire, and EuroQol-5 Dimension measures and data from the hospital information system. The SC program's value for the cost was investigated by comparing mean cumulative costs and quality-adjusted life years (QALYs). Results After imputation of missing data, mean cumulative costs were -€3,950 (95% CI, -€8,158 to -€190) lower, and mean number of QALYs was 0.116 (95% CI, 0.005 to 0.227) higher in the intervention group compared with the control group. The intervention group had a probability of 96% that cumulative QALYs were higher and cumulative costs were lower than in the control group. Four additional analyses were conducted to assess the robustness of this finding, and they found that the intervention group had a probability of 84% to 98% that cumulative QALYs were higher and a probability of 91% to 99% that costs were lower than in the control group. Conclusion SC is highly likely to be cost-effective; the number of QALYs was higher and cumulative costs were lower for SC compared with CAU.
Setting cumulative emissions targets to reduce the risk of dangerous climate change
Zickfeld, Kirsten; Eby, Michael; Matthews, H. Damon; Weaver, Andrew J.
2009-01-01
Avoiding “dangerous anthropogenic interference with the climate system” requires stabilization of atmospheric greenhouse gas concentrations and substantial reductions in anthropogenic emissions. Here, we present an inverse approach to coupled climate-carbon cycle modeling, which allows us to estimate the probability that any given level of carbon dioxide (CO2) emissions will exceed specified long-term global mean temperature targets for “dangerous anthropogenic interference,” taking into consideration uncertainties in climate sensitivity and the carbon cycle response to climate change. We show that to stabilize global mean temperature increase at 2 °C above preindustrial levels with a probability of at least 0.66, cumulative CO2 emissions from 2000 to 2500 must not exceed a median estimate of 590 petagrams of carbon (PgC) (range, 200 to 950 PgC). If the 2 °C temperature stabilization target is to be met with a probability of at least 0.9, median total allowable CO2 emissions are 170 PgC (range, −220 to 700 PgC). Furthermore, these estimates of cumulative CO2 emissions, compatible with a specified temperature stabilization target, are independent of the path taken to stabilization. Our analysis therefore supports an international policy framework aimed at avoiding dangerous anthropogenic interference formulated on the basis of total allowable greenhouse gas emissions. PMID:19706489
NASA Astrophysics Data System (ADS)
Farrell, L. L.; McGovern, P. J.; Morgan, J. K.
2008-12-01
We have carried out 2-D numerical simulations using the discrete element method (DEM) to investigate density-driven deformation in volcanic edifices on Earth (e.g., Hawaii) and Mars (e.g., Olympus Mons and Arsia Mons). Located within volcanoes are a series of magma chambers, reservoirs, and conduits where magma travels and collects. As magma differentiates, dense minerals settle out, building thick accumulations referred to as cumulates that can flow ductilely due to stresses imparted by gravity. To simulate this process, we construct granular piles subject to Coulomb frictional rheology, incrementally capture internal rectangular regions to which higher densities and lower interparticle friction values are assigned (analogs for denser, weaker cumulates), and then bond the granular edifice. Thus, following each growth increment, the edifice is allowed to relax gravitationally with a reconfigured weak cumulate core. The presence and outward spreading of the cumulate causes the development of distinctive structural and stratigraphic patterns. We obtained a range of volcanic shapes that vary from broad, shallowly dipping flanks reminiscent of those of Olympus Mons, to short, steep surface slopes more similar to Arsia Mons. Edifices lacking internal cumulate exhibit relatively horizontal strata compared to the high-angle, inward dipping strata that develop within the cumulate-bearing edifices. Our simulated volcanoes also illustrate a variety of gravity-driven deformation features, including regions of thrust faulting within the flanks and large-scale flank collapses, as observed in Hawaii and inferred on Olympus Mons. We also see significant summit subsidence, and of particular interest, distinct summit calderas. The broad, flat caldera and convex upward profile of Arsia Mons appear to be well simulated by cumulate-driven volcanic spreading. In contrast, the concave upward slopes of Olympus Mons are more challenging to reproduce, and instead are attributed to volcanic spreading along a pore-fluid-pressurized decollement with low basal friction.
Surface slip during large Owens Valley earthquakes
NASA Astrophysics Data System (ADS)
Haddon, E. K.; Amos, C. B.; Zielke, O.; Jayko, A. S.; Bürgmann, R.
2016-06-01
The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ˜1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ˜0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ˜6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7-11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ˜7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ˜0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
NASA Astrophysics Data System (ADS)
Boslough, M.
2011-12-01
Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market; the aggregated price can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." Settlement is based on global temperature anomaly data published by NASA GISS. Typical climate contracts predict the probability of a specified future temperature, but not the probability density or best estimate. One way to generate a probability distribution would be to create a family of contracts over a range of specified temperatures and interpret the price of each contract as its exceedance probability. The resulting plot of probability vs. anomaly is the market-based cumulative distribution function. The best estimate can be determined by interpolation, and the market-based uncertainty estimate can be based on the spread. One requirement for an effective prediction market is liquidity. Climate contracts are currently considered somewhat of a novelty and often lack sufficient liquidity, but climate change has the potential to generate both tremendous losses for some (e.g. agricultural collapse and extreme weather events) and wealth for others (access to natural resources and trading routes). Use of climate markets by large stakeholders has the potential to generate the liquidity necessary to make them viable. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's NNSA under contract DE-AC04-94AL85000.
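The family-of-contracts idea described above can be turned into a market-implied distribution with a few lines of interpolation (a sketch with made-up prices, not actual Intrade quotes): each contract price is read as an exceedance probability, so one minus the price traces out a cumulative distribution function.

```python
import numpy as np

# Hypothetical contracts "anomaly > T" and their last-traded prices,
# interpreted as exceedance probabilities (illustrative values only).
thresholds = np.array([0.45, 0.55, 0.65, 0.75, 0.85])   # temperature anomaly, deg C
exceedance = np.array([0.95, 0.80, 0.55, 0.25, 0.08])   # market prices

cdf = 1.0 - exceedance                                   # market-implied cumulative probability
best_estimate = np.interp(0.5, cdf, thresholds)          # median of the implied distribution
spread = np.interp(0.84, cdf, thresholds) - np.interp(0.16, cdf, thresholds)

print(f"market-implied best estimate: {best_estimate:.2f} C")
print(f"approximate one-sigma spread: {spread / 2:.2f} C")
```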
High School Employment, School Performance, and College Entry
ERIC Educational Resources Information Center
Lee, Chanyoung; Orazem, Peter F.
2010-01-01
The proportion of U.S. high school students working during the school year ranges from 23% in the freshman year to 75% in the senior year. This study estimates how cumulative work histories during the high school years affect probability of dropout, high school academic performance, and the probability of attending college. Variations in…
Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual, appendix 2
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
The FORTRAN programs RANDOM3 and RANDOM4 are documented. They are based on fatigue strength reduction, using a probabilistic constitutive model. They predict the random lifetime of an engine component to reach a given fatigue strength. Included in this user manual are details regarding the theoretical backgrounds of RANDOM3 and RANDOM4. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendices B and C include photocopies of the actual computer printout corresponding to the sample problems. Appendices D and E detail the IMSL, Version 10(1), subroutines and functions called by RANDOM3 and RANDOM4 and SAS/GRAPH(2) programs that can be used to plot both the probability density functions (p.d.f.) and the cumulative distribution functions (c.d.f.).
Reliability analysis of degradable networks with modified BPR
NASA Astrophysics Data System (ADS)
Wang, Yu-Qing; Zhou, Chao-Fan; Jia, Bin; Zhu, Hua-Bing
2017-12-01
In this paper, the effect of the speed limit on degradable networks with capacity restrictions and forced flow is investigated. A link performance function considering the road capacity is proposed. Additionally, the probability density distribution and the cumulative distribution of link travel time are introduced for the degradable network. By distinguishing the value of the speed limit, four cases are discussed. Means and variances of travel time for links and for route one of the degradable road network are calculated. Besides, by performing numerical simulation experiments in a specific network, it is found that the speed limit strategy can reduce the travel time budget and mean travel time of links and routes. Moreover, it reveals that the speed limit strategy can cut down variances of the travel time of networks to some extent.
NASA Astrophysics Data System (ADS)
Ortega, R.; Gutierrez, E.; Carciumaru, D. D.; Huesca-Perez, E.
2017-12-01
We present a method to compute the conditional and unconditional probability density functions (PDFs) of the finite fault distance distribution (FFDD). Two cases are described: lines and areas. The case of lines has a simple analytical solution while, in the case of areas, the geometrical probability of a fault based on the strike, dip, and fault segment vertices is obtained using the projection of spheres in a piecewise rectangular surface. The cumulative distribution is computed by measuring the projection of a sphere of radius r in an effective area using an algorithm that estimates the area of a circle within a rectangle. In addition, we introduce the finite fault distance metric. This is the distance where the maximum stress release occurs within the fault plane and generates a peak ground motion. Later, we can apply the appropriate ground motion prediction equations (GMPE) for PSHA. The conditional probability of distance given magnitude is also presented using different scaling laws. A simple model of constant distribution of the centroid at the geometrical mean is discussed; in this model, hazard is reduced at the edges because the effective size is reduced. Nowadays there is a trend of using extended source distances in PSHA; however, it is not possible to separate the fault geometry from the GMPE. With this new approach, it is possible to add fault rupture models separating geometrical and propagation effects.
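The geometric sub-step mentioned above, estimating the area of a circle that falls inside a rectangle, can be illustrated with a short Monte Carlo routine (only an illustration; the paper's algorithm may well be deterministic, and the geometry below is arbitrary):

```python
import numpy as np

def circle_in_rect_area(cx, cy, r, xmin, xmax, ymin, ymax, n=200_000, seed=0):
    """Monte Carlo estimate of the area of the disc of radius r centered at (cx, cy)
    that lies inside the axis-aligned rectangle [xmin, xmax] x [ymin, ymax]."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    rad = r * np.sqrt(rng.uniform(0.0, 1.0, n))   # uniform sampling inside the disc
    x, y = cx + rad * np.cos(theta), cy + rad * np.sin(theta)
    inside = (x >= xmin) & (x <= xmax) & (y >= ymin) & (y <= ymax)
    return np.pi * r**2 * inside.mean()

# Disc centered on the rectangle's left edge: about half of its area should remain.
print(circle_in_rect_area(0.0, 5.0, 2.0, 0.0, 10.0, 0.0, 10.0))   # ~ 0.5 * pi * 2**2
```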
Schaubel, Douglas E; Wei, Guanghui
2011-03-01
In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty of verifying the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.
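The IPTW half of the double inverse weighting can be sketched as follows (a minimal illustration on simulated data, using a logistic propensity model; the IPCW half, which requires a model for the censoring hazard, is omitted here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=(n, 2))                                  # baseline covariates
p_treat = 1.0 / (1.0 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))
treated = rng.binomial(1, p_treat)

# Propensity-score model and stabilized inverse-probability-of-treatment weights.
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]
w = np.where(treated == 1, treated.mean() / ps, (1 - treated.mean()) / (1 - ps))

# Balance check: weighted covariate means should be similar across the two arms.
for arm in (0, 1):
    m = np.average(x[treated == arm], axis=0, weights=w[treated == arm])
    print(f"arm {arm}: weighted covariate means {np.round(m, 3)}")
```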
NASA Astrophysics Data System (ADS)
Zivney, L. L.; Morgan, J. K.; McGovern, P. J.
2009-12-01
We have carried out 2-D numerical simulations using the discrete element method (DEM) to investigate density-driven deformation in Martian volcanic edifices. Our initial simulations demonstrated that gravitationally-driven settling of a dense, ductile cumulate body within a volcano causes enhanced lateral spreading of the edifice flanks, influencing the overall volcano morphology and generating pronounced summit subsidence. Here, we explore the effects of cumulate bodies and their geometries on the generation of summit calderas, to gain insight into the origin of Martian caldera complexes, in particular the Olympus Mons and Arsia Mons calderas. The Olympus Mons caldera, roughly 80 km in diameter, is composed of several small over-lapping craters with steep walls, thought to be produced by episodic collapse events of multiple shallow magma chambers. The Arsia Mons caldera spans ~130 km across and displays one prominent crater with gently sloping margins, possibly reflecting the collapse of a single magma chamber. Although the depth of the magma chamber is debated, its lateral width is thought to approximate the diameter of the caldera. Our models indicate that cumulate bodies located at shallow depths of <10 km below the edifice surface produce caldera complexes on the order of 80-100 km in width, with increasing cumulate widths producing widening calderas. Narrow cumulate bodies with densities near 4000 kg/m3 produce the deepest calderas (up to ~8 km deep). We conclude that the generation of large Arsia-type calderas may be adequately modeled by the presence of a wide cumulate body found at shallow depths beneath the summit. Although we do not model the multiple magma chamber systems thought to exist beneath the Olympus Mons summit, the closely spaced craters and the small size of the caldera relative to the size of the volcano (~13% of the edifice) suggests that the cumulate body would be narrow; our simulations of a single narrow cumulate body are capable of generating summit subsidence that is similar in dimension to the Olympus Mons caldera. Our findings suggest that cumulate spreading may play a primary role in the long-term development of caldera geometry, although the collapse of magma reservoirs (not modeled here) may cause important short-term changes in caldera structure.
The frequency distribution of daily global irradiation at Kumasi
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akuffo, F.O.; Brew-Hammond, A.
1993-02-01
Cumulative frequency distribution curves (CDC) for daily global irradiation on the horizontal produced by Liu and Jordan in 1963 have until recently been considered to have universal validity. Results obtained by Saunier et al. in 1987 and Ideriah and Suleman in 1989 for two tropical locations, Ibadan in Nigeria and Bangkok in Thailand, respectively, have thrown into question the universal validity of the Liu and Jordan generalized CDC. Saunier et al., in particular, showed that their results disagreed with the generalized CDC mainly because of differences in the values of the maximum clearness index (Kmax), as well as the underlying probability density functions. Consequently, they proposed two expressions for determining Kmax and probability densities in tropical locations. This paper presents the results of statistical analysis of daily global irradiation for Kumasi, Ghana, also a tropical location. The results show that the expressions of Saunier et al. provide a better description of the observations than the generalized CDC and, in particular, the empirical equation for Kmax may be valid for Kumasi. Furthermore, the results show that the values of the minimum clearness index (Kmin) for Kumasi are much higher than the generally accepted value of 0.05 for overcast sky conditions. A comparison of the results for Kumasi and Ibadan shows that there is satisfactory agreement when the values of Kmax and Kmin are comparable; in cases where there are discrepancies in the Kmax and Kmin values, the CDC also disagree. 13 refs., 3 figs., 5 tabs.
Lopes, Letícia Helena Caldas; Sdepanian, Vera Lucia; Szejnfeld, Vera Lúcia; de Morais, Mauro Batista; Fagundes-Neto, Ulysses
2008-10-01
To evaluate bone mineral density of the lumbar spine in children and adolescents with inflammatory bowel disease, and to identify the clinical risk factors associated with low bone mineral density. Bone mineral density of the lumbar spine was evaluated using dual-energy X-ray absorptiometry (DXA) in 40 patients with inflammatory bowel disease. Patients were 11.8 (SD = 4.1) years old and most of them were male (52.5%). Multiple linear regression analysis was performed to identify potential associations between bone mineral density Z-score and age, height-for-age Z-score, BMI Z-score, cumulative corticosteroid dose in milligrams and in milligrams per kilogram, disease duration, number of relapses, and calcium intake according to the dietary reference intake. Low bone mineral density (Z-score below -2) was observed in 25% of patients. Patients with Crohn's disease and ulcerative colitis had equivalent prevalence of low bone mineral density. Multiple linear regression models demonstrated that height-for-age Z-score, BMI Z-score, and cumulative corticosteroid dose in mg had independent effects on BMD, respectively, beta = 0.492 (P = 0.000), beta = 0.460 (P = 0.001), beta = - 0.014 (P = 0.000), and these effects remained significant after adjustments for disease duration, respectively, beta = 0.489 (P = 0.013), beta = 0.467 (P = 0.001), and beta = - 0.005 (P = 0.015). The model accounted for 54.6% of the variability of the BMD Z-score (adjusted R2 = 0.546). The prevalence of low bone mineral density in children and adolescents with inflammatory bowel disease is considerably high and independent risk factors associated with bone mineral density are corticosteroid cumulative dose in milligrams, height-for-age Z-score, and BMI Z-score.
Shen, Weidong; Sakamoto, Naoko; Yang, Limin
2016-07-07
The objectives of this study were to evaluate and model the probability of melanoma-specific death and competing causes of death for patients with melanoma by competing risk analysis, and to build competing risk nomograms to provide individualized and accurate predictive tools. Melanoma data were obtained from the Surveillance Epidemiology and End Results program. All patients diagnosed with primary non-metastatic melanoma during the years 2004-2007 were potentially eligible for inclusion. The cumulative incidence function (CIF) was used to describe the probability of melanoma mortality and competing risk mortality. We used Gray's test to compare differences in CIF between groups. The proportional subdistribution hazard approach by Fine and Gray was used to model CIF. We built competing risk nomograms based on the models that we developed. The 5-year cumulative incidence of melanoma death was 7.1 %, and the cumulative incidence of other causes of death was 7.4 %. We identified that variables associated with an elevated probability of melanoma-specific mortality included older age, male sex, thick melanoma, ulcerated cancer, and positive lymph nodes. The nomograms were well calibrated. C-indexes were 0.85 and 0.83 for nomograms predicting the probability of melanoma mortality and competing risk mortality, which suggests good discriminative ability. This large study cohort enabled us to build a reliable competing risk model and nomogram for predicting melanoma prognosis. Model performance proved to be good. This individualized predictive tool can be used in clinical practice to help treatment-related decision making.
CUMBIN - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
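The core quantity CUMBIN computes, the reliability of a k-out-of-n system of independent components, reduces to an upper tail of the binomial distribution and is easy to reproduce (a sketch in Python, not the CUMBIN C code; the example values are arbitrary):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n independent components operate,
    each with reliability p (the upper binomial tail CUMBIN evaluates)."""
    return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k, n + 1))

# Example: a 3-out-of-5 system whose components each have reliability 0.95.
print(k_out_of_n_reliability(3, 5, 0.95))   # about 0.9988
```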
High-resolution SMA imaging of bright submillimetre sources from the SCUBA-2 Cosmology Legacy Survey
NASA Astrophysics Data System (ADS)
Hill, Ryley; Chapman, Scott C.; Scott, Douglas; Petitpas, Glen; Smail, Ian; Chapin, Edward L.; Gurwell, Mark A.; Perry, Ryan; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Dunlop, James S.; Farrah, Duncan; Fazio, Giovanni G.; Geach, James E.; Howson, Paul; Ivison, R. J.; Lacaille, Kevin; Michałowski, Michał J.; Simpson, James M.; Swinbank, A. M.; van der Werf, Paul P.; Wilner, David J.
2018-06-01
We have used the Submillimeter Array (SMA) at 860 μm to observe the brightest sources in the Submillimeter Common User Bolometer Array-2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). The goal of this survey is to exploit the large field of the S2CLS along with the resolution and sensitivity of the SMA to construct a large sample of these rare sources and to study their statistical properties. We have targeted 70 of the brightest single-dish SCUBA-2 850 μm sources down to S850 ≈ 8 mJy, achieving an average synthesized beam of 2.4 arcsec and an average rms of σ860 = 1.5 mJy beam^{-1} in our primary beam-corrected maps. We searched our SMA maps for 4σ peaks, corresponding to S860 ≳ 6 mJy sources, and detected 62 galaxies, including three pairs. We include in our study 35 archival observations, bringing our sample size to 105 bright single-dish submillimetre sources with interferometric follow-up. We compute the cumulative and differential number counts, finding them to overlap with previous single-dish survey number counts within the uncertainties, although our cumulative number count is systematically lower than the parent S2CLS cumulative number count by 14 ± 6 per cent between 11 and 15 mJy. We estimate the probability that a ≳10 mJy single-dish submillimetre source resolves into two or more galaxies with similar flux densities to be less than 15 per cent. Assuming the remaining 85 per cent of the targets are ultraluminous starburst galaxies between z = 2 and 3, we find a likely volume density of ≳400 M⊙ yr^{-1} sources to be ∼3^{+0.7}_{-0.6} × 10^{-7} Mpc^{-3}. We show that the descendants of these galaxies could be ≳4 × 10^{11} M⊙ local quiescent galaxies, and that about 10 per cent of their total stellar mass would have formed during these short bursts of star formation.
Dong, Huiru; Robison, Leslie L; Leisenring, Wendy M; Martin, Leah J; Armstrong, Gregory T; Yasui, Yutaka
2015-04-01
Cumulative incidence has been widely used to estimate the cumulative probability of developing an event of interest by a given time, in the presence of competing risks. When it is of interest to measure the total burden of recurrent events in a population, however, the cumulative incidence method is not appropriate because it considers only the first occurrence of the event of interest for each individual in the analysis: Subsequent occurrences are not included. Here, we discuss a straightforward and intuitive method termed "mean cumulative count," which reflects a summarization of all events that occur in the population by a given time, not just the first event for each subject. We explore the mathematical relationship between mean cumulative count and cumulative incidence. Detailed calculation of mean cumulative count is described by using a simple hypothetical example, and the computation code with an illustrative example is provided. Using follow-up data from January 1975 to August 2009 collected in the Childhood Cancer Survivor Study, we show applications of mean cumulative count and cumulative incidence for the outcome of subsequent neoplasms to demonstrate different but complementary information obtained from the 2 approaches and the specific utility of the former. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
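A minimal sketch of one common form of the mean cumulative count estimator (my reading of the method, not the authors' code): recurrent events are accumulated as number-of-events-over-number-at-risk increments, each discounted by the Kaplan-Meier probability of having remained free of the competing-risk death just before that time.

```python
import numpy as np

def mean_cumulative_count(event_times, followup, died, grid):
    """Mean cumulative count for recurrent events with death as a competing risk.
    event_times: list (one entry per subject) of arrays of recurrent-event times;
    followup: time of death or censoring per subject; died: 1 if followup is a death.
    Returns the MCC evaluated at the times in 'grid'."""
    followup, died = np.asarray(followup, float), np.asarray(died, int)
    all_events = np.concatenate([np.asarray(e, float) for e in event_times])
    times = np.unique(np.concatenate([all_events, followup[died == 1]]))
    surv, mcc, path = 1.0, 0.0, []
    for t in times:
        at_risk = np.sum(followup >= t)
        if at_risk == 0:
            break
        mcc += surv * np.sum(all_events == t) / at_risk            # increment uses S(t-)
        surv *= 1.0 - np.sum(followup[died == 1] == t) / at_risk   # KM step for death
        path.append((t, mcc))
    ts = np.array([t for t, _ in path])
    vs = np.array([v for _, v in path])
    return np.array([vs[ts <= g][-1] if np.any(ts <= g) else 0.0 for g in grid])

# Tiny hypothetical cohort: subject 2 has two events, subject 3 dies event-free.
events = [[2.0], [1.0, 4.0], [], [3.0]]
print(mean_cumulative_count(events, followup=[5.0, 6.0, 2.5, 7.0],
                            died=[0, 0, 1, 0], grid=[1.0, 3.0, 6.0]))
```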
Evidence of scaling of void probability in nucleus-nucleus interactions at few GeV energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Dipak; Biswas, Biswanath; Deb, Argha
1997-11-01
The rapidity gap probability in the ^{24}Mg-AgBr interaction at 4.5 GeV/c per nucleon has been studied in detail. The data reveal scaling behavior of the void probability in the central rapidity domain which confirms the validity of the linked-pair approximation for the N-particle cumulant correlation functions. This scaling behavior appears to be similar to the void probability in the Perseus-Pisces supercluster region of galaxies. © 1997 The American Physical Society.
Barnes, Timothy L.; Colabianchi, Natalie; Hibbert, James D.; Porter, Dwayne E.; Lawson, Andrew B.; Liese, Angela D.
2016-01-01
Choice of neighborhood scale affects associations between environmental attributes and health-related outcomes. This phenomenon, a part of the modifiable areal unit problem, has been described fully in geography but not as it relates to food environment research. Using two administrative-based geographic boundaries (census tracts and block groups), supermarket geographic measures (density, cumulative opportunity and distance to nearest) were created to examine differences by scale and associations between three common U.S. Census–based socioeconomic status (SES) characteristics (median household income, percentage of population living below poverty and percentage of population with at least a high school education) and a summary neighborhood SES z-score in an eight-county region of South Carolina. General linear mixed-models were used. Overall, both supermarket density and cumulative opportunity were higher when using census tract boundaries compared to block groups. In analytic models, higher median household income was significantly associated with lower neighborhood supermarket density and lower cumulative opportunity using either the census tract or block group boundaries, and neighborhood poverty was positively associated with supermarket density and cumulative opportunity. Both median household income and percent high school education were positively associated with distance to nearest supermarket using either boundary definition, whereas neighborhood poverty had an inverse association. Findings from this study support the premise that supermarket measures can differ by choice of geographic scale and can influence associations between measures. Researchers should consider the most appropriate geographic scale carefully when conducting food environment studies. PMID:27022204
Work Disability among Women: The Role of Divorce in a Retrospective Cohort Study.
Tamborini, Christopher R; Reznik, Gayle L; Couch, Kenneth A
2016-03-01
We assess how divorce through midlife affects the subsequent probability of work-limiting health among U.S. women. Using retrospective marital and work disability histories from the Survey of Income and Program Participation matched to Social Security earnings records, we identify women whose first marriage dissolved between 1975 and 1984 (n = 1,214) and women who remain continuously married (n = 3,394). Probit and propensity score matching models examine the cumulative probability of a work disability over a 20-year follow-up period. We find that divorce is associated with a significantly higher cumulative probability of a work disability, controlling for a range of factors. This association is strongest among divorced women who do not remarry. No consistent relationships are observed among divorced women who remarry and remained married. We find that economic hardship, work history, and selection into divorce influence, but do not substantially alter, the lasting impact of divorce on work-limiting health. © American Sociological Association 2016.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance or covariance are obtained independently from each other. Third, the probability density and cumulative distribution functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
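As a small illustration of the MAP computation described above (a sketch on a synthetic linear problem, not the paper's code; data and prior whitening by covariance Cholesky factors are omitted), the single-truncated case reduces to non-negative least squares, which SciPy provides directly; the double-truncated case could similarly use scipy.optimize.lsq_linear with bounds.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

# Hypothetical linear forward problem d = G m + noise with non-negative slip m.
n_data, n_patches = 60, 20
G = rng.normal(size=(n_data, n_patches))            # stand-in for elastic Green's functions
m_true = np.maximum(rng.normal(0.5, 0.5, n_patches), 0.0)
d = G @ m_true + 0.05 * rng.normal(size=n_data)

# With positivity constraints, the MAP of the truncated-Gaussian posterior is found
# by non-negative least squares (covariance whitening omitted in this sketch).
m_map, _ = nnls(G, d)
print("MAP slip on the first 5 patches:", np.round(m_map[:5], 3))
```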
Pérez, Miguel A
2007-01-01
The aim of this study was to address the effect of objective age of acquisition (AoA) on picture-naming latencies when different measures of frequency (cumulative and adult word frequency) and frequency trajectory are taken into account. A total of 80 Spanish participants named a set of 178 pictures. Several multiple regression analyses assessed the influence of AoA, word frequency, frequency trajectory, object familiarity, name agreement, image agreement, image variability, name length, and orthographic neighbourhood density on naming times. The results revealed that AoA is the main predictor of picture-naming times. Cumulative frequency and adult word frequency (written or spoken) appeared as important factors in picture naming, but frequency trajectory and object familiarity did not. Other significant variables were image agreement, image variability, and neighbourhood density. These results (a) provide additional evidence of the predictive power of AoA in naming times independent of word-frequency and (b) suggest that image variability and neighbourhood density should also be taken into account in models of lexical production.
Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, A. M.; McGhee, D. S.
2003-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented, along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
Statistical Comparison and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; McGhee, David S.
2004-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This paper examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica is then used to calculate the combined value corresponding to any desired percentile, along with a curve fit to this value. Another Excel macro is used to calculate the combination using a Monte Carlo simulation. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Also, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can lower the design loading substantially without losing any of the identified structural reliability.
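A rough illustration of the Monte Carlo combination route described in these two reports: draw a random-phase sinusoid plus a zero-mean Gaussian load and read off design values at consistent CDF percentiles. The amplitudes, the random 1-sigma level, and the percentiles below are assumptions for demonstration only, not values from the reports.

```python
# Hedged sketch of a Monte Carlo sine-plus-random load combination.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
sine_amplitude = 2.0        # harmonic load amplitude (assumed)
random_sigma = 1.0          # 1-sigma level of the random load (assumed)

phase = rng.uniform(0.0, 2.0 * np.pi, n)
combined = sine_amplitude * np.sin(phase) + rng.normal(0.0, random_sigma, n)

# Design value at a consistent CDF percentile (e.g., 99.865%, the "3-sigma" level)
for p in (0.95, 0.99, 0.99865):
    print(f"{p:.5f} percentile of combined load: {np.quantile(combined, p):.3f}")
```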
A statistical study of gyro-averaging effects in a reduced model of drift-wave transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonseca, Julio; Del-Castillo-Negrete, Diego B.; Sokolov, Igor M.
2016-08-25
Here, a statistical study of finite Larmor radius (FLR) effects on transport driven by electrostatic drift waves is presented. The study is based on a reduced discrete Hamiltonian dynamical system known as the gyro-averaged standard map (GSM). In this system, FLR effects are incorporated through the gyro-averaging of a simplified weak-turbulence model of electrostatic fluctuations. Formally, the GSM is a modified version of the standard map in which the perturbation amplitude, K0, becomes K0 J0(p̂), where J0 is the zeroth-order Bessel function and p̂ is the Larmor radius. Assuming a Maxwellian probability density function (pdf) for p̂, we compute analytically and numerically the pdf and the cumulative distribution function of the effective drift-wave perturbation amplitude K0 J0(p̂). Using these results, we compute the probability of loss of confinement (i.e., global chaos), Pc, and the probability of trapping, Pt; Pc provides an upper bound for the escape rate, and Pt provides a good estimate of the particle trapping rate. Lastly, the analytical results are compared with direct numerical Monte Carlo simulations of particle transport.
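A Monte Carlo sketch of the effective perturbation amplitude K0 J0(p̂) under a Maxwellian Larmor-radius pdf; the thermal scale, the K0 value, and the use of the standard-map chaos threshold are illustrative assumptions rather than the paper's parameters.

```python
# Sketch: distribution of the gyro-averaged amplitude K0 * J0(rho) for Maxwellian rho.
import numpy as np
from scipy.special import j0
from scipy.stats import maxwell

rng = np.random.default_rng(2)
K0 = 3.0                                                        # unaveraged amplitude (assumed)
rho = maxwell.rvs(scale=1.0, size=500_000, random_state=rng)    # Larmor radii, Maxwellian pdf
K_eff = K0 * j0(rho)                                            # gyro-averaged effective amplitude

# Empirical pdf and cdf of the effective amplitude
pdf, edges = np.histogram(K_eff, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
cdf = np.cumsum(pdf) * np.diff(edges)

# Probability that the effective amplitude exceeds the standard-map chaos threshold
K_crit = 0.9716
print("P(K_eff > K_crit) ≈", round(float(np.mean(K_eff > K_crit)), 3))
print("median effective amplitude ≈", round(float(centers[np.searchsorted(cdf, 0.5)]), 3))
```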
Rosenbaum Asarnow, Joan; Berk, Michele; Zhang, Lily; Wang, Peter; Tang, Lingqi
2017-10-01
This prospective study of suicidal emergency department (ED) patients (ages 10-18) examined the timing, cumulative probability, and predictors of suicide attempts through 18 months of follow-up. The cumulative probability of attempts was as follows: .15 at 6 months, .22 at 1 year, and .24 by 18 months. One attempt was fatal, yielding a death rate of .006. Significant predictors of suicide attempt risk included a suicide attempt at ED presentation (vs. suicidal ideation only), nonsuicidal self-injurious behavior, and low levels of delinquent symptoms. Results underscore the importance of both prior suicide attempts and nonsuicidal self-harm as risk indicators for future and potentially lethal suicide attempts. © 2016 The American Association of Suicidology.
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
ERIC Educational Resources Information Center
Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.
2010-01-01
Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…
Atmospheric Teleconnections From Cumulants
NASA Astrophysics Data System (ADS)
Sabou, F.; Kaspi, Y.; Marston, B.; Schneider, T.
2011-12-01
Multi-point cumulants of fields such as vorticity provide a way to visualize atmospheric teleconnections, complementing other approaches such as the method of empirical orthogonal functions (EOFs). We calculate equal-time two-point cumulants of the vorticity from NCEP reanalysis data during the period 1980 -- 2010 and from direct numerical simulation (DNS) using an idealized dry general circulation model (GCM) (Schneider and Walker, 2006). Extratropical correlations seen in the NCEP data are qualitatively reproduced by the model. Three- and four-point cumulants accumulated from DNS quantify departures of the probability distribution function from a normal distribution, shedding light on the efficacy of direct statistical simulation (DSS) of atmosphere dynamics by cumulant expansions (Marston, Conover, and Schneider, 2008; Marston 2011). Lagged-time two-point cumulants between temperature gradients and eddy kinetic energy (EKE), accumulated by DNS of an idealized moist aquaplanet GCM (O'Gorman and Schneider, 2008), reveal dynamics of storm tracks. Regions of enhanced baroclinicity (as found along the eastern boundary of continents) lead to a local enhancement of EKE and a suppression of EKE further downstream as the storm track self-destructs (Kaspi and Schneider, 2011).
Arreyndip, Nkongho Ayuketang; Joseph, Ebobenow; David, Afungchui
2016-11-01
For the future installation of a wind farm in Cameroon, the wind energy potentials of three of Cameroon's coastal cities (Kribi, Douala and Limbe) are assessed using NASA average monthly wind data for 31 years (1983-2013) and compared through Weibull statistics. The Weibull parameters are estimated by the method of maximum likelihood, and the mean power densities, the maximum-energy-carrying wind speeds and the most probable wind speeds are also calculated and compared over these three cities. Finally, the cumulative wind speed distributions over the wet and dry seasons are also analyzed. The results show that the shape and scale parameters for Kribi, Douala and Limbe are 2.9 and 2.8, 3.9 and 1.8, and 3.08 and 2.58, respectively. The mean power densities through Weibull analysis for Kribi, Douala and Limbe are 33.7 W/m2, 8.0 W/m2 and 25.42 W/m2, respectively. Kribi's most probable wind speed and maximum-energy-carrying wind speed were found to be 2.42 m/s and 3.35 m/s, compared with 2.27 m/s and 3.03 m/s for Limbe and 1.67 m/s and 2.0 m/s for Douala. Analysis of the wind speed, and hence power, distribution over the wet and dry seasons shows that in the wet season August is the windiest month for Douala and Limbe and September is the windiest month for Kribi, while in the dry season March is the windiest month for Douala and Limbe and February is the windiest month for Kribi. In terms of mean power density, most probable wind speed and wind speed carrying maximum energy, Kribi is shown to be the best site for the installation of a wind farm. Generally, the wind speeds at all three locations are quite low; the average wind speeds of all three studied locations fall below 4.0 m/s, which is far below the cut-in wind speed of many modern wind turbines. However, we recommend the use of low cut-in speed wind turbines like the Savonius for stand-alone, low-energy needs.
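The quoted most probable and maximum-energy wind speeds follow from the standard Weibull expressions; a short sketch is given below, with the air density assumed to be 1.225 kg/m3 (the abstract does not state the value used).

```python
# Standard Weibull wind-statistics formulas; k, c for Kribi are taken from the abstract.
import math

def weibull_wind_stats(k, c, rho_air=1.225):
    v_most_probable = c * ((k - 1.0) / k) ** (1.0 / k)
    v_max_energy = c * ((k + 2.0) / k) ** (1.0 / k)
    mean_power_density = 0.5 * rho_air * c**3 * math.gamma(1.0 + 3.0 / k)
    return v_most_probable, v_max_energy, mean_power_density

v_mp, v_me, p_mean = weibull_wind_stats(k=2.9, c=2.8)   # Kribi
print(f"most probable speed  {v_mp:.2f} m/s")           # ~2.42 m/s, as quoted
print(f"max-energy speed     {v_me:.2f} m/s")           # close to the quoted 3.35 m/s
print(f"mean power density   {p_mean:.1f} W/m^2")       # depends on the assumed air density
```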
Fatigue crack growth model RANDOM2 user manual, appendix 1
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
The FORTRAN program RANDOM2 is documented. RANDOM2 is based on fracture mechanics using a probabilistic fatigue crack growth model. It predicts the random lifetime of an engine component to reach a given crack size. Included in this user manual are details regarding the theoretical background of RANDOM2, input data, instructions and a sample problem illustrating the use of RANDOM2. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendix B includes photocopies of the actual computer printout corresponding to the sample problem. Appendices C and D detail the IMSL, Ver. 10(1), subroutines and functions called by RANDOM2 and a SAS/GRAPH(2) program that can be used to plot both the probability density function (p.d.f.) and the cumulative distribution function (c.d.f.).
Probabilistic Analysis of Large-Scale Composite Structures Using the IPACS Code
NASA Technical Reports Server (NTRS)
Lemonds, Jeffrey; Kumar, Virendra
1995-01-01
An investigation was performed to ascertain the feasibility of using IPACS (Integrated Probabilistic Assessment of Composite Structures) for probabilistic analysis of a composite fan blade, the development of which is being pursued by various industries for the next generation of aircraft engines. A model representative of the class of fan blades used in the GE90 engine has been chosen as the structural component to be analyzed with IPACS. In this study, typical uncertainties are assumed in the level, and structural responses for ply stresses and frequencies are evaluated in the form of cumulative probability density functions. Because of the geometric complexity of the blade, the number of plies varies from several hundred at the root to about a hundred at the tip. This represents an extremely complex composites application for the IPACS code. A sensitivity study with respect to various random variables is also performed.
Canuet, Lucien; Védrenne, Nicolas; Conan, Jean-Marc; Petit, Cyril; Artaud, Geraldine; Rissons, Angelique; Lacan, Jerome
2018-01-01
In the framework of satellite-to-ground laser downlinks, an analytical model describing the variations of the instantaneous coupled flux into a single-mode fiber after correction of the incoming wavefront by partial adaptive optics (AO) is presented. Expressions for the probability density function and the cumulative distribution function as well as for the average fading duration and fading duration distribution of the corrected coupled flux are given. These results are of prime interest for the computation of metrics related to coded transmissions over correlated channels, and they are confronted by end-to-end wave-optics simulations in the case of a geosynchronous satellite (GEO)-to-ground and a low earth orbit satellite (LEO)-to-ground scenario. Eventually, the impact of different AO performances on the aforementioned fading duration distribution is analytically investigated for both scenarios.
Symmetry for the duration of entropy-consuming intervals.
García-García, Reinaldo; Domínguez, Daniel
2014-05-01
We introduce the violation fraction υ as the cumulative fraction of time that a mesoscopic system spends consuming entropy at a single trajectory in phase space. We show that the fluctuations of this quantity are described in terms of a symmetry relation reminiscent of fluctuation theorems, which involve a function Φ, which can be interpreted as an entropy associated with the fluctuations of the violation fraction. The function Φ, when evaluated for arbitrary stochastic realizations of the violation fraction, is odd upon the symmetry transformations that are relevant for the associated stochastic entropy production. This fact leads to a detailed fluctuation theorem for the probability density function of Φ. We study the steady-state limit of this symmetry in the paradigmatic case of a colloidal particle dragged by optical tweezers through an aqueous solution. Finally, we briefly discuss possible applications of our results for the estimation of free-energy differences from single-molecule experiments.
Trivariate characteristics of intensity fluctuations for heavily saturated optical systems.
Das, Biman; Drake, Eli; Jack, John
2004-02-01
Trivariate cumulants of intensity fluctuations have been computed starting from a trivariate intensity probability distribution function, which rests on the assumption that the variation of intensity has a maximum entropy distribution with the constraint that the total intensity is constant. The assumption holds for optical systems such as a thin, long, mirrorless gas laser amplifier where, under heavy gain saturation, the total output approaches a constant intensity, although the intensity of any mode fluctuates rapidly about the average intensity. The relations between trivariate cumulants and central moments that were needed for the computation of trivariate cumulants were derived. The results of the computation show that the cumulants have characteristic values that depend on the number of interacting modes in the system. The cumulant values approach zero when the number of modes is infinite, as expected. The results will be useful for comparison with the experimental trivariate statistics of heavily saturated optical systems such as the output from a thin, long, bidirectional gas laser amplifier.
A cumulant functional for static and dynamic correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollett, Joshua W.; Hosseini, Hessam
A functional for the cumulant energy is introduced. The functional is composed of a pair-correction and static and dynamic correlation energy components. The pair-correction and static correlation energies are functionals of the natural orbitals and the occupancy transferred between near-degenerate orbital pairs, rather than the orbital occupancies themselves. The dynamic correlation energy is a functional of the statically correlated on-top two-electron density. The on-top density functional used in this study is the well-known Colle-Salvetti functional. Using the cc-pVTZ basis set, the functional effectively models the bond dissociation of H2, LiH, and N2, with equilibrium bond lengths and dissociation energies comparable to those provided by multireference second-order perturbation theory. The performance of the cumulant functional is less impressive for HF and F2, mainly due to an underestimation of the dynamic correlation energy by the Colle-Salvetti functional.
Probabilistic analysis of preload in the abutment screw of a dental implant complex.
Guda, Teja; Ross, Thomas A; Lang, Lisa A; Millwater, Harry R
2008-09-01
Screw loosening is a problem for a percentage of implants. A probabilistic analysis to determine the cumulative probability distribution of the preload, the probability of obtaining an optimal preload, and the probabilistic sensitivities identifying important variables is lacking. The purpose of this study was to examine the inherent variability of material properties, surface interactions, and applied torque in an implant system to determine the probability of obtaining desired preload values and to identify the significant variables that affect the preload. Using software programs, an abutment screw was subjected to a tightening torque and the preload was determined from finite element (FE) analysis. The FE model was integrated with probabilistic analysis software. Two probabilistic analysis methods (advanced mean value and Monte Carlo sampling) were applied to determine the cumulative distribution function (CDF) of preload. The coefficient of friction, elastic moduli, Poisson's ratios, and applied torque were modeled as random variables and defined by probability distributions. Separate probability distributions were determined for the coefficient of friction in well-lubricated and dry environments. The probabilistic analyses were performed and the cumulative distribution of preload was determined for each environment. A distinct difference was seen between the preload probability distributions generated in a dry environment (normal distribution, mean (SD): 347 (61.9) N) compared to a well-lubricated environment (normal distribution, mean (SD): 616 (92.2) N). The probability of obtaining a preload value within the target range was approximately 54% for the well-lubricated environment and only 0.02% for the dry environment. The preload is predominately affected by the applied torque and coefficient of friction between the screw threads and implant bore at lower and middle values of the preload CDF, and by the applied torque and the elastic modulus of the abutment screw at high values of the preload CDF. Lubrication at the threaded surfaces between the abutment screw and implant bore affects the preload developed in the implant complex. For the well-lubricated surfaces, only approximately 50% of implants will have preload values within the generally accepted range. This probability can be improved by applying a higher torque than normally recommended or a more closely controlled torque than typically achieved. It is also suggested that materials with higher elastic moduli be used in the manufacture of the abutment screw to achieve a higher preload.
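As a purely illustrative sketch (not the study's finite-element model), the short-form torque-preload relation preload = torque / (K * d) can stand in for the screw mechanics, with the nut factor K lumping the frictional variables; all distributions, the screw diameter, and the target range below are assumptions for demonstration only.

```python
# Illustrative Monte Carlo estimate of a preload CDF from random torque and friction.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
torque = rng.normal(0.32, 0.02, n)        # applied tightening torque, N*m (assumed)
nut_factor = rng.normal(0.26, 0.04, n)    # lumped nut factor for thread friction (assumed)
diameter = 2.0e-3                         # nominal screw diameter, m (assumed)

preload = torque / (nut_factor * diameter)    # N

target_low, target_high = 550.0, 700.0        # assumed target preload range, N
cdf_at = lambda x: np.mean(preload <= x)
print("mean preload (N):", round(float(preload.mean()), 1))
print("P(target_low <= preload <= target_high):",
      round(cdf_at(target_high) - cdf_at(target_low), 3))
```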
The Minimum-Mass Surface Density of the Solar Nebula using the Disk Evolution Equation
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
2005-01-01
The Hayashi minimum-mass power law representation of the pre-solar nebula (Hayashi 1981, Prog. Theor. Phys. 70, 35) is revisited using analytic solutions of the disk evolution equation. A new cumulative-planetary-mass model (an integrated form of the surface density) is shown to predict a smoother surface density compared with methods based on direct estimates of surface density from planetary data. First, a best-fit transcendental function is applied directly to the cumulative planetary mass data, with the surface density obtained by direct differentiation. Next, a solution to the time-dependent disk evolution equation is parametrically adapted to the planetary data. The latter model indicates a decay rate of r^(-1/2) in the inner disk followed by a rapid decay which results in a sharper outer boundary than predicted by the minimum-mass model. The model is shown to be a good approximation to the finite-size early Solar Nebula and, by extension, to extrasolar protoplanetary disks.
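The cumulative-mass idea above can be sketched as follows: assume a smooth cumulative planetary mass M(r) (the form below is a generic placeholder, not the paper's best-fit transcendental function) and recover the surface density by differentiation, sigma(r) = (1/(2*pi*r)) dM/dr.

```python
# Sketch: surface density from a smooth cumulative-mass profile.
import numpy as np

r = np.linspace(0.4, 40.0, 400)                    # heliocentric distance, AU
M_cum = 50.0 * (1.0 - np.exp(-(r / 8.0) ** 1.5))   # assumed cumulative mass (Earth masses)

sigma = np.gradient(M_cum, r) / (2.0 * np.pi * r)  # surface density, M_earth / AU^2
print("sigma at 1 AU  :", round(float(np.interp(1.0, r, sigma)), 4))
print("sigma at 10 AU :", round(float(np.interp(10.0, r, sigma)), 4))
```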
Probability and predictors of the cannabis gateway effect: A national study
Secades-Villa, Roberto; Garcia-Rodríguez, Olaya; Jin, Chelsea, J.; Wang, Shuai; Blanco, Carlos
2014-01-01
Background: While several studies have shown a high association between cannabis use and use of other illicit drugs, the predictors of progression from cannabis to other illicit drugs remain largely unknown. This study aims to estimate the cumulative probability of progression to illicit drug use among individuals with a lifetime history of cannabis use, and to identify predictors of progression from cannabis use to other illicit drug use. Methods: Analyses were conducted on the sub-sample of participants in Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) who started cannabis use before using any other drug (n = 6,624). Estimated projections of the cumulative probability of progression from cannabis use to use of any other illicit drug in the general population were obtained by the standard actuarial method. Univariate and multivariable survival analyses with time-varying covariates were implemented to identify predictors of progression to any drug use. Results: Lifetime cumulative probability estimates indicated that 44.7% of individuals with lifetime cannabis use progressed to other illicit drug use at some time in their lives. Several sociodemographic characteristics, internalizing and externalizing psychiatric disorders and indicators of substance use severity predicted progression from cannabis use to other illicit drug use. Conclusion: A large proportion of individuals who use cannabis go on to use other illicit drugs. The increased risk of progression from cannabis use to other illicit drug use among individuals with mental disorders underscores the importance of considering the benefits and adverse effects of changes in cannabis regulations and of developing prevention and treatment strategies directed at curtailing cannabis use in these populations. PMID:25168081
P.H. Cochran; Walter G. Dahms
2000-01-01
Concave curvilinear decreases in diameter growth occurred with increasing stand densities. A convex curvilinear increase in gross growth of basal area and total cubic volume took place with increasing stand density. Maximum cumulative net cubic (total and merchantable) and board-foot yields varied curvilinearly with stand density. These yields peaked at intermediate...
1981-02-01
monotonic increasing function of true ability or performance score. A cumulative probability function is then very convenient for describing one's...possible outcomes such as test scores, grade-point averages or other common outcome variables. Utility is usually a monotonic increasing function of true ...r(θ) is negative for θ < μ and positive for θ > μ, U(θ) is risk-prone for low θ values and risk-averse for high θ values. This property is true for
Probability and Conditional Probability of Cumulative Cloud Cover for Selected Stations Worldwide.
1985-07-01
INTRODUCTION: The performance of precision-guided munition (PGM) systems may be severely compromised by the presence of clouds in the desired target... [Fragment of the station listing:] Korea, 37.98 N 127.94 E, Mar 67-Dec 79; Kusan, Korea, 37.90 N 126.63 E, Aug 51-Dec 81 (no Jan 71-Dec 72); Taegu & Tonchon, Korea, 35.90 N 128.67 E, Jan...
Decision making generalized by a cumulative probability weighting function
NASA Astrophysics Data System (ADS)
dos Santos, Lindomar Soares; Destefano, Natália; Martinez, Alexandre Souto
2018-01-01
Typical examples of intertemporal decision making involve situations in which individuals must choose between a smaller reward that is more immediate and a larger one delivered later. Analogously, probabilistic decision making involves choices between options whose consequences differ in their probability of being received. In Economics, the expected utility theory (EUT) and the discounted utility theory (DUT) are traditionally accepted normative models for describing, respectively, probabilistic and intertemporal decision making. A large number of experiments confirmed that the linearity assumed by the EUT does not explain some observed behaviors, such as nonlinear preference, risk seeking and loss aversion. That observation led to the development of new theoretical models, called non-expected utility theories (NEUT), which include a nonlinear transformation of the probability scale. An essential feature of the so-called preference function of these theories is that the probabilities are transformed into decision weights by means of a (cumulative) probability weighting function, w(p). We obtain in this article a generalized function for the probabilistic discount process. This function has as particular cases mathematical forms already consecrated in the literature, including discount models that consider effects of psychophysical perception. We also propose a new generalized function for the functional form of w. The limiting cases of this function encompass some parametric forms already proposed in the literature. Far beyond a mere generalization, our function allows the interpretation of probabilistic decision making theories based on the assumption that individuals behave similarly in the face of probabilities and delays, and it is supported by phenomenological models.
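For concreteness, two parametric probability weighting functions that are well established in this literature, the Tversky-Kahneman (1992) and Prelec (1998) forms, are sketched below with illustrative parameter values; whether these specific forms coincide with the limiting cases of the article's generalized w is not asserted here.

```python
# Two standard parametric forms of the probability weighting function w(p).
import numpy as np

def w_tversky_kahneman(p, gamma=0.61):
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def w_prelec(p, alpha=0.65, beta=1.0):
    return np.exp(-beta * (-np.log(p)) ** alpha)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"p={p:>4}:  TK w(p)={w_tversky_kahneman(p):.3f}   Prelec w(p)={w_prelec(p):.3f}")
```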
Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A
2006-11-01
A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.
Huang, Xiaowei; Zhang, Yanling; Meng, Long; Abbott, Derek; Qian, Ming; Wong, Kelvin K L; Zheng, Rongqing; Zheng, Hairong; Niu, Lili
2017-01-01
Carotid plaque echogenicity is associated with the risk of cardiovascular events. Gray-scale median (GSM) of the ultrasound image of carotid plaques has been widely used as an objective method for evaluation of plaque echogenicity in patients with atherosclerosis. We proposed a computer-aided method to evaluate plaque echogenicity and compared its efficiency with GSM. One hundred and twenty-five carotid plaques (43 echo-rich, 35 intermediate, 47 echolucent) were collected from 72 patients in this study. The cumulative probability distribution curves were obtained based on statistics of the pixels in the gray-level images of plaques. The area under the cumulative probability distribution curve (AUCPDC) was calculated as its integral value to evaluate plaque echogenicity. The classification accuracy for three types of plaques is 78.4% (kappa value, κ = 0.673), when the AUCPDC is used for classifier training, whereas GSM is 64.8% (κ = 0.460). The receiver operating characteristic curves were produced to test the effectiveness of AUCPDC and GSM for the identification of echolucent plaques. The area under the curve (AUC) was 0.817 when AUCPDC was used for training the classifier, which is higher than that achieved using GSM (AUC = 0.746). Compared with GSM, the AUCPDC showed a borderline association with coronary heart disease (Spearman r = 0.234, p = 0.050). Our experimental results suggest that AUCPDC analysis is a promising method for evaluation of plaque echogenicity and predicting cardiovascular events in patients with plaques.
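A rough sketch of the AUCPDC feature on synthetic pixel data follows; the gray-level range, the normalization, and the synthetic plaque patches are assumptions for illustration, not the authors' implementation.

```python
# Sketch: area under the cumulative probability distribution of plaque gray levels.
import numpy as np

def aucpdc(pixels, n_levels=256):
    pixels = np.asarray(pixels).ravel()
    hist, _ = np.histogram(pixels, bins=n_levels, range=(0, n_levels))
    pmf = hist / hist.sum()
    cdf = np.cumsum(pmf)                 # cumulative probability at each gray level
    return cdf.sum() / n_levels          # area under the CDF, normalized to [0, 1]

rng = np.random.default_rng(4)
echolucent = rng.normal(40, 15, (64, 64)).clip(0, 255)   # darker (echolucent-like) patch
echo_rich = rng.normal(120, 25, (64, 64)).clip(0, 255)   # brighter (echo-rich-like) patch
print("AUCPDC darker patch  :", round(aucpdc(echolucent), 3))
print("AUCPDC brighter patch:", round(aucpdc(echo_rich), 3))
```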
Forest regeneration composition and development in upland, mixed-oak forests.
Fei, Songlin; Gould, Peter J; Steiner, Kim C; Finley, James C; McDill, Marc E
2005-12-01
Advance regeneration in 52 mature mixed-oak stands was analyzed and described. Red maple (Acer rubrum L.) was the most abundant species in the study area. Among oak (Quercus) species, northern red oak (Q. rubra L.) was the most abundant within the Allegheny Plateau physiographic province, whereas chestnut oak (Q. montana L.) was the most abundant within the Ridge and Valley physiographic province. Sixteen stands, for which data are available through the fourth growing season following harvest, were used to describe stand development. Cumulative height, a composite measure of size and density, was used to describe early stand development. Black gum (Nyssa sylvatica Marsh.) and black birch (Betula lenta L.) had dramatic increases in stand density and cumulative height after overstory removal. Cumulative height of northern red oak and chestnut oak showed a faster positive response to overstory removal than red maple. Oak retained its dominance in cumulative height for at least 4 years after harvest. Red maple nevertheless remained the most abundant tree species after overstory removal. Our results suggest that the principal advantage of red maple regeneration is its ability to accumulate in large numbers prior to harvest.
On the optimal identification of tag sets in time-constrained RFID configurations.
Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel
2011-01-01
In Radio Frequency Identification facilities, the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time for the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete-time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. In addition, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
Probabilistic Evaluation of Blade Impact Damage
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Abumeri, G. H.
2003-01-01
The response of a composite blade to high-velocity impact is probabilistically evaluated. The evaluation is focused on quantifying probabilistically the effects of uncertainties (scatter) in the variables that describe the impact, the blade make-up (geometry and material), the blade response (displacements, strains, stresses, frequencies), the blade residual strength after impact, and the blade damage tolerance. The results of the probabilistic evaluation are in terms of probability cumulative distribution functions and probabilistic sensitivities. Results show that the blade has relatively low damage tolerance at 0.999 probability of structural failure and substantial damage tolerance at 0.01 probability.
NASA Technical Reports Server (NTRS)
Schull, M. A.; Knyazikhin, Y.; Xu, L.; Samanta, A.; Carmona, P. L.; Lepine, L.; Jenkins, J. P.; Ganguly, S.; Myneni, R. B.
2011-01-01
Many studies have been conducted to demonstrate the ability of hyperspectral data to discriminate dominant plant species. Most of them have employed empirically based techniques, which are site specific, require some initial training based on characteristics of known leaf and/or canopy spectra, and therefore may not be extendable to operational use or adapted to changing or unknown land cover. In this paper we propose a physically based approach for separation of dominant forest type using hyperspectral data. The radiative transfer theory of canopy spectral invariants underlies the approach, which facilitates parameterization of the canopy reflectance in terms of the leaf spectral scattering and two spectrally invariant and structurally varying variables - recollision and directional escape probabilities. The methodology is based on the idea of retrieving spectrally invariant parameters from hyperspectral data first, and then relating their values to structural characteristics of the three-dimensional canopy. Theoretical and empirical analyses of ground and airborne data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over two sites in New England, USA, suggest that the canopy spectral invariants convey information about canopy structure at both the macro- and micro-scales. The total escape probability (one minus the recollision probability) varies as a power function with the exponent related to the number of nested hierarchical levels present in the pixel. Its base is a geometric mean of the local total escape probabilities and accounts for the cumulative effect of canopy structure over a wide range of scales. The ratio of the directional to the total escape probability becomes independent of the number of hierarchical levels and is a function of the canopy structure at the macro-scale, such as tree spatial distribution, crown shape and size, within-crown foliage density and ground cover. These properties allow for the natural separation of dominant forest classes based on the location of points on the log-log plane of total escape probability versus this ratio.
1981-05-15
Crane is capable of imagining unicorns -- and we expect he is -- why does he find it relatively difficult to imagine himself avoiding a 30 minute...probability that the plan will succeed and to evaluate the risk of various causes of failure. We have suggested that the construction of scenarios is...expect that events will unfold as planned. However, the cumulative probability of at least one fatal failure could be overwhelmingly high even when
Huízar-Hernández, Víctor; Arredondo, Armando; Caballero, Marta; Castro-Ríos, Angélica; Flores-Hernández, Sergio; Pérez-Padilla, Rogelio; Reyes-Morales, Hortensia
2017-04-01
The aim of the study was to analyze, using a decision analysis approach, the probability of severity of illness due to delayed utilization of health services and inappropriate hospital medical treatment during the 2009 AH1N1 influenza epidemic in Mexico. Patients with influenza AH1N1 confirmed by the polymerase chain reaction (PCR) test from two hospitals in Mexico City were included. A path methodology based upon the literature and validated by clinical experts was followed. The probability of severe illness originating from delayed utilization of health services, delayed prescription of neuraminidase inhibitors (NAIs) and inappropriate use of antibiotics was assessed. Ninety-nine patients were analyzed, and 16% developed severe illness. Most patients received NAIs and 85.9% received antibiotics. Inappropriate use of antibiotics was observed in 70.7% of cases. Early utilization of services increased the likelihood of non-severe illness (cumulative probability CP = 0.56). The highest cumulative probability of severe illness was observed when prescription of NAIs was delayed (CP = 0.19). Delayed prescription of NAIs and irrational use of antibiotics are critical decisions leading to unfavorable outcomes in patients suffering from influenza AH1N1. Copyright © 2017 IMSS. Published by Elsevier Inc. All rights reserved.
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
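One transform pair consistent with the relationship described above can be checked numerically: for an isotropic 2-D force with an exponential magnitude distribution, the Cartesian-component density is proportional to the modified Bessel function of the second kind, K0. The Monte Carlo sketch below is synthetic (not from the paper) and compares the empirical density with the analytic form away from the logarithmic divergence at zero.

```python
# Numerical check: exponential force magnitude + isotropic direction -> K0 component density.
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(5)
lam = 1.0
n = 2_000_000
magnitude = rng.exponential(1.0 / lam, n)        # exponential force-magnitude distribution
theta = rng.uniform(0.0, 2.0 * np.pi, n)         # isotropic direction
fx = magnitude * np.cos(theta)                   # one Cartesian force component

hist, edges = np.histogram(fx, bins=200, range=(-6, 6), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = (lam / np.pi) * k0(lam * np.abs(centers))

mask = np.abs(centers) > 0.5                     # stay away from the log-divergence at zero
print("max |empirical - analytic| density (|fx| > 0.5):",
      round(float(np.max(np.abs(hist[mask] - analytic[mask]))), 4))
```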
Prospect evaluation as a function of numeracy and probability denominator.
Millroth, Philip; Juslin, Peter
2015-05-01
This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Zahnle, Kevin J.; Catling, D. C.
2013-01-01
Volatile escape is the classic existential problem of planetary atmospheres. The problem has gained new currency now that we can study the cumulative effects of escape from extrasolar planets. Escape itself is likely to be a rapid process, relatively unlikely to be caught in the act, but the cumulative effects of escape, in particular the distinction between planets with and without atmospheres, should show up in the statistics of the new planets. The new planets make a moving target. It can be difficult to keep up, and every day the paper boy brings more. Of course most of these will be giant planets loosely resembling Saturn or Neptune, albeit hotter and nearer their stars, as big hot fast-orbiting exoplanets are the least exceedingly difficult to discover. But they are still planets, all in all, and although twenty years ago experts could prove on general principles that they did not exist, we have come round rather quickly, and they should be welcome now at LPSC. Here we will discuss the empirical division between planets with and without atmospheres. For most exoplanets the question of whether a planet has or has not an atmosphere is a fuzzy inference based on the planet's bulk density. A probably safe presumption is that a low density planet is one with abundant volatiles, in the general mold of Saturn or Neptune. On the other hand, a high-density, low-mass planet could be volatile-poor, in the general mold of Earth or Mercury. We will focus on planets, mostly seen in transit, for which both radius and mass are measured, as these are the planets with measured densities. More could be said: a lot of subtle recent work has been devoted to determining the composition of planets from equations of state or directly observing atmospheres in transit, but we will not go there. What interests us here is that, from the first, the transiting extrasolar planets appear to have fit into a pattern already seen in our own Solar System, as shown in Fig. 1. We first noticed this in 2004 when there were just two transiting exoplanets to consider. The trend was well-defined by late 2007. Figure 1 shows how matters stood in Dec 2012 with approximately 240 exoplanets. The figure shows that the boundary between planets with and without active volatiles - the cosmic shoreline, as it were - is both well-defined and follows a power law.
Surface slip during large Owens Valley earthquakes
Haddon, E.K.; Amos, C.B.; Zielke, O.; Jayko, Angela S.; Burgmann, R.
2016-01-01
The 1872 Owens Valley earthquake is the third largest known historical earthquake in California. Relatively sparse field data and a complex rupture trace, however, inhibited attempts to fully resolve the slip distribution and reconcile the total moment release. We present a new, comprehensive record of surface slip based on lidar and field investigation, documenting 162 new measurements of laterally and vertically displaced landforms for 1872 and prehistoric Owens Valley earthquakes. Our lidar analysis uses a newly developed analytical tool to measure fault slip based on cross-correlation of sublinear topographic features and to produce a uniquely shaped probability density function (PDF) for each measurement. Stacking PDFs along strike to form cumulative offset probability distribution plots (COPDs) highlights common values corresponding to single and multiple-event displacements. Lateral offsets for 1872 vary systematically from ∼1.0 to 6.0 m and average 3.3 ± 1.1 m (2σ). Vertical offsets are predominantly east-down between ∼0.1 and 2.4 m, with a mean of 0.8 ± 0.5 m. The average lateral-to-vertical ratio compiled at specific sites is ∼6:1. Summing displacements across subparallel, overlapping rupture traces implies a maximum of 7–11 m and net average of 4.4 ± 1.5 m, corresponding to a geologic Mw ∼7.5 for the 1872 event. We attribute progressively higher-offset lateral COPD peaks at 7.1 ± 2.0 m, 12.8 ± 1.5 m, and 16.6 ± 1.4 m to three earlier large surface ruptures. Evaluating cumulative displacements in context with previously dated landforms in Owens Valley suggests relatively modest rates of fault slip, averaging between ∼0.6 and 1.6 mm/yr (1σ) over the late Quaternary.
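A toy version of the COPD construction reads as follows: each offset measurement contributes its own PDF (Gaussians here for simplicity; the study derives measurement-specific PDF shapes from cross-correlation), and stacking the PDFs along strike highlights common offset values. The measurements in the sketch are invented.

```python
# Sketch: stack per-measurement offset PDFs into a cumulative offset probability distribution.
import numpy as np

offsets = np.array([3.1, 3.4, 3.0, 6.8, 7.3, 3.5, 12.5])   # m, lateral offsets (invented)
sigmas  = np.array([0.4, 0.5, 0.3, 0.8, 0.9, 0.4, 1.2])    # per-measurement 1-sigma (invented)

x = np.linspace(0.0, 20.0, 2001)
copd = sum(np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
           for m, s in zip(offsets, sigmas))

# Local maxima of the stacked curve suggest single- and multiple-event displacements
peaks = x[1:-1][(copd[1:-1] > copd[:-2]) & (copd[1:-1] > copd[2:])]
print("COPD peak locations (m):", np.round(peaks, 2))
```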
NASA Astrophysics Data System (ADS)
Liland, Kristian Hovde; Snipen, Lars
When a series of Bernoulli trials occur within a fixed time frame or limited space, it is often interesting to assess if the successful outcomes have occurred completely at random, or if they tend to group together. One example, in genetics, is detecting grouping of genes within a genome. Approximations of the distribution of successes are possible, but they become inaccurate for small sample sizes. In this article, we describe the exact distribution of time between random, non-overlapping successes in discrete time of fixed length. A complete description of the probability mass function, the cumulative distribution function, mean, variance and recurrence relation is included. We propose an associated test for the over-representation of short distances and illustrate the methodology through relevant examples. The theory is implemented in an R package including probability mass, cumulative distribution, quantile function, random number generator, simulation functions, and functions for testing.
The Tail Exponent for Stock Returns in Bursa Malaysia for 2003-2008
NASA Astrophysics Data System (ADS)
Rusli, N. H.; Gopir, G.; Usang, M. D.
2010-07-01
A developed discipline of econophysics has introduced the application of mathematical tools, usually applied to physical models, to the study of financial models. In this study, an analysis of the time series behavior of several blue chip and penny stock companies on the Main Market of Bursa Malaysia has been performed. The basic quantity used is the relative price change, also called the stock price return; the data are daily-sampled from the beginning of 2003 until the end of 2008, comprising 1555 recorded trading days. The aim of this paper is to investigate the exponent in the tails of the distribution of blue chip stock and penny stock financial returns over this six-year period. Using a standard regression method, it is found that the distribution exhibits double scaling on the log-log plot of the cumulative probability of the normalized returns; we therefore calculate α for small-scale as well as large-scale returns. Based on the results obtained, the probability density functions of the absolute stock price returns show power-law behavior, P(z) ~ z^(-α), with values lying inside and outside the Lévy stable regime, with values α > 2. All the results are discussed in detail.
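A hedged sketch of this kind of tail-exponent estimate: fit a straight line to the log-log complementary cumulative distribution of normalized absolute returns. The data below are synthetic Student-t returns rather than Bursa Malaysia prices; if the pdf behaves as z^(-α), the CCDF slope is -(α - 1).

```python
# Sketch: tail exponent by regression on the log-log complementary cumulative distribution.
import numpy as np

rng = np.random.default_rng(6)
returns = rng.standard_t(df=3, size=50_000)                 # synthetic heavy-tailed returns
z = np.abs((returns - returns.mean()) / returns.std())      # normalized absolute returns

z_sorted = np.sort(z)[::-1]                                  # descending
ccdf = np.arange(1, z_sorted.size + 1) / z_sorted.size       # empirical P(|Z| > z)

tail = z_sorted > np.quantile(z, 0.99)                       # large-scale tail only
slope, _ = np.polyfit(np.log(z_sorted[tail]), np.log(ccdf[tail]), 1)
print("estimated pdf exponent alpha ≈", round(1.0 - slope, 2))   # expect ≈ 4 for t(3) returns
```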
Gariepy, Aileen M; Creinin, Mitchell D; Smith, Kenneth J; Xu, Xiao
2014-08-01
To compare the expected probability of pregnancy after hysteroscopic versus laparoscopic sterilization based on available data using decision analysis. We developed an evidence-based Markov model to estimate the probability of pregnancy over 10 years after three different female sterilization procedures: hysteroscopic, laparoscopic silicone rubber band application and laparoscopic bipolar coagulation. Parameter estimates for procedure success, probability of completing follow-up testing and risk of pregnancy after different sterilization procedures were obtained from published sources. In the base case analysis at all points in time after the sterilization procedure, the initial and cumulative risk of pregnancy after sterilization is higher in women opting for hysteroscopic than either laparoscopic band or bipolar sterilization. The expected pregnancy rates per 1000 women at 1 year are 57, 7 and 3 for hysteroscopic sterilization, laparoscopic silicone rubber band application and laparoscopic bipolar coagulation, respectively. At 10 years, the cumulative pregnancy rates per 1000 women are 96, 24 and 30, respectively. Sensitivity analyses suggest that the three procedures would have an equivalent pregnancy risk of approximately 80 per 1000 women at 10 years if the probability of successful laparoscopic (band or bipolar) sterilization drops below 90% and successful coil placement on first hysteroscopic attempt increases to 98% or if the probability of undergoing a hysterosalpingogram increases to 100%. Based on available data, the expected population risk of pregnancy is higher after hysteroscopic than laparoscopic sterilization. Consistent with existing contraceptive classification, future characterization of hysteroscopic sterilization should distinguish "perfect" and "typical" use failure rates. Pregnancy probability at 1 year and over 10 years is expected to be higher in women having hysteroscopic as compared to laparoscopic sterilization. Copyright © 2014 Elsevier Inc. All rights reserved.
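The way annual risks accumulate into the 1-year and 10-year figures quoted above can be sketched with a simple survival product; the annual probabilities below are invented placeholders, not the Markov model's procedure-specific parameters.

```python
# Sketch: cumulative pregnancy probability from assumed annual risks.
import numpy as np

annual_risk = np.array([0.057] + [0.005] * 9)    # year-1 risk, then later-year risks (assumed)
cumulative = 1.0 - np.cumprod(1.0 - annual_risk)

print("per-1000 cumulative pregnancies, year 1 :", round(1000 * cumulative[0], 1))
print("per-1000 cumulative pregnancies, year 10:", round(1000 * cumulative[-1], 1))
```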
Diagnostic testing for coagulopathies in patients with ischemic stroke.
Bushnell, C D; Goldstein, L B
2000-12-01
Hypercoagulable states are a recognized, albeit uncommon, etiology of ischemic stroke. It is unclear how often the results of specialized coagulation tests affect management. Using data compiled from a systematic review of available studies, we employed quantitative methodology to assess the diagnostic yield of coagulation tests for identification of coagulopathies in ischemic stroke patients. We performed a MEDLINE search to identify controlled studies published during 1966-1999 that reported the prevalence of deficiencies of protein C, protein S, antithrombin III, plasminogen, activated protein C resistance (APCR)/factor V Leiden mutation (FVL), anticardiolipin antibodies (ACL), or lupus anticoagulant (LA) in patients with ischemic stroke. The cumulative prevalence rates (pretest probabilities) and positive likelihood ratios for all studies, and for those including only patients aged ≤50 years, were used to calculate posttest probabilities for each coagulopathy, reflecting diagnostic yield. The cumulative pretest probabilities of coagulation defects in ischemic stroke patients are as follows: LA, 3% (8% for those aged ≤50 years); ACL, 17% (21% for those aged ≤50 years); APCR/FVL, 7% (11% for those aged ≤50 years); and prothrombin mutation, 4.5% (5.7% for those aged ≤50 years). The posttest probabilities of ACL, LA, and APCR increased with increasing pretest probability, the specificity of the tests, and features of the patients' history and clinical presentation. The pretest probabilities of coagulation defects in ischemic stroke patients are low. The diagnostic yield of coagulation tests may be increased by using tests with the highest specificities and by targeting patients with clinical or historical features that increase pretest probability. Consideration of these data might lead to more rational ordering of tests and an associated cost savings.
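The pretest-to-posttest update used in this kind of analysis follows the odds form of Bayes' rule with the positive likelihood ratio; a worked illustration (the likelihood ratio value is assumed, not taken from the review) is given below.

```python
# Posttest probability from pretest probability and a positive likelihood ratio.
def posttest_probability(pretest_p, likelihood_ratio):
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# e.g., cumulative pretest probability of APCR/FVL of 7% (11% if aged <= 50),
# with an assumed positive likelihood ratio of 10 for the screening assay:
for pretest in (0.07, 0.11):
    print(f"pretest {pretest:.2f} -> posttest {posttest_probability(pretest, 10.0):.2f}")
```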
Pötschger, Ulrike; Heinzl, Harald; Valsecchi, Maria Grazia; Mittlböck, Martina
2018-01-19
Investigating the impact of a time-dependent intervention on the probability of long-term survival is statistically challenging. A typical example is stem-cell transplantation performed after successful donor identification from registered donors. Here, a suggested simple analysis based on the exogenous donor availability status according to registered donors would allow the estimation and comparison of survival probabilities. As donor search is usually ceased after a patient's event, donor availability status is incompletely observed, so this simple comparison is not possible and the waiting time to donor identification needs to be addressed in the analysis to avoid bias. It is methodologically unclear how to directly address cumulative long-term treatment effects without relying on proportional hazards while avoiding waiting-time bias. The pseudo-value regression technique is able to handle the first two issues; a novel generalisation of this technique also avoids waiting-time bias. Inverse-probability-of-censoring weighting is used to account for the partly unobserved exogenous covariate donor availability. Simulation studies demonstrate unbiasedness and satisfactory coverage probabilities of the new method. A real data example demonstrates that study results based on generalised pseudo-values have a clear medical interpretation which supports the clinical decision making process. The proposed generalisation of the pseudo-value regression technique enables the comparison of survival probabilities between two independent groups where group membership becomes known over time and remains partly unknown. Hence, cumulative long-term treatment effects are directly addressed without relying on proportional hazards while avoiding waiting-time bias.
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
Detection of non-Gaussian fluctuations in a quantum point contact.
Gershon, G; Bomze, Yu; Sukhorukov, E V; Reznikov, M
2008-07-04
An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.
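Extracting the first three cumulants from a measured charge distribution amounts to computing the mean, the variance, and the third central moment. A minimal sketch on synthetic Poisson counts (the Poissonian reference against which deviations are measured) follows; the data are not experimental.

```python
# Sketch: first three cumulants of a sampled distribution of transmitted charge.
import numpy as np

rng = np.random.default_rng(7)
charge_counts = rng.poisson(lam=50.0, size=1_000_000)    # stand-in for measured charge per window

k1 = charge_counts.mean()
k2 = charge_counts.var()
k3 = np.mean((charge_counts - k1) ** 3)                  # third cumulant = third central moment

print(f"k1={k1:.2f}  k2={k2:.2f}  k3={k3:.2f}")          # all ≈ 50 for a Poisson process
```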
Detection of Non-Gaussian Fluctuations in a Quantum Point Contact
NASA Astrophysics Data System (ADS)
Gershon, G.; Bomze, Yu.; Sukhorukov, E. V.; Reznikov, M.
2008-07-01
An experimental study of current fluctuations through a tunable transmission barrier, a quantum point contact, is reported. We measure the probability distribution function of transmitted charge with precision sufficient to extract the first three cumulants. To obtain the intrinsic quantities, corresponding to voltage-biased barrier, we employ a procedure that accounts for the response of the external circuit and the amplifier. The third cumulant, obtained with a high precision, is found to agree with the prediction for the statistics of transport in the non-Poissonian regime.
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the MCMC-based Bayesian approach, marginal pdfs, means, variances or covariances are obtained independently of one another. Third, the probability density and cumulative distribution functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.
Probability of failure prediction for step-stress fatigue under sine or random stress
NASA Technical Reports Server (NTRS)
Lambert, R. G.
1979-01-01
A previously proposed cumulative fatigue damage law is extended to predict the probability of failure or fatigue life for structural materials with S-N fatigue curves represented as a scatterband of failure points. The proposed law applies to structures subjected to sinusoidal or random stresses and includes the effect of initial crack (i.e., flaw) sizes. The corrected cycle ratio damage function is shown to have physical significance.
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
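A minimal numerical illustration of the positivity problem described above: a Gram-Charlier A series truncated after the skewness and kurtosis corrections can dip below zero. The skewness and excess-kurtosis values are arbitrary illustrations, not taken from the paper.

```python
# Sketch: a Gram-Charlier A series truncated after the third- and fourth-order
# terms can go negative, so the truncated approximation is not a valid pdf.
import numpy as np
from scipy.stats import norm

def gram_charlier(x, skew, ex_kurt):
    he3 = x**3 - 3 * x                 # probabilists' Hermite polynomial He3
    he4 = x**4 - 6 * x**2 + 3          # He4
    return norm.pdf(x) * (1 + skew / 6 * he3 + ex_kurt / 24 * he4)

x = np.linspace(-5, 5, 1001)
f = gram_charlier(x, skew=1.2, ex_kurt=0.5)
print("minimum of truncated density:", f.min())   # negative -> not manifestly positive
```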
The relationship between breast density and bone mineral density in postmenopausal women.
Buist, Diana S M; Anderson, Melissa L; Taplin, Stephen H; LaCroix, Andrea Z
2004-11-01
It is not well understood whether breast density is a marker of cumulative exposure to estrogen or a marker of recent exposure to estrogen. The authors examined the relationship between bone mineral density (BMD; a marker of lifetime estrogen exposure) and breast density. The authors conducted a cross-sectional analysis among 1800 postmenopausal women aged ≥54 years. BMD data were taken from two population-based studies conducted in 1992-1993 (n = 1055) and in 1998-1999 (n = 753). The authors linked BMD data with breast density information collected as part of a mammography screening program. They used linear regression to evaluate the density relationship, adjusted for age, hormone therapy use, body mass index (BMI), and reproductive covariates. There was a small but significant negative association between BMD and breast density. The negative correlation between density measures was not explained by hormone therapy or age, and BMI was the only covariate that notably influenced the relationship. Stratification by BMI only revealed the negative correlation between bone and breast densities in women with normal BMI. There was no relationship in overweight or obese women. The same relationship was seen for all women who had never used hormone therapy, but it was not significant once stratified by BMI. BMD and breast density were not positively associated although both are independently associated with estrogen exposure. It is likely that unique organ responses obscure the relationship between the two as indicators of cumulative estrogen exposure.
Klein, Kim; Kaspers, Gertjan; Harrison, Christine J.; Beverloo, H. Berna; Reedijk, Ardine; Bongers, Mathilda; Cloos, Jacqueline; Pession, Andrea; Reinhardt, Dirk; Zimmerman, Martin; Creutzig, Ursula; Dworzak, Michael; Alonzo, Todd; Johnston, Donna; Hirsch, Betsy; Zapotocky, Michal; De Moerloose, Barbara; Fynn, Alcira; Lee, Vincent; Taga, Takashi; Tawa, Akio; Auvrignon, Anne; Zeller, Bernward; Forestier, Erik; Salgado, Carmen; Balwierz, Walentyna; Popa, Alexander; Rubnitz, Jeffrey; Raimondi, Susana; Gibson, Brenda
2015-01-01
Purpose This retrospective cohort study aimed to determine the predictive relevance of clinical characteristics, additional cytogenetic aberrations, and cKIT and RAS mutations, as well as to evaluate whether specific treatment elements were associated with outcomes in pediatric t(8;21)-positive patients with acute myeloid leukemia (AML). Patients and Methods Karyotypes of 916 pediatric patients with t(8;21)-AML were reviewed for the presence of additional cytogenetic aberrations, and 228 samples were screened for presence of cKIT and RAS mutations. Multivariable regression models were used to assess the relevance of anthracyclines, cytarabine, and etoposide during induction and overall treatment. End points were the probability of achieving complete remission, cumulative incidence of relapse (CIR), probability of event-free survival, and probability of overall survival. Results Of 838 patients included in final analyses, 92% achieved complete remission. The 5-year overall survival, event-free survival, and CIR were 74%, 58%, and 26%, respectively. cKIT mutations and RAS mutations were not significantly associated with outcome. Patients with deletions of chromosome arm 9q [del(9q); n = 104] had a lower probability of complete remission (P = .01). Gain of chromosome 4 (+4; n = 21) was associated with inferior CIR and survival (P < .01). Anthracycline doses greater than 150 mg/m2 and etoposide doses greater than 500 mg/m2 in the first induction course and high-dose cytarabine 3 g/m2 during induction were associated with better outcomes on various end points. Cumulative doses of cytarabine greater than 30 g/m2 and etoposide greater than 1,500 mg/m2 were associated with lower CIR rates and better probability of event-free survival. Conclusion Pediatric patients with t(8;21)-AML and additional del(9q) or additional +4 might not be considered at good risk. Patients with t(8;21)-AML likely benefit from protocols that have high doses of anthracyclines, etoposide, and cytarabine during induction, as well as from protocols comprising cumulative high doses of cytarabine and etoposide. PMID:26573082
Liévanos, Raoul S
2018-04-16
The California Community Environmental Health Screening Tool (CalEnviroScreen) advances research and policy pertaining to environmental health vulnerability. However, CalEnviroScreen departs from its historical foundations and comparable screening tools by no longer considering racial status as an indicator of environmental health vulnerability and predictor of cumulative pollution burden. This study used conceptual frameworks and analytical techniques from environmental health and inequality literature to address the limitations of CalEnviroScreen, especially its inattention to race-based environmental health vulnerabilities. It developed an adjusted measure of cumulative pollution burden from the CalEnviroScreen 2.0 data that facilitates multivariate analyses of the effect of neighborhood racial composition on cumulative pollution burden, net of other indicators of population vulnerability, traffic density, industrial zoning, and local and regional clustering of pollution burden. Principal component analyses produced three new measures of population vulnerability, including Latina/o cumulative disadvantage that represents the spatial concentration of Latinas/os, economic disadvantage, limited English-speaking ability, and health vulnerability. Spatial error regression analyses demonstrated that concentrations of Latinas/os, followed by Latina/o cumulative disadvantage, are the strongest demographic determinants of adjusted cumulative pollution burden. Findings have implications for research and policy pertaining to cumulative impacts and race-based environmental health vulnerabilities within and beyond California.
Palomba, S; Zupi, E; Russo, T; Falbo, A; Del Negro, S; Manguso, F; Marconi, D; Tolino, A; Zullo, F
2007-02-01
During the childbearing years, the standard fertility-sparing treatment for bilateral borderline ovarian tumours (BOTs) is unilateral oophorectomy plus contralateral cystectomy. The aim of the present study was to compare the effects of two laparoscopic fertility-sparing surgical procedures for the treatment of bilateral BOTs on recurrence and fertility in young women who desire to conceive as soon as possible. Thirty-two women affected by bilateral early-stage BOTs who desired to conceive were randomized to receive bilateral cystectomy (experimental group, n=15) or oophorectomy plus contralateral cystectomy (control group, n=17). At the first recurrence after childbearing completion, each patient was treated with non-conservative standard treatment. Recurrences and reproductive events were recorded. After a follow-up period of 81 months (interquartile range 19 months; range 60-96 months), the cumulative pregnancy rate (CPR) (14/15 versus 9/17; P=0.003) and the cumulative probability of first pregnancy (P=0.011) were significantly higher in the experimental than in the control group. No significant (P=0.358) difference between groups was detected in the cumulative probability of first recurrence. Laparoscopic bilateral cystectomy followed by non-conservative treatment performed at the first recurrence after childbearing completion is an effective surgical strategy for patients with bilateral early-stage BOTs who desire to conceive as soon as possible.
NEWTONP - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, NEWTONP, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557), can be used independently of one another. NEWTONP can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. NEWTONP calculates the probability p required to yield a given system reliability V for a k-out-of-n system. It can also be used to determine the Clopper-Pearson confidence limits (either one-sided or two-sided) for the parameter p of a Bernoulli distribution. NEWTONP can determine Bayesian probability limits for a proportion (if the beta prior has positive integer parameters). It can determine the percentiles of incomplete beta distributions with positive integer parameters. It can also determine the percentiles of F distributions and the median plotting positions in probability plotting. NEWTONP is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. NEWTONP is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The NEWTONP program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. NEWTONP was developed in 1988.
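A minimal sketch of the core calculation NEWTONP performs, written here in Python rather than the program's original C: solve for the probability p that gives a k-out-of-n system reliability V, using Newton's method on the regularized incomplete beta function, with a bisection safeguard to keep the iterate inside (0, 1).

```python
# Sketch (not the original C source): find p such that a k-out-of-n system has
# reliability V, using R(p) = P(X >= k), X ~ Binomial(n, p) = I_p(k, n - k + 1).
from scipy.special import betainc, beta

def newtonp(k, n, V, tol=1e-12, max_iter=200):
    lo, hi = 0.0, 1.0            # R(p) is increasing in p, so the root stays bracketed
    p = 0.5
    for i in range(max_iter):
        f = betainc(k, n - k + 1, p) - V
        if abs(f) < tol:
            return p, i
        if f < 0:
            lo = p
        else:
            hi = p
        fprime = p**(k - 1) * (1 - p)**(n - k) / beta(k, n - k + 1)
        p_new = p - f / fprime
        if not (lo < p_new < hi):    # fall back to bisection if Newton leaves the bracket
            p_new = 0.5 * (lo + hi)
        p = p_new
    return p, max_iter

p, iters = newtonp(k=4, n=5, V=0.99)     # 4-out-of-5 system, 99% reliability
print(p, iters)
```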
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
Effect of Individual Component Life Distribution on Engine Life Prediction
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.; Hendricks, Robert C.; Soditus, Sherry M.
2003-01-01
The effect of individual engine component life distributions on engine life prediction was determined. A Weibull-based life and reliability analysis of the NASA Energy Efficient Engine was conducted. The engine's life at a 95 and 99.9 percent probability of survival was determined based upon the engine manufacturer's original life calculations and assumed values of each of the component's cumulative life distributions as represented by a Weibull slope. The lives of the high-pressure turbine (HPT) disks and blades were also evaluated individually and as a system in a similar manner. Knowing the statistical cumulative distribution of each engine component with reasonable engineering certainty is a condition precedent to predicting the life and reliability of an entire engine. The life of a system at a given reliability will be less than the lowest-lived component in the system at the same reliability (probability of survival). Where Weibull slopes of all the engine components are equal, the Weibull slope had a minimal effect on engine L0.1 life prediction. However, at a probability of survival of 95 percent (L5 life), life decreased with increasing Weibull slope.
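A hedged sketch of the kind of calculation described above, with made-up component lives rather than the NASA Energy Efficient Engine data: system reliability is the product of the component Weibull survival functions, and the system life at a given reliability follows from a one-dimensional root solve.

```python
# Sketch: combine component Weibull life distributions into a system survival
# curve and solve for the system L5 and L0.1 lives. Component parameters are
# illustrative placeholders, not the engine data from the paper.
import numpy as np
from scipy.optimize import brentq

# (characteristic life eta [h], Weibull slope beta) for each component
components = [(60000.0, 1.5), (45000.0, 2.0), (80000.0, 3.0)]

def system_reliability(t):
    return np.prod([np.exp(-(t / eta)**beta) for eta, beta in components])

def system_life(reliability):
    # solve R_system(t) = reliability for t
    return brentq(lambda t: system_reliability(t) - reliability, 1e-6, 1e7)

print("system L5   life:", system_life(0.95))
print("system L0.1 life:", system_life(0.999))
```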
Representation of layer-counted proxy records as probability densities on error-free time axes
NASA Astrophysics Data System (ADS)
Boers, Niklas; Goswami, Bedartha; Ghil, Michael
2016-04-01
Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties affect more and more a statistically sound estimation of proxy values. This difficulty implies that great care has to be exercised when comparing and in particular aligning specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the relative changes, i.e. in the increments of the proxy values. In such cases, only the relative (non-cumulative) counting errors matter. For the example of the NGRIP records, we show that a precise estimation of these relative changes is in fact possible. References: [1] Goswami et al., Nonlin. Processes Geophys. (2014) [2] Svensson et al., Clim. Past (2008)
Urban, Jillian E.; Davenport, Elizabeth M.; Golman, Adam J.; Maldjian, Joseph A.; Whitlow, Christopher T.; Powers, Alexander K.; Stitzel, Joel D.
2015-01-01
Sports-related concussion is the most common athletic head injury with football having the highest rate among high school athletes. Traditionally, research on the biomechanics of football-related head impact has been focused at the collegiate level. Less research has been performed at the high school level, despite the incidence of concussion among high school football players. The objective of this study is twofold: to quantify the head impact exposure in high school football, and to develop a cumulative impact analysis method. Head impact exposure was measured by instrumenting the helmets of 40 high school football players with helmet-mounted accelerometer arrays to measure linear and rotational acceleration. A total of 16,502 head impacts were collected over the course of the season. Biomechanical data were analyzed by team and by player. The median impact for each player ranged from 15.2 to 27.0 g with an average value of 21.7 (±2.4) g. The 95th percentile impact for each player ranged from 38.8 to 72.9 g with an average value of 56.4 (±10.5) g. Next, an impact exposure metric utilizing concussion injury risk curves was created to quantify cumulative exposure for each participating player over the course of the season. Impacts were weighted according to the associated risk due to linear acceleration and rotational acceleration alone, as well as the combined probability (CP) of injury associated with both. These risks were summed over the course of a season to generate risk-weighted cumulative exposure. The impact frequency was found to be greater during games compared to practices with an average number of impacts per session of 15.5 and 9.4, respectively. However, the median cumulative risk-weighted exposure based on combined probability was found to be greater for practices vs. games. These data will provide a metric that may be used to better understand the cumulative effects of repetitive head impacts, injury mechanisms, and head impact exposure of athletes in football. PMID:23864337
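A sketch of the risk-weighting idea described above. The logistic risk-curve coefficients are hypothetical placeholders, not the injury-risk curves used in the study; they only illustrate how per-impact risks are combined and summed into a seasonal exposure metric.

```python
# Sketch of a risk-weighted cumulative exposure metric. The logistic coefficients
# below are hypothetical placeholders, not the study's injury-risk curves.
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def combined_probability(lin_acc_g, rot_acc_rads2,
                         b0=-10.0, b_lin=0.05, b_rot=0.001):
    """Hypothetical combined probability of concussion for one impact."""
    return logistic(b0 + b_lin * lin_acc_g + b_rot * rot_acc_rads2)

def risk_weighted_cumulative_exposure(impacts):
    """Sum per-impact risks over a season; `impacts` is [(linear g, rotational rad/s^2), ...]."""
    return sum(combined_probability(a, r) for a, r in impacts)

season = [(21.7, 1100.0), (56.4, 2400.0), (15.2, 900.0)]   # illustrative impacts
print(risk_weighted_cumulative_exposure(season))
```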
Goerg, Georg M.
2015-01-01
I present a parametric, bijective transformation to generate heavy-tailed versions of arbitrary random variables. The tail behavior of this heavy-tail Lambert W × F_X random variable depends on a tail parameter δ ≥ 0: for δ = 0, Y ≡ X; for δ > 0, Y has heavier tails than X. For X being Gaussian it reduces to Tukey's h distribution. The Lambert W function provides an explicit inverse transformation, which can thus remove heavy tails from observed data. It also provides closed-form expressions for the cumulative distribution function (cdf) and probability density function (pdf). As a special case, these yield analytic expressions for Tukey's h pdf and cdf. Parameters can be estimated by maximum likelihood, and applications to S&P 500 log-returns demonstrate the usefulness of the presented methodology. The R package LambertW implements most of the introduced methodology and is publicly available on CRAN. PMID:26380372
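A sketch of the Gaussian (Tukey h) special case of the transformation described above, assuming the standard form y = z·exp(δz²/2); the inverse uses the Lambert W function to "gaussianize" heavy-tailed data. The δ value is illustrative; the authors' R package LambertW provides the full methodology.

```python
# Sketch of the Gaussian (Tukey h) special case: y = z * exp(delta * z^2 / 2),
# inverted with the Lambert W function to remove the heavy tails again.
import numpy as np
from scipy.special import lambertw

def heavy_tail(z, delta):
    """Forward transform: make a standard normal draw heavier-tailed."""
    return z * np.exp(delta * z**2 / 2.0)

def gaussianize(y, delta):
    """Inverse transform: map observed y back to the latent (lighter-tailed) z."""
    z2 = np.real(lambertw(delta * y**2)) / delta
    return np.sign(y) * np.sqrt(z2)

rng = np.random.default_rng(1)
z = rng.standard_normal(100000)
y = heavy_tail(z, delta=0.2)
z_back = gaussianize(y, delta=0.2)
print(np.allclose(z, z_back))     # the transform is bijective, so z is recovered
```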
Comparison of cumulant expansion and q-space imaging estimates for diffusional kurtosis in brain.
Mohanty, Vaibhav; McKinnon, Emilie T; Helpern, Joseph A; Jensen, Jens H
2018-05-01
To compare estimates for the diffusional kurtosis in brain as obtained from a cumulant expansion (CE) of the diffusion MRI (dMRI) signal and from q-space (QS) imaging. For the CE estimates of the kurtosis, the CE was truncated to quadratic order in the b-value and fit to the dMRI signal for b-values from 0 up to 2000 s/mm². For the QS estimates, b-values ranging from 0 up to 10,000 s/mm² were used to determine the diffusion displacement probability density function (dPDF) via Stejskal's formula. The kurtosis was then calculated directly from the second and fourth order moments of the dPDF. These two approximations were studied for in vivo human data obtained on a 3T MRI scanner using three orthogonal diffusion encoding directions. The whole brain mean values for the CE and QS kurtosis estimates differed by 16% or less in each of the considered diffusion encoding directions, and the Pearson correlation coefficients all exceeded 0.85. Nonetheless, there were large discrepancies in many voxels, particularly those with either very high or very low kurtoses relative to the mean values. Estimates of the diffusional kurtosis in brain obtained using CE and QS approximations are strongly correlated, suggesting that they encode similar information. However, for the choice of b-values employed here, there may be substantial differences, depending on the properties of the diffusion microenvironment in each voxel. Copyright © 2018 Elsevier Inc. All rights reserved.
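A minimal sketch of the QS-style estimate described above: the diffusional kurtosis computed from the second and fourth moments of a displacement probability density defined on a grid (K = M4/M2² − 3, zero for a Gaussian). The density here is synthetic, not reconstructed dMRI data.

```python
# Sketch: diffusional kurtosis from the moments of a displacement probability
# density given on a grid. The Gaussian test density should give K ~ 0.
import numpy as np
from scipy.integrate import trapezoid

def kurtosis_from_dpdf(x, p):
    p = p / trapezoid(p, x)                # normalize the density
    m2 = trapezoid(x**2 * p, x)
    m4 = trapezoid(x**4 * p, x)
    return m4 / m2**2 - 3.0

x = np.linspace(-40e-6, 40e-6, 4001)       # displacement grid [m]
sigma = 8e-6
p_gauss = np.exp(-x**2 / (2 * sigma**2))
print(kurtosis_from_dpdf(x, p_gauss))      # approximately zero for a Gaussian dPDF
```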
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.
NASA Astrophysics Data System (ADS)
Melwani Daswani, M.; Kite, E. S.
2017-09-01
Chloride-bearing deposits on Mars record high-elevation lakes during the waning stages of Mars' wet era (mid-Noachian to late Hesperian). The water source pathways, seasonality, salinity, depth, lifetime, and paleoclimatic drivers of these widespread lakes are all unknown. Here we combine reaction-transport modeling, orbital spectroscopy, and new volume estimates from high-resolution digital terrain models, in order to constrain the hydrologic boundary conditions for forming the chlorides. Considering a T = 0°C system, we find that (1) individual lakes were >100 m deep and lasted decades or longer; (2) if volcanic degassing was the source of chlorine, then the water-to-rock ratio or the total water volume were probably low, consistent with brief excursions above the melting point and/or arid climate; (3) if the chlorine source was igneous chlorapatite, then Cl-leaching events would require a (cumulative) time of >10 years at the melting point; and (4) Cl masses, divided by catchment area, give column densities 0.1-50 kg Cl/m2, and these column densities bracket the expected chlorapatite-Cl content for a seasonally warm active layer. Deep groundwater was not required. Taken together, our results are consistent with Mars having a usually cold, horizontally segregated hydrosphere by the time chlorides formed.
Modified Spectral Fatigue Methods for S-N Curves With MIL-HDBK-5J Coefficients
NASA Technical Reports Server (NTRS)
Irvine, Tom; Larsen, Curtis
2016-01-01
The rainflow method is used for counting fatigue cycles from a stress response time history, where the fatigue cycles are stress-reversals. The rainflow method allows the application of Palmgren-Miner's rule in order to assess the fatigue life of a structure subject to complex loading. The fatigue damage may also be calculated from a stress response power spectral density (PSD) using the semi-empirical Dirlik, Single Moment, Zhao-Baker and other spectral methods. These methods effectively assume that the PSD has a corresponding time history which is stationary with a normal distribution. This paper shows how the probability density function for rainflow stress cycles can be extracted from each of the spectral methods. This extraction allows for the application of the MIL-HDBK-5J fatigue coefficients in the cumulative damage summation. A numerical example is given in this paper for the stress response of a beam undergoing random base excitation, where the excitation is applied separately by a time history and by its corresponding PSD. The fatigue calculation is performed in the time domain, as well as in the frequency domain via the modified spectral methods. The result comparison shows that the modified spectral methods give comparable results to the time domain rainflow counting method.
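A sketch of the Palmgren-Miner summation step common to both the time-domain and spectral routes described above, assuming a power-law S-N curve N(S) = A·S⁻ᵇ; the coefficients are placeholders, not MIL-HDBK-5J values.

```python
# Sketch: Palmgren-Miner damage summation over rainflow stress-cycle bins with an
# S-N curve N(S) = A * S**(-b). A and b are placeholders, not MIL-HDBK-5J values.
def miner_damage(cycle_bins, A=1.0e20, b=6.0):
    """cycle_bins: list of (stress_amplitude, cycle_count) pairs from rainflow counting."""
    damage = 0.0
    for stress, count in cycle_bins:
        cycles_to_failure = A * stress**(-b)
        damage += count / cycles_to_failure
    return damage          # failure is predicted when the damage sum reaches ~1

# Illustrative rainflow histogram (stress amplitude in MPa, cycle counts)
bins = [(50.0, 1.0e5), (100.0, 2.0e4), (200.0, 5.0e2)]
print(miner_damage(bins))
```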
Compositional evolution of the zoned calcalkaline magma chamber of Mount Mazama, Crater Lake, Oregon
Bacon, C.R.; Druitt, T.H.
1988-01-01
The climactic eruption of Mount Mazama has long been recognized as a classic example of rapid eruption of a substantial fraction of a zoned magma body. Increased knowledge of eruptive history and new chemical analyses of ~350 whole-rock and glass samples of the climactic ejecta, preclimactic rhyodacite flows and their inclusions, postcaldera lavas, and lavas of nearby monogenetic vents are used here to infer processes of chemical evolution of this late Pleistocene-Holocene magmatic system. The 6845±50 BP climactic eruption vented ~50 km³ of magma to form: (1) rhyodacite fall deposit; (2) welded rhyodacite ignimbrite; and (3) lithic breccia and zoned ignimbrite, these emplaced during collapse of Crater Lake caldera. Climactic ejecta were dominantly homogeneous rhyodacite (70.4±0.3% SiO2), followed by subordinate andesite and cumulate scoriae (48-61% SiO2). The gap in whole-rock composition reflects mainly a step in crystal content because glass compositions are virtually continuous. Two types of scoriae are distinguished by different LREE, Rb, Th, and Zr, but principally by a twofold contrast in Sr content: high-Sr (HSr) and low-Sr (LSr) scoriae. HSr scoriae were erupted first. Trace element abundances indicate that HSr and LSr scoriae had different calcalkaline andesite parents; basalt was parental to some mafic cumulate scoriae. Parental magma compositions reconstructed from scoria whole-rock and glass data are similar to those of inclusions in preclimactic rhyodacites and of aphyric lavas of nearby monogenetic vents. Preclimactic rhyodacite flows and their magmatic inclusions give insight into evolution of the climactic chamber. Evolved rhyodacite flows containing LSr andesite inclusions were emplaced between ~30,000 and ~25,000 BP. At 7015±45 BP, the Llao Rock vent produced a zoned rhyodacite pumice fall, then rhyodacite lava with HSr andesite inclusions. The Cleetwood rhyodacite flow, emplaced immediately before the climactic eruption and compositionally identical to climactic rhyodacite (volatile-free), contains different HSr inclusions from Llao Rock. The change from LSr to HSr inclusions indicates replenishment of the chamber with andesite magma, perhaps several times, in the latest Pleistocene to early Holocene. Modeling calculations and whole-rock-glass relations suggest that: (1) magmas were derived mainly by crystallization differentiation of andesite liquid; (2) evolved preclimactic rhyodacite probably was derived from LSr andesite; (3) rhyodacites contain a minor component of partial melt from wall rocks; and (4) climactic and compositionally similar rhyodacites probably formed by mixing of evolved rhyodacite with HSr derivative liquid(s) after replenishment of the chamber with HSr andesite magma. Density considerations permit a model for growth and evolution of the chamber in which andesite recharge magma ponded repeatedly between cumulates and rhyodacite magma. Convective cooling of this andesite resulted in rapid crystallization and upward escape of buoyant derivative liquid which mixed with overlying, convecting rhyodacite. The evolved rhyodacites were erupted early in the chamber's history and/or near its margins. Postcaldera andesite lavas may be hybrids composed of LSr cumulates mixed with remnant climactic rhyodacite. Younger postcaldera rhyodacite probably formed by fractionation of similar andesite and assimilation of partial melts of wall rocks.
Uniformity of climactic rhyodacite suggests homogeneous silicic ejecta from other volcanoes resulted from similar replenishment-driven convective mixing. Calcalkaline pluton compositions and their internal zonation can be interpreted in terms of the Mazama system frozen at various times in its history. © 1988 Springer-Verlag.
Cumulative hazard: The case of nuisance flooding
NASA Astrophysics Data System (ADS)
Moftakhari, Hamed R.; AghaKouchak, Amir; Sanders, Brett F.; Matthew, Richard A.
2017-02-01
The cumulative cost of frequent events (e.g., nuisance floods) over time may exceed the costs of the extreme but infrequent events for which societies typically prepare. Here we analyze the likelihood of exceedances above mean higher high water and the corresponding property value exposure for minor, major, and extreme coastal floods. Our results suggest that, in response to sea level rise, nuisance flooding (NF) could generate property value exposure comparable to, or larger than, extreme events. Determining whether (and when) low cost, nuisance incidents aggregate into high cost impacts and deciding when to invest in preventive measures are among the most difficult decisions for policymakers. It would be unfortunate if efforts to protect societies from extreme events (e.g., 0.01 annual probability) left them exposed to a cumulative hazard with enormous costs. We propose a Cumulative Hazard Index (CHI) as a tool for framing the future cumulative impact of low cost incidents relative to infrequent extreme events. CHI suggests that in New York, NY, Washington, DC, Miami, FL, San Francisco, CA, and Seattle, WA, a careful consideration of socioeconomic impacts of NF for prioritization is crucial for sustainable coastal flood risk management.
Estimating loblolly pine size-density trajectories across a range of planting densities
Curtis L. VanderSchaaf; Harold E. Burkhart
2013-01-01
Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...
ERIC Educational Resources Information Center
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…
NASA Astrophysics Data System (ADS)
Scrimgeour, Garry J.; Hvenegaard, Paul J.; Tchir, John
2008-12-01
We evaluated the cumulative effects of land use disturbance resulting from forest harvesting, and exploration and extraction of oil and gas resources on the occurrence and structure of stream fish assemblages in the Kakwa and Simonette watersheds in Alberta, Canada. Logistic regression models showed that the occurrence of numerically dominant species in both watersheds was related to two metrics defining industrial activity (i.e., percent disturbance and road density), in addition to stream wetted width, elevation, reach slope, and percent fines. Occurrences of bull trout, slimy sculpin, and white sucker were negatively related to percent disturbance and that of Arctic grayling, and mountain whitefish were positively related to percent disturbance and road density. Assessments of individual sites showed that 76% of the 74 and 46 test sites in the Kakwa and Simonette watersheds were possibly impaired or impaired. Impaired sites in the Kakwa Watershed supported lower densities of bull trout, mountain whitefish, and rainbow trout, but higher densities of Arctic grayling compared to appropriate reference sites. Impaired sites in the Simonette Watershed supported lower densities of bull trout, but higher densities of lake chub compared to reference sites. Our data suggest that current levels of land use disturbance alters the occurrence and structure of stream fish assemblages.
Sung, Joohon; Song, Yun-Mi; Stone, Jennifer; Lee, Kayoung
2011-09-01
Mammographic density is one of the strong risk factors for breast cancer. A potential mechanism for this association is that cumulative exposure to mammographic density may reflect cumulative exposure to hormones that stimulate cell division in breast stroma and epithelium, which may have corresponding effects on breast cancer development. Bone mineral density (BMD), a marker of lifetime estrogen exposure, has been found to be associated with breast cancer. We examined the association between BMD and mammographic density in a Korean population. Study subjects were 730 Korean women selected from the Healthy Twin study. BMD (g/cm(2)) was measured with dual-energy X-ray absorptiometry. Mammographic density was measured from digital mammograms using a computer-assisted thresholding method. Linear mixed model considering familial correlations and a wide range of covariates was used for analyses. Quantitative genetic analysis was completed using SOLAR. In premenopausal women, positive associations existed between absolute dense area and BMD at ribs, pelvis, and legs, and between percent dense area and BMD at pelvis and legs. However, in postmenopausal women, there was no association between BMD at any site and mammographic density measures. An evaluation of additive genetic cross-trait correlation showed that absolute dense area had a weak-positive additive genetic cross-trait correlation with BMD at ribs and spines after full adjustment of covariates. This finding suggests that the association between mammographic density and breast cancer could, at least in part, be attributable to an estrogen-related hormonal mechanism.
Kawamura, Etsushi; Habu, Daiki; Hayashi, Takehiro; Oe, Ai; Kotani, Jin; Ishizu, Hirotaka; Torii, Kenji; Kawabe, Joji; Fukushima, Wakaba; Tanaka, Takashi; Nishiguchi, Shuhei; Shiomi, Susumu
2005-01-01
AIM: To examine the correlation between the porto-systemic hypertension evaluated by portal shunt index (PSI) and life-threatening complications, including hepatocellular carcinoma (HCC), liver failure (Child-Pugh stage progression), and esophagogastric varices. METHODS: Two hundred and twelve consecutive subjects with HCV-related cirrhosis (LC-C) underwent per-rectal portal scintigraphy. They were allocated into three groups according to their PSI: group I, PSI ≤ 10%; group II, 10%
Van Cauwenberg, Jelle; Clarys, Peter; De Bourdeaudhuij, Ilse; Van Holle, Veerle; Verté, Dominique; De Witte, Nico; De Donder, Liesbeth; Buffel, Tine; Dury, Sarah; Deforche, Benedicte
2013-08-14
The physical environment may play a crucial role in promoting older adults' walking for transportation. However, previous studies on relationships between the physical environment and older adults' physical activity behaviors have reported inconsistent findings. A possible explanation for these inconsistencies is the focus upon studying environmental factors separately rather than simultaneously. The current study aimed to investigate the cumulative influence of perceived favorable environmental factors on older adults' walking for transportation. Additionally, the moderating effect of perceived distance to destinations on this relationship was studied. The sample was comprised of 50,685 non-institutionalized older adults residing in Flanders (Belgium). Cross-sectional data on demographics, environmental perceptions and frequency of walking for transportation were collected by self-administered questionnaires in the period 2004-2010. Perceived distance to destinations was categorized into short, medium, and large distance to destinations. An environmental index (=a sum of favorable environmental factors, ranging from 0 to 7) was constructed to investigate the cumulative influence of favorable environmental factors. Multilevel logistic regression analyses were applied to predict probabilities of daily walking for transportation. For short distance to destinations, probability of daily walking for transportation was significantly higher when seven compared to three, four or five favorable environmental factors were present. For medium distance to destinations, probabilities significantly increased for an increase from zero to four favorable environmental factors. For large distance to destinations, no relationship between the environmental index and walking for transportation was observed. Our findings suggest that the presence of multiple favorable environmental factors can motivate older adults to walk medium distances to facilities. Future research should focus upon the relationship between older adults' physical activity and multiple environmental factors simultaneously instead of separately.
Cumulative Probability and Time to Reintubation in U.S. ICUs.
Miltiades, Andrea N; Gershengorn, Hayley B; Hua, May; Kramer, Andrew A; Li, Guohua; Wunsch, Hannah
2017-05-01
Reintubation after liberation from mechanical ventilation is viewed as an adverse event in ICUs. We sought to describe the frequency of reintubations across U.S. ICUs and to propose a standard, appropriate time cutoff for reporting of reintubation events. We conducted a cohort study using data from the Project IMPACT database of 185 diverse ICUs in the United States. We included patients who received mechanical ventilation and excluded patients who received a tracheostomy, had a do-not-resuscitate order placed, or died prior to first extubation. We assessed the percentage of extubated patients who were reintubated; the cumulative probability of reintubation, with death and do-not-resuscitate orders after extubation modeled as competing risks; and time to reintubation. Among 98,367 patients who received mechanical ventilation without death or tracheostomy prior to extubation, 9,907 (10.1%) were reintubated, with a cumulative probability of 10.0%. Median time to reintubation was 15 hours (interquartile range, 2-45 hr). Of patients who required reintubation in the ICU, 90% did so within the first 96 hours after initial extubation; this was consistent across various patient subtypes (89.3% for elective surgical patients up to 94.8% for trauma patients) and ICU subtypes (88.6% for cardiothoracic ICUs to 93.5% for medical ICUs). The reintubation rate for ICU patients liberated from mechanical ventilation in U.S. ICUs is approximately 10%. We propose a time cutoff of 96 hours for reintubation definitions and benchmarking efforts, as it captures 90% of ICU reintubation events. Reintubation rates can be reported as simple percentages, without regard for deaths or changes in goals of care that might occur.
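A sketch of the competing-risks calculation behind the "cumulative probability of reintubation" reported above, using a nonparametric (Aalen-Johansen style) cumulative incidence estimator with death/DNR as the competing event. The event times below are synthetic, not Project IMPACT data.

```python
# Sketch: cumulative incidence of reintubation with death/DNR as a competing risk.
# events: 0 = censored, 1 = reintubation, 2 = competing event (death or DNR).
import numpy as np

def cumulative_incidence(times, events, event_of_interest=1):
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv = 1.0          # overall event-free survival just before t
    cif = 0.0
    out = []
    for t in np.unique(times):
        at_risk = np.sum(times >= t)
        d_int = np.sum((times == t) & (events == event_of_interest))
        d_all = np.sum((times == t) & (events != 0))
        cif += surv * d_int / at_risk
        surv *= 1.0 - d_all / at_risk
        out.append((t, cif))
    return out

hours  = [2, 15, 20, 45, 50, 96, 120, 150]   # synthetic follow-up times [h]
status = [1,  1,  2,  1,  0,  1,   0,   2]
for t, p in cumulative_incidence(hours, status):
    print(f"t = {t:5.0f} h, cumulative probability of reintubation = {p:.3f}")
```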
Chen, Tuo; Tang, Xiaobin; Chen, Feida; Ni, Minxuan; Huang, Hai; Zhang, Yun; Chen, Da
2017-06-26
Radiation shielding of high-energy electrons is critical for successful space missions. However, conventional passive shielding systems exhibit several limitations, such as heavy configuration, poor shielding ability, and strong secondary bremsstrahlung radiation. In this work, an aluminum/vacuum multilayer structure was proposed based on the electron return effect induced by a magnetic field. The shielding properties of several configurations were evaluated using the Monte Carlo method. Results showed that multilayer systems presented improved shielding ability against electrons and lower secondary X-ray transmission than conventional systems. Moreover, the influences of magnetic flux density and number of layers on the shielding property of multilayer systems were investigated using a female Chinese hybrid reference phantom based on cumulative dose. In the case of two aluminum layers, the cumulative dose in the phantom gradually decreased with increasing magnetic flux density. The maximum decline rate was found within 0.4-1 Tesla. As the number of layers increased, the cumulative dose decreased and the shielding ability improved. This research provides effective shielding measures for future space radiation protection in high-energy electron environments.
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
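A sketch of the L1-median referred to above, computed with Weiszfeld's fixed-point iteration; this is only the robust location step, not the authors' full nonparametric PDF estimator.

```python
# Sketch: the L1-median (geometric median) via Weiszfeld's iteration, a robust
# alternative to the sample mean in the presence of outliers.
import numpy as np

def l1_median(X, tol=1e-8, max_iter=500):
    """X: (n_samples, n_features) array. Returns the point minimizing the sum of Euclidean distances."""
    m = X.mean(axis=0)                      # start from the ordinary mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)   # guard against division by zero
        w = 1.0 / d
        m_new = (X * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X[:10] += 50.0                              # a few gross outliers
print("mean      :", X.mean(axis=0))        # pulled toward the outliers
print("L1-median :", l1_median(X))          # stays near the bulk of the data
```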
Urata, Satoko; Kitagawa, Yasuhide; Matsuyama, Satoko; Naito, Renato; Yasuda, Kenji; Mizokami, Atsushi; Namiki, Mikio
2017-04-01
To optimize the rescreening schedule for men with low baseline prostate-specific antigen (PSA) levels, we evaluated men with baseline PSA levels of ≤1.0 ng/mL in PSA-based population screening. We enrolled 8086 men aged 55-69 years with baseline PSA levels of ≤1.0 ng/mL, who were screened annually. The relationships of baseline PSA and age with the cumulative risks and clinicopathological features of screening-detected cancer were investigated. Among the 8086 participants, 28 (0.35 %) and 18 (0.22 %) were diagnosed with prostate cancer and cancer with a Gleason score (GS) of ≥7 during the observation period, respectively. The cumulative probabilities of prostate cancer at 12 years were 0.42, 1.0, 3.4, and 4.3 % in men with baseline PSA levels of 0.0-0.4, 0.5-0.6, 0.7-0.8, and 0.9-1.0 ng/mL, respectively. Those with GS of ≥7 had cumulative probabilities of 0.42, 0.73, 2.8, and 1.9 %, respectively. The cumulative probabilities of prostate cancer were significantly lower when baseline PSA levels were 0.0-0.6 ng/mL compared with 0.7-1.0 ng/mL. Prostate cancer with a GS of ≥7 was not detected during the first 10 years of screening when baseline PSA levels were 0.0-0.6 ng/mL and was not detected during the first 2 years when baseline PSA levels were 0.7-1.0 ng/mL. Our study demonstrated that men with baseline PSA levels of 0.0-0.6 ng/mL might benefit from longer screening intervals than those recommended in the guidelines of the Japanese Urological Association. Further investigation is needed to confirm the optimal screening interval for men with low baseline PSA levels.
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
NASA Astrophysics Data System (ADS)
Wallmach, T.; Hatton, C. J.; De Waal, S. A.; Gibson, R. L.
1995-11-01
Two calc-silicate xenoliths in the Upper Zone of the Bushveld complex contain mineral assemblages which permit delineation of the metamorphic path followed after incorporation of the xenoliths into the magma. Peak metamorphism in these xenoliths occurred at T = 1100-1200°C and P < 1.5 kbar. Retrograde metamorphism, probably coinciding with the late magmatic stage, is characterized by the breakdown of akermanite to monticellite and wollastonite at 700°C and the growth of vesuvianite from melilite. The latter implies that water-rich fluids (X_CO2 < 0.2) were present and probably circulating through the cooling magmatic pile. In contrast, calc-silicate xenoliths within the lower zones of the Bushveld complex, namely in the Marginal and Critical Zones, also contain melilite, monticellite and additional periclase with only rare development of vesuvianite. This suggests that the Upper Zone cumulate pile was much 'wetter' in the late-magmatic stage than the earlier-formed Critical and Marginal Zone cumulate piles.
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
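A minimal sketch of what the "Normal Distribution Estimates" spreadsheet function described above computes: the value corresponding to a given cumulative probability for a normal distribution with a given sample mean and standard deviation. The numbers are illustrative.

```python
# Sketch: value corresponding to a cumulative probability, given a sample mean
# and standard deviation of a normal distribution (illustrative numbers only).
from scipy.stats import norm

mean, sd = 10.0, 2.0
for p in (0.05, 0.50, 0.95):
    print(p, norm.ppf(p, loc=mean, scale=sd))
```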
A Time-Dependent Quantum Dynamics Study of the H2 + CH3 yields H + CH4 Reaction
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
We present a time-dependent wave-packet propagation calculation for the H2 + CH3 → H + CH4 reaction in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different initial rotational-vibrational states are presented in this study. The cumulative reaction probability (CRP) is obtained by summing over the initial-state-selected reaction probabilities. The energy-shift approximation to account for the contribution of degrees of freedom missing in the 6D calculation is employed to obtain an approximate full-dimensional CRP. The thermal rate constant is compared with different experimental results.
White-nose syndrome pathology grading in Nearctic and Palearctic bats
Pikula, Jiri; Amelon, Sybill K.; Bandouchova, Hana; Bartonička, Tomáš; Berkova, Hana; Brichta, Jiri; Hooper, Sarah; Kokurewicz, Tomasz; Kolarik, Miroslav; Köllner, Bernd; Kovacova, Veronika; Linhart, Petr; Piacek, Vladimir; Turner, Gregory G.; Zukal, Jan; Martínková, Natália
2017-01-01
While white-nose syndrome (WNS) has decimated hibernating bat populations in the Nearctic, species from the Palearctic appear to cope better with the fungal skin infection causing WNS. This has encouraged multiple hypotheses on the mechanisms leading to differential survival of species exposed to the same pathogen. To facilitate intercontinental comparisons, we proposed a novel pathogenesis-based grading scheme consistent with WNS diagnosis histopathology criteria. UV light-guided collection was used to obtain single biopsies from Nearctic and Palearctic bat wing membranes non-lethally. The proposed scheme scores eleven grades associated with WNS on histopathology. Given weights reflective of grade severity, the sum of findings from an individual results in weighted cumulative WNS pathology score. The probability of finding fungal skin colonisation and single, multiple or confluent cupping erosions increased with increase in Pseudogymnoascus destructans load. Increasing fungal load mimicked progression of skin infection from epidermal surface colonisation to deep dermal invasion. Similarly, the number of UV-fluorescent lesions increased with increasing weighted cumulative WNS pathology score, demonstrating congruence between WNS-associated tissue damage and extent of UV fluorescence. In a case report, we demonstrated that UV-fluorescence disappears within two weeks of euthermy. Change in fluorescence was coupled with a reduction in weighted cumulative WNS pathology score, whereby both methods lost diagnostic utility. While weighted cumulative WNS pathology scores were greater in the Nearctic than Palearctic, values for Nearctic bats were within the range of those for Palearctic species. Accumulation of wing damage probably influences mortality in affected bats, as demonstrated by a fatal case of Myotis daubentonii with natural WNS infection and healing in Myotis myotis. The proposed semi-quantitative pathology score provided good agreement between experienced raters, showing it to be a powerful and widely applicable tool for defining WNS severity. PMID:28767673
Fire frequency, area burned, and severity: A quantitative approach to defining a normal fire year
Lutz, J.A.; Key, C.H.; Kolden, C.A.; Kane, J.T.; van Wagtendonk, J.W.
2011-01-01
Fire frequency, area burned, and fire severity are important attributes of a fire regime, but few studies have quantified the interrelationships among them in evaluating a fire year. Although area burned is often used to summarize a fire season, burned area may not be well correlated with either the number or ecological effect of fires. Using the Landsat data archive, we examined all 148 wildland fires (prescribed fires and wildfires) >40 ha from 1984 through 2009 for the portion of the Sierra Nevada centered on Yosemite National Park, California, USA. We calculated mean fire frequency and mean annual area burned from a combination of field- and satellite-derived data. We used the continuous probability distribution of the differenced Normalized Burn Ratio (dNBR) values to describe fire severity. For fires >40 ha, fire frequency, annual area burned, and cumulative severity were consistent in only 13 of 26 years (50%), but all pair-wise comparisons among these fire regime attributes were significant. Borrowing from long-established practice in climate science, we defined "fire normals" to be the 26 year means of fire frequency, annual area burned, and the area under the cumulative probability distribution of dNBR. Fire severity normals were significantly lower when they were aggregated by year compared to aggregation by area. Cumulative severity distributions for each year were best modeled with Weibull functions (all 26 years, r² ≥ 0.99; P < 0.001). Explicit modeling of the cumulative severity distributions may allow more comprehensive modeling of climate-severity and area-severity relationships. Together, the three metrics of number of fires, size of fires, and severity of fires provide land managers with a more comprehensive summary of a given fire year than any single metric.
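A sketch of the kind of fit described above: a two-parameter Weibull distribution fitted to (positive) dNBR severity values and compared against the empirical cumulative distribution. The data are synthetic, not the Yosemite-area Landsat record.

```python
# Sketch: fit a Weibull CDF to a cumulative severity (dNBR) distribution and
# compare it with the empirical CDF. The severity values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
dnbr = stats.weibull_min.rvs(c=1.8, scale=400.0, size=2000, random_state=rng)

# Fit shape (c) and scale with the location fixed at zero
c_hat, loc_hat, scale_hat = stats.weibull_min.fit(dnbr, floc=0)

# Compare the fitted CDF with the empirical cumulative distribution
x = np.sort(dnbr)
ecdf = np.arange(1, len(x) + 1) / len(x)
fitted = stats.weibull_min.cdf(x, c_hat, loc=loc_hat, scale=scale_hat)
print("shape, scale:", c_hat, scale_hat)
print("max |ECDF - fitted CDF|:", np.abs(ecdf - fitted).max())
```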
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and of how the wave function presented by it leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.
Storkel, Holly L.; Lee, Jaehoon; Cox, Casey
2016-01-01
Purpose: Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method: Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results: The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions: As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276
Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey
2016-11-01
Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.
Turbulent flame spreading mechanisms after spark ignition
NASA Astrophysics Data System (ADS)
Subramanian, V.; Domingo, Pascale; Vervisch, Luc
2009-12-01
Numerical simulation of forced ignition is performed in the framework of Large-Eddy Simulation (LES) combined with a tabulated detailed chemistry approach. The objective is to reproduce the flame properties observed in a recent experimental work reporting probability of ignition in a laboratory-scale burner operating with a methane/air non-premixed mixture [1]. The smallest scales of chemical phenomena, which are unresolved by the LES grid, are approximated with a flamelet model combined with presumed probability density functions, to account for the unresolved part of turbulent fluctuations of species and temperature. Mono-dimensional flamelets are simulated using GRI-3.0 [2] and tabulated under a set of parameters describing the local mixing and progress of reaction. A non-reacting case was simulated first to study the unsteady velocity and mixture fields. The time-averaged velocity and mixture fraction, and their respective turbulent fluctuations, are compared against the experimental measurements, in order to estimate the prediction capabilities of LES. The time history of axial and radial components of velocity and mixture fraction is cumulated and analysed for different burner regimes. Based on this information, spark ignition is mimicked on selected ignition spots and the dynamics of kernel development are analysed and compared against the experimental observations. The possible link between the success or failure of the ignition and the flow conditions (in terms of velocity and composition) at the sparking time is then explored.
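As an illustration of the presumed-PDF closure mentioned above, the sketch below integrates a toy flamelet temperature profile over an assumed beta distribution of mixture fraction parameterized by the resolved mean and variance. The beta form, the `flamelet_temperature` profile and all parameter values are assumptions made here for illustration; the paper's actual tabulation is built from GRI-3.0 flamelet solutions.

```python
# Sketch of a presumed-PDF closure: integrate a tabulated flamelet quantity
# over an assumed beta distribution of mixture fraction, parameterized by the
# resolved mean and variance. All profiles and parameters are illustrative.
import numpy as np
from scipy import stats

def beta_pdf_params(z_mean, z_var):
    """Map mixture-fraction mean/variance to beta shape parameters."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    return z_mean * gamma, (1.0 - z_mean) * gamma

def flamelet_temperature(z):
    """Toy flamelet profile: temperature peaks near a stoichiometric mixture."""
    return 300.0 + 1700.0 * np.exp(-((z - 0.055) / 0.05) ** 2)

def filtered_temperature(z_mean, z_var, n=400):
    """Presumed-PDF (beta) average of the flamelet temperature."""
    a, b = beta_pdf_params(z_mean, z_var)
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    pdf = stats.beta.pdf(z, a, b)
    return np.sum(flamelet_temperature(z) * pdf) / np.sum(pdf)

print(filtered_temperature(z_mean=0.06, z_var=0.002))
```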
DOT National Transportation Integrated Search
1982-06-01
The purpose of this study was to apply mathematical procedures to the Federal Aviation Administration (FAA) pilot medical data to examine the feasibility of devising a linear numbering system such that (1) the cumulative probability distribution func...
NASA Astrophysics Data System (ADS)
Li, Zhanling; Li, Zhanjie; Li, Chengcheng
2014-05-01
Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Most such analyses concern high flow extremes in basins in the humid and semi-humid south and east of China. In contrast, for the inland river basins, which occupy about 35% of the country's area, such studies are scarce, partly due to limited data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peak over threshold (POT) method and the Generalized Pareto Distribution (GPD), in which the selection of the threshold and the inherent assumptions for POT series are elaborated in detail. For comparison, other widely used probability distributions, including the generalized extreme value (GEV), Lognormal, Log-logistic and Gamma, are employed as well. Maximum likelihood is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, the stability features of model parameters, the return level plot and the inherent independence assumption of POT series, an optimum threshold of 340 m³/s is finally determined for high flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test and several graphical diagnostics such as quantile and cumulative distribution function plots, the GPD provides the best fit to high flow extremes in the study area. The estimated high flows for long return periods demonstrate that, as the return period increases, the return level estimates become more uncertain. The frequency of high flow extremes exhibits a very slight but not significant decreasing trend from 1978 to 2008, while the intensity of such flow extremes is comparatively increasing, especially for the higher return levels.
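A minimal sketch of the POT/GPD workflow described above is given below, assuming `daily_flow` holds a daily discharge series; the placeholder data, the assumed 31-year record length and the standard POT return-level formula are illustrative rather than the exact computations of the study.

```python
# Sketch: peak-over-threshold analysis with a Generalized Pareto Distribution.
# `daily_flow` is placeholder data; the 340 m3/s threshold follows the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
daily_flow = rng.gamma(shape=2.0, scale=60.0, size=31 * 365)  # hypothetical daily flows

threshold = 340.0
exceedances = daily_flow[daily_flow > threshold] - threshold

# Maximum-likelihood fit of the GPD to the exceedances (location fixed at 0).
xi, loc, sigma = stats.genpareto.fit(exceedances, floc=0.0)

def return_level(T, rate, xi, sigma, threshold):
    """Return level for a T-year return period, with `rate` exceedances per year."""
    if abs(xi) < 1e-6:
        return threshold + sigma * np.log(rate * T)
    return threshold + sigma / xi * ((rate * T) ** xi - 1.0)

rate = exceedances.size / 31.0  # assumed 31-year record length
print(xi, sigma, return_level(50, rate, xi, sigma, threshold))
```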
Seror, Valerie
2008-05-01
Choices regarding prenatal diagnosis of Down syndrome - the most frequent chromosomal defect - are particularly relevant to decision analysis, since women's decisions are based on the assessment of their risk of carrying a child with Down syndrome, and involve tradeoffs (giving birth to an affected child vs procedure-related miscarriage). The aim of this study, based on face-to-face interviews with 78 women aged 25-35 with prior experience of pregnancy, was to compare the women's expressed choices towards prenatal diagnosis with those derived from theoretical models of choice (expected utility theory, rank-dependent theory, and cumulative prospect theory). The main finding obtained in this study was that the cumulative prospect model fitted the observed choices best: both subjective transformation of probabilities and loss aversion, which are basic features of the cumulative prospect model, have to be taken into account to make the observed choices consistent with the theoretical ones.
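For readers unfamiliar with the two basic features of the cumulative prospect model named above, the sketch below implements an inverse-S probability weighting function and a loss-averse value function with commonly cited illustrative parameter values; these parameters are not estimates from the interviewed sample.

```python
# Sketch of cumulative prospect theory ingredients: subjective probability
# weighting and a loss-averse value function. Parameters are illustrative.
import numpy as np

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Concave for gains, convex and steeper for losses (loss aversion)."""
    x = np.asarray(x, dtype=float)
    gains = np.clip(x, 0.0, None) ** alpha
    losses = -lam * np.clip(-x, 0.0, None) ** beta
    return np.where(x >= 0.0, gains, losses)

def weight(p, gamma=0.61):
    """Inverse-S weighting: small probabilities are overweighted."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def cpt_value(gain, loss, p_gain):
    """CPT evaluation of a prospect with one gain and one loss outcome."""
    return weight(p_gain) * value(gain) + weight(1.0 - p_gain) * value(loss)

print(weight(0.001), weight(0.99))             # subjective transformation of probabilities
print(cpt_value(gain=100.0, loss=-100.0, p_gain=0.5))  # negative: losses loom larger
```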
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
Probability and surprisal in auditory comprehension of morphologically complex words.
Balling, Laura Winther; Baayen, R Harald
2012-10-01
Two auditory lexical decision experiments document for morphologically complex words two points at which the probability of a target word given the evidence shifts dramatically. The first point is reached when morphologically unrelated competitors are no longer compatible with the evidence. Adapting terminology from Marslen-Wilson (1984), we refer to this as the word's initial uniqueness point (UP1). The second point is the complex uniqueness point (CUP) introduced by Balling and Baayen (2008), at which morphologically related competitors become incompatible with the input. Later initial as well as complex uniqueness points predict longer response latencies. We argue that the effects of these uniqueness points arise due to the large surprisal (Levy, 2008) carried by the phonemes at these uniqueness points, and provide independent evidence that how cumulative surprisal builds up in the course of the word co-determines response latencies. The presence of effects of surprisal, both at the initial uniqueness point of complex words, and cumulatively throughout the word, challenges the Shortlist B model of Norris and McQueen (2008), and suggests that a Bayesian approach to auditory comprehension requires complementation from information theory in order to do justice to the cognitive cost of updating probability distributions over lexical candidates. Copyright © 2012 Elsevier B.V. All rights reserved.
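Surprisal and its cumulative build-up over a word are simple to compute once conditional phoneme probabilities are available; the sketch below uses made-up probabilities purely to illustrate the quantities discussed above, not cohort statistics from the experiments.

```python
# Sketch: surprisal of each phoneme given the preceding ones, and its running
# (cumulative) sum over the word. Probabilities are invented for illustration.
import math

# Hypothetical conditional probabilities P(phoneme_i | phonemes_1..i-1)
cond_probs = [0.45, 0.30, 0.08, 0.60, 0.90]

surprisal = [-math.log2(p) for p in cond_probs]                 # bits per phoneme
cumulative = [sum(surprisal[: i + 1]) for i in range(len(surprisal))]

for i, (s, c) in enumerate(zip(surprisal, cumulative), start=1):
    print(f"phoneme {i}: surprisal = {s:.2f} bits, cumulative = {c:.2f} bits")
```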
Representation of analysis results involving aleatory and epistemic uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis
2008-08-01
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
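The "family of CDFs" idea can be illustrated with a nested sampling loop: an outer loop draws epistemic quantities that are fixed but poorly known, and each draw yields one CDF over the aleatory variability. The distributions and dimensions in the sketch below are illustrative placeholders, not the structures discussed in the report.

```python
# Sketch: a family of CDFs from nested epistemic/aleatory sampling.
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 20.0, 200)

n_epistemic, n_aleatory = 50, 2000
family_of_cdfs = np.empty((n_epistemic, grid.size))

for i in range(n_epistemic):
    mu = rng.uniform(5.0, 10.0)       # epistemic: fixed but poorly known value
    sigma = rng.uniform(1.0, 3.0)     # epistemic: fixed but poorly known value
    samples = rng.normal(mu, sigma, n_aleatory)   # aleatory variability
    family_of_cdfs[i] = np.searchsorted(np.sort(samples), grid, side="right") / n_aleatory

# Pointwise envelope of the family, one common graphical summary.
lower, upper = family_of_cdfs.min(axis=0), family_of_cdfs.max(axis=0)
print(lower[100], upper[100])
```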
Deep Learning Role in Early Diagnosis of Prostate Cancer
Reda, Islam; Khalil, Ashraf; Elmogy, Mohammed; Abou El-Fetouh, Ahmed; Shalaby, Ahmed; Abou El-Ghar, Mohamed; Elmaghraby, Adel; Ghazal, Mohammed; El-Baz, Ayman
2018-01-01
The objective of this work is to develop a computer-aided diagnostic system for early diagnosis of prostate cancer. The presented system integrates both clinical biomarkers (prostate-specific antigen) and extracted features from diffusion-weighted magnetic resonance imaging collected at multiple b values. The presented system performs 3 major processing steps. First, prostate delineation using a hybrid approach that combines a level-set model with nonnegative matrix factorization. Second, estimation and normalization of diffusion parameters, which are the apparent diffusion coefficients of the delineated prostate volumes at different b values followed by refinement of those apparent diffusion coefficients using a generalized Gaussian Markov random field model. Then, construction of the cumulative distribution functions of the processed apparent diffusion coefficients at multiple b values. In parallel, a K-nearest neighbor classifier is employed to transform the prostate-specific antigen results into diagnostic probabilities. Finally, those prostate-specific antigen–based probabilities are integrated with the initial diagnostic probabilities obtained using stacked nonnegativity constraint sparse autoencoders that employ apparent diffusion coefficient–cumulative distribution functions for better diagnostic accuracy. Experiments conducted on 18 diffusion-weighted magnetic resonance imaging data sets achieved 94.4% diagnosis accuracy (sensitivity = 88.9% and specificity = 100%), which indicate the promising results of the presented computer-aided diagnostic system. PMID:29804518
Combined statistical analysis of landslide release and propagation
NASA Astrophysics Data System (ADS)
Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay
2016-04-01
Statistical methods - often coupled with stochastic concepts - are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative probability function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone - the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. We quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by the typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. Therefore we conclude that the method is valid in general, even though the concept of the zonal release probability bears some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when excluding the largest, deep-seated, landslides from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.
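Step (6) of the procedure is a pixel-wise product of two rasters. The sketch below shows that final combination with placeholder arrays standing in for the grids produced by steps (2)-(5); it is not the actual GRASS GIS implementation.

```python
# Sketch of step (6): combine the zonal release probability with the impact
# probability pixel by pixel. Both input rasters are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(3)
zonal_release_prob = rng.uniform(0.0, 0.3, size=(100, 100))  # placeholder raster
impact_prob = rng.uniform(0.0, 1.0, size=(100, 100))         # placeholder raster

combined_impact_prob = zonal_release_prob * impact_prob

# The study-area mean can then be compared with the observed landslide pixel density.
print(combined_impact_prob.mean())
```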
Quantitative methods for analysing cumulative effects on fish migration success: a review.
Johnson, J E; Patterson, D A; Martins, E G; Cooke, S J; Hinch, S G
2012-07-01
It is often recognized, but seldom addressed, that a quantitative assessment of the cumulative effects, both additive and non-additive, of multiple stressors on fish survival would provide a more realistic representation of the factors that influence fish migration. This review presents a compilation of analytical methods applied to a well-studied fish migration, a more general review of quantitative multivariable methods, and a synthesis on how to apply new analytical techniques in fish migration studies. A compilation of adult migration papers from Fraser River sockeye salmon Oncorhynchus nerka revealed a limited number of multivariable methods being applied and the sub-optimal reliance on univariable methods for multivariable problems. The literature review of fisheries science, general biology and medicine identified a large number of alternative methods for dealing with cumulative effects, with a limited number of techniques being used in fish migration studies. An evaluation of the different methods revealed that certain classes of multivariable analyses will probably prove useful in future assessments of cumulative effects on fish migration. This overview and evaluation of quantitative methods gathered from the disparate fields should serve as a primer for anyone seeking to quantify cumulative effects on fish migration survival. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.
Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyż, W.; Zalewski, K.
2005-10-01
It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
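The transformation issue can be illustrated numerically: a density that is uniform in one parameterization is generally non-uniform after a change of variables unless the Jacobian is accounted for. The sketch below uses the simple mapping y = x², which is not from the paper, purely to show the effect.

```python
# Sketch: a uniform density becomes non-uniform under reparameterization,
# with the Jacobian giving the analytic transformed density.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(1.0, 2.0, size=200_000)   # uniform density in the x parameterization
y = x ** 2                                # reparameterization y = x^2

# Analytic density of y: p_Y(y) = p_X(x(y)) * |dx/dy| = 1 / (2*sqrt(y)) on [1, 4]
hist, edges = np.histogram(y, bins=200, range=(1.0, 4.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

check_points = np.linspace(1.1, 3.9, 5)
print(np.interp(check_points, centers, hist))   # empirical density, clearly non-uniform
print(1.0 / (2.0 * np.sqrt(check_points)))      # matches 1/(2*sqrt(y)), not a constant
```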
On Connected Diagrams and Cumulants of Erdős-Rényi Matrix Models
NASA Astrophysics Data System (ADS)
Khorunzhiy, O.
2008-08-01
Regarding the adjacency matrices of n-vertex graphs and related graph Laplacian we introduce two families of discrete matrix models constructed both with the help of the Erdős-Rényi ensemble of random graphs. Corresponding matrix sums represent the characteristic functions of the average number of walks and closed walks over the random graph. These sums can be considered as discrete analogues of the matrix integrals of random matrix theory. We study the diagram structure of the cumulant expansions of logarithms of these matrix sums and analyze the limiting expressions as n → ∞ in the cases of constant and vanishing edge probabilities.
Microdose Induced Drain Leakage Effects in Power Trench MOSFETs: Experiment and Modeling
NASA Astrophysics Data System (ADS)
Zebrev, Gennady I.; Vatuev, Alexander S.; Useinov, Rustem G.; Emeliyanov, Vladimir V.; Anashin, Vasily S.; Gorbunov, Maxim S.; Turin, Valentin O.; Yesenkov, Kirill A.
2014-08-01
We study experimentally and theoretically the microdose-induced drain-source leakage current in trench power MOSFETs under irradiation with high-LET heavy ions. We found experimentally that the cumulative increase of leakage current occurs through stochastic spikes, each corresponding to a strike of a single heavy ion into the MOSFET gate oxide. We simulate this effect with a proposed analytic model that describes (including via Monte Carlo methods) both the deterministic (cumulative dose) and stochastic (single-event) aspects of the problem. Based on this model, an assessment of survival probability in a space heavy-ion environment with high LETs is proposed.
NASA Astrophysics Data System (ADS)
Gori-Giorgi, Paola; Ziesche, Paul
2002-12-01
The momentum distribution of the unpolarized uniform electron gas in its Fermi-liquid regime, n(k, r_s), with the momenta k measured in units of the Fermi wave number k_F and with the density parameter r_s, is constructed with the help of the convex Kulik function G(x). It is assumed that n(0, r_s), n(1±, r_s), the on-top pair density g(0, r_s), and the kinetic energy t(r_s) are known (respectively, from accurate calculations for r_s = 1,…,5, from the solution of the Overhauser model, and from quantum Monte Carlo calculations via the virial theorem). Information from the high- and the low-density limit, corresponding to the random-phase approximation and to the Wigner crystal limit, is used. The result is an accurate parametrization of n(k, r_s), which fulfills most of the known exact constraints. It is in agreement with the effective-potential calculations of Takada and Yasuhara [Phys. Rev. B 44, 7879 (1991)], is compatible with quantum Monte Carlo data, and is valid in the density range r_s ≲ 12. The corresponding cumulant expansions of the pair density and of the static structure factor are discussed, and some exact limits are derived.
Gravity modeling finds a large magma body in the deep crust below the Gulf of Naples, Italy.
Fedi, M; Cella, F; D'Antonio, M; Florio, G; Paoletti, V; Morra, V
2018-05-29
We analyze a wide gravity low in the Campania Active Volcanic Area and interpret it by a large and deep source distribution of partially molten, low-density material from about 8 to 30 km depth. Given the complex spatial-temporal distribution of explosive volcanism in the area, we model the gravity data consistently with several volcanological and petrological constraints. We propose two possible models: one accounts for the coexistence, within the lower/intermediate crust, of large amounts of melts and cumulates besides country rocks. It implies a layered distribution of densities and, thus, a variation with depth of percentages of silicate liquids, cumulates and country rocks. The other reflects a fractal density distribution, based on the scaling exponent estimated from the gravity data. According to this model, the gravity low would be related to a distribution of melt pockets within solid rocks. Both density distributions account for the available volcanological and seismic constraints and can be considered as end-members of possible models compatible with gravity data. Such results agree with the general views about the roots of large areas of ignimbritic volcanism worldwide. Given the prolonged history of magmatism in the Campania area since Pliocene times, we interpret the detected low-density body as a developing batholith.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerreiro, F; Janssens, G; Seravalli, E
Purpose: To investigate the dosimetric impact of daily changes in patient's diameter, due to weight gain/loss and air in the bowel, based on CBCT information during radiotherapy treatment of pediatric abdominal tumors. Methods: 10 pediatric patients with neuroblastoma (n=6) and Wilms' (n=4) tumors were included. Available CBCTs were affinely registered to the planning CT for daily set-up variation corrections. A density override approach assigning air-density to the random air pockets and water-density to the remaining anatomy was used to determine the CBCT and CT dose. Clinical VMAT plans, with a PTV prescribed dose ranging between (14.4-36) Gy, were re-optimized on the density override CT and re-calculated on each CBCT. Dose-volume statistics of the PTV and kidneys, delineated on each CBCT, were used to compare the daily and cumulative CBCT dose with the reference CT dose. Results: The average patient diameter variation was (0.5 ± 0.7) cm (maximum daily difference of 2.3 cm). The average PTV mean dose difference (MDD) between the CT and the cumulative CBCT plans was (0.1 ± 1.1) % (maximum daily MDD of 2%). A reduction in target coverage up to 3% and 7% was observed for the cumulative and daily CBCT plans, respectively. The average kidneys' cumulative MDD was (−2.7 ± 3.6) % (maximum daily MDD of −12%), corresponding to an overdosage. Conclusion: Due to patient's diameter changes, a target underdosage was assessed. Given the high local tumor control of neuroblastoma and Wilms' diseases, the need for re-planning might be discarded. However, the assessed kidneys overdosage could represent a problem when the normal tissue tolerance is reached. The necessity of re-planning should then be considered to reduce the risk of long-term renal complications. Due to the poor soft-tissue contrast on CBCT, MRI-guidance is required to obtain a better assessment of the accumulated dose on the remaining OARs.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) The comment period based on § 325.2(d)(2); (15) A statement that any person may request, in writing... various evaluation factors on which decisions are based shall be included in every public notice. (1... whether to issue a permit will be based on an evaluation of the probable impact including cumulative...
Code of Federal Regulations, 2014 CFR
2014-07-01
...) The comment period based on § 325.2(d)(2); (15) A statement that any person may request, in writing... various evaluation factors on which decisions are based shall be included in every public notice. (1... whether to issue a permit will be based on an evaluation of the probable impact including cumulative...
Code of Federal Regulations, 2012 CFR
2012-07-01
...) The comment period based on § 325.2(d)(2); (15) A statement that any person may request, in writing... various evaluation factors on which decisions are based shall be included in every public notice. (1... whether to issue a permit will be based on an evaluation of the probable impact including cumulative...
Code of Federal Regulations, 2013 CFR
2013-07-01
...) The comment period based on § 325.2(d)(2); (15) A statement that any person may request, in writing... various evaluation factors on which decisions are based shall be included in every public notice. (1... whether to issue a permit will be based on an evaluation of the probable impact including cumulative...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) The comment period based on § 325.2(d)(2); (15) A statement that any person may request, in writing... various evaluation factors on which decisions are based shall be included in every public notice. (1... whether to issue a permit will be based on an evaluation of the probable impact including cumulative...
The Priority Heuristic: Making Choices Without Trade-Offs
Brandstätter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph
2010-01-01
Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, we generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic predicts (i) Allais' paradox, (ii) risk aversion for gains if probabilities are high, (iii) risk seeking for gains if probabilities are low (lottery tickets), (iv) risk aversion for losses if probabilities are low (buying insurance), (v) risk seeking for losses if probabilities are high, (vi) certainty effect, (vii) possibility effect, and (viii) intransitivities. We test how accurately the heuristic predicts people's choices, compared to previously proposed heuristics and three modifications of expected utility theory: security-potential/aspiration theory, transfer-of-attention-exchange model, and cumulative prospect theory. PMID:16637767
Outside and inside noise exposure in urban and suburban areas
Dwight E. Bishop; Myles A. Simpson
1977-01-01
In urban and suburban areas of the United States (away from major airports), the outdoor noise environment usually depends strongly on local vehicular traffic. By relating traffic flow to population density, a model of outdoor noise exposure has been developed for estimating the cumulative 24-hour noise exposure based upon the population density of the area. This noise...
de Wit, R.; van den Berg, H.; Burghouts, J.; Nortier, J.; Slee, P.; Rodenburg, C.; Keizer, J.; Fonteyn, M.; Verweij, J.; Wils, J.
1998-01-01
We have reported previously that the anti-emetic efficacy of single agent 5HT3 antagonists is not maintained when analysed with the measurement of cumulative probabilities. Presently, the most effective anti-emetic regimen is a combination of a 5HT3 antagonist plus dexamethasone. We, therefore, assessed the sustainment of efficacy of such a combination in 125 patients, scheduled to receive cisplatin ≥ 70 mg m⁻² either alone or in combination with other cytotoxic drugs. Anti-emetic therapy was initiated with 10 mg of dexamethasone and 3 mg of granisetron intravenously, before cisplatin. On days 1-6, patients received 8 mg of dexamethasone and 1 mg of granisetron twice daily by oral administration. Protection was assessed during all cycles and calculated based on cumulative probability analyses using the method of Kaplan-Meier and a model for transitional probabilities. Irrespective of the type of analysis used, the anti-emetic efficacy of granisetron/dexamethasone decreased over cycles. The initial complete acute emesis protection rate of 66% decreased to 30% according to the method of Kaplan-Meier and to 39% using the model for transitional probabilities. For delayed emesis, the initial complete protection rate of 52% decreased to 21% (Kaplan-Meier) and to 43% (transitional probabilities). In addition, we observed that protection failure in the delayed emesis period adversely influenced the acute emesis protection in the next cycle. We conclude that the anti-emetic efficacy of a 5HT3 antagonist plus dexamethasone is not maintained over multiple cycles of highly emetogenic chemotherapy, and that the acute emesis protection is adversely influenced by protection failure in the delayed emesis phase. PMID:9652766
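A minimal Kaplan-Meier calculation of the cumulative probability of sustained protection over successive cycles is sketched below; the cycle numbers and censoring flags are invented for illustration and are not the trial data.

```python
# Sketch: Kaplan-Meier product-limit estimate of the cumulative probability of
# remaining fully protected over chemotherapy cycles. Data are made up.
import numpy as np

# Cycle at which protection failed (event=1) or follow-up ended (event=0).
cycles = np.array([1, 2, 2, 3, 3, 3, 4, 5, 5, 6])
events = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 0])

survival = 1.0
for t in np.unique(cycles):
    at_risk = np.sum(cycles >= t)
    failed = np.sum((cycles == t) & (events == 1))
    survival *= 1.0 - failed / at_risk
    print(f"cycle {t}: P(still fully protected) = {survival:.2f}")
```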
Charvat, Hadrien; Sasazuki, Shizuka; Inoue, Manami; Iwasaki, Motoki; Sawada, Norie; Shimazu, Taichi; Yamaji, Taiki; Tsugane, Shoichiro
2016-01-15
Gastric cancer is a particularly important issue in Japan, where incidence rates are among the highest observed. In this work, we provide a risk prediction model allowing the estimation of the 10-year cumulative probability of gastric cancer occurrence. The study population consisted of 19,028 individuals from the Japanese Public Health Center cohort II who were followed up from 1993 to 2009. A parametric survival model was used to assess the impact on the probability of gastric cancer of clinical and lifestyle-related risk factors in combination with serum anti-Helicobacter pylori antibody titres and pepsinogen I and pepsinogen II levels. Based on the resulting model, cumulative probability estimates were calculated and a simple risk scoring system was developed. A total of 412 cases of gastric cancer occurred during 270,854 person-years of follow-up. The final model included (besides the biological markers) age, gender, smoking status, family history of gastric cancer and consumption of highly salted food. The developed prediction model showed good predictive performance in terms of discrimination (optimism-corrected c-index: 0.768) and calibration (Nam and d'Agostino's χ² test: 14.78; p value = 0.06). Estimates of the 10-year probability of gastric cancer occurrence ranged from 0.04% (0.02, 0.1) to 14.87% (8.96, 24.14) for men and from 0.03% (0.02, 0.07) to 4.91% (2.71, 8.81) for women. In conclusion, we developed a risk prediction model for gastric cancer that combines clinical and biological markers. It might prompt individuals to modify their lifestyle habits, attend regular check-up visits or participate in screening programmes. © 2015 UICC.
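How such a model turns covariates into a 10-year cumulative probability can be sketched as follows, using a proportional-hazards-style illustration evaluated at 10 years; the coefficients, covariates and baseline cumulative hazard below are invented and are not the published model.

```python
# Sketch: 10-year cumulative probability from a survival-model-style risk score.
# All numbers are illustrative placeholders, not fitted values from the cohort.
import numpy as np

def ten_year_probability(x, beta, baseline_cumhaz=1e-4):
    """F(10) = 1 - exp(-H0(10) * exp(beta'x)) for a proportional-hazards sketch."""
    return 1.0 - np.exp(-baseline_cumhaz * np.exp(float(np.dot(beta, x))))

# Covariates: [age in decades, male, smoker, family history, high-salt diet,
#              H. pylori antibody positive, pepsinogen-based atrophy marker]
beta = np.array([0.5, 0.6, 0.4, 0.5, 0.3, 1.0, 0.8])   # illustrative log-hazard ratios
x_high = np.array([6.5, 1, 1, 1, 1, 1, 1])
x_low = np.array([4.5, 0, 0, 0, 0, 0, 0])

print(ten_year_probability(x_high, beta))   # roughly a few percent to ~10%
print(ten_year_probability(x_low, beta))    # well below 1%
```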
Kowall, Bernd; Kuß, Oliver; Schmidt‐Pokrzywniak, Andrea; Weinreich, Gerhard; Dragano, Nico; Moebus, Susanne; Erbel, Raimund; Jöckel, Karl‐Heinz; Stang, Andreas
2016-01-01
Aim: The sleep disturbing effect of many drugs is derived from clinical trials with highly selected patient collectives. However, the generalizability of such findings to the general population is questionable. Our aim was to assess the association between intake of drugs labelled as sleep disturbing and self‐reported nocturnal sleep disturbances in a population‐based study. Methods: We used data of 4221 participants (50.0% male) aged 45 to 75 years from the baseline examination of the Heinz Nixdorf Recall Study in Germany. The interview provided information on difficulties falling asleep, difficulties maintaining sleep and early morning arousal. We used the summary of product characteristics (SPC) for each drug taken and assigned the probability of sleep disturbances. Thereafter, we calculated cumulative probabilities of sleep disturbances per subject to account for polypharmacy. We estimated prevalence ratios (PR) using log Poisson regression models with robust variance. Results: The adjusted PRs of any regular nocturnal sleep disorder per additional sleep disturbing drug were 1.01 (95% confidence interval (CI) 0.97, 1.06) and 1.03 (95% CI 1.00, 1.07) for men and women, respectively. Estimates for each regular nocturnal sleep disturbance were similarly close to 1. PRs for regular nocturnal sleep disturbances did not increase with rising cumulative probability for drug‐related sleep disturbances. Conclusions: SPC‐based probabilities of drug‐related sleep disturbances showed barely any association with self‐reported regular nocturnal sleep disturbances. We conclude that SPC‐based probability information may lack generalizability to the general population or may be of limited data quality. PMID:27279554
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals
NASA Astrophysics Data System (ADS)
Buchert, Thomas; France, Martin J.; Steiner, Frank
2017-05-01
Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v_0, the first Minkowski Functional (MF), and the two other MFs, v_1 and v_2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2) which are applied to an ensemble of 10⁵ CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second-order expansions of Matsubara to arbitrary order in the standard deviation σ₀ for P(δT) and v_0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground corrected masked Planck 2015 maps.
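As a small illustration of the first Minkowski functional and its Gaussian prediction, the sketch below measures v_0 (the area fraction above a threshold) on a white-noise map and compares it with 0.5·erfc(ν/√2); the simple difference shown stands in for the paper's discrepancy function Δ_0, whose exact normalization may differ, and the white-noise map is only a placeholder for a CMB realization.

```python
# Sketch: area fraction above a threshold (the first Minkowski functional v0)
# for a Gaussian random map, versus its analytic Gaussian prediction.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(5)
delta_T = rng.normal(0.0, 1.0, size=(512, 512))   # stand-in for a CMB map
sigma0 = delta_T.std()

nu = np.linspace(-3.0, 3.0, 13)                   # thresholds in units of sigma0
v0_map = np.array([(delta_T >= t * sigma0).mean() for t in nu])
v0_gauss = 0.5 * erfc(nu / np.sqrt(2.0))

delta_0 = v0_map - v0_gauss                       # vanishes for a Gaussian map
print(np.abs(delta_0).max())
```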
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum... An essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for...
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
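The lower and upper failure-probability bounds from a random set can be illustrated with intervals as focal elements; plain Monte Carlo is used below instead of subset simulation for brevity, and the limit state and input model are toy assumptions rather than the examples of the paper.

```python
# Sketch: lower/upper bounds on the failure probability from a random set whose
# focal elements are intervals. Failure is g(x) <= 0; g is monotone here, so
# evaluating the interval endpoints suffices.
import numpy as np

rng = np.random.default_rng(6)

def g(x):
    """Toy limit state: failure when g(x) <= 0."""
    return 3.0 - x

n = 100_000
centers = rng.normal(0.0, 1.0, n)          # probabilistic part of the input
half_widths = rng.uniform(0.1, 0.5, n)     # interval (imprecise) part of the input
lo, hi = centers - half_widths, centers + half_widths

g_min = np.minimum(g(lo), g(hi))
g_max = np.maximum(g(lo), g(hi))

upper_pf = np.mean(g_min <= 0.0)           # focal element intersects the failure domain
lower_pf = np.mean(g_max <= 0.0)           # focal element lies entirely in the failure domain
print(lower_pf, upper_pf)
```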
Evaluating detection probabilities for American marten in the Black Hills, South Dakota
Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.
2007-01-01
Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km²) and low (≤1 marten/10.2 km²) densities within eight 10.2-km² quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
Xiao, Zhu; Liu, Hongjing; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-11-04
In this paper, we investigate the coverage performance and energy efficiency of multi-tier heterogeneous cellular networks (HetNets) which are composed of macrocells and different types of small cells, i.e., picocells and femtocells. By virtue of stochastic geometry tools, we model the multi-tier HetNets based on a Poisson point process (PPP) and analyze the Signal to Interference Ratio (SIR) via studying the cumulative interference from pico-tier and femto-tier. We then derive the analytical expressions of coverage probabilities in order to evaluate coverage performance in different tiers and investigate how it varies with the small cells' deployment density. By taking the fairness and user experience into consideration, we propose a disjoint channel allocation scheme and derive the system channel throughput for various tiers. Further, we formulate the energy efficiency optimization problem for multi-tier HetNets in terms of throughput performance and resource allocation fairness. To solve this problem, we devise a linear programming based approach to obtain the available area of the feasible solutions. System-level simulations demonstrate that the small cells' deployment density has a significant effect on the coverage performance and energy efficiency. Simulation results also reveal that there exists an optimal small cell base station (SBS) density ratio between pico-tier and femto-tier which can be applied to maximize the energy efficiency and at the same time enhance the system performance. Our findings provide guidance for the design of multi-tier HetNets for improving the coverage performance as well as the energy efficiency.
Xiao, Zhu; Liu, Hongjing; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, we investigate the coverage performance and energy efficiency of multi-tier heterogeneous cellular networks (HetNets) which are composed of macrocells and different types of small cells, i.e., picocells and femtocells. By virtue of stochastic geometry tools, we model the multi-tier HetNets based on a Poisson point process (PPP) and analyze the Signal to Interference Ratio (SIR) via studying the cumulative interference from pico-tier and femto-tier. We then derive the analytical expressions of coverage probabilities in order to evaluate coverage performance in different tiers and investigate how it varies with the small cells’ deployment density. By taking the fairness and user experience into consideration, we propose a disjoint channel allocation scheme and derive the system channel throughput for various tiers. Further, we formulate the energy efficiency optimization problem for multi-tier HetNets in terms of throughput performance and resource allocation fairness. To solve this problem, we devise a linear programming based approach to obtain the available area of the feasible solutions. System-level simulations demonstrate that the small cells’ deployment density has a significant effect on the coverage performance and energy efficiency. Simulation results also reveal that there exists an optimal small cell base station (SBS) density ratio between pico-tier and femto-tier which can be applied to maximize the energy efficiency and at the same time enhance the system performance. Our findings provide guidance for the design of multi-tier HetNets for improving the coverage performance as well as the energy efficiency. PMID:27827917
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation are retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and associated probabilities that each method is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
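The reweighting step can be sketched as follows: samples are drawn once from a mixture importance density built from the candidate models, pushed through the response function once, and then reweighted per candidate model. The model set, multimodel weights and response function below are illustrative placeholders, not the plate-buckling application.

```python
# Sketch: propagate several plausible input models with one shared sample set,
# using a mixture importance density and per-model reweighting.
import numpy as np
from scipy import stats

candidates = [                       # plausible models with illustrative weights
    (stats.norm(10.0, 2.0), 0.5),
    (stats.lognorm(s=0.2, scale=10.0), 0.3),
    (stats.gamma(a=25.0, scale=0.4), 0.2),
]

def response(x):                     # stand-in for the expensive response function
    return 0.8 * x ** 1.5

rng = np.random.default_rng(7)
n = 20_000
labels = rng.choice(len(candidates), size=n, p=[w for _, w in candidates])
samples = np.concatenate([
    candidates[k][0].rvs(size=int((labels == k).sum()), random_state=rng)
    for k in range(len(candidates))
])
q = sum(w * dist.pdf(samples) for dist, w in candidates)   # mixture importance density
y = response(samples)

# Mean response under each candidate model, estimated from the same samples.
for dist, w in candidates:
    iw = dist.pdf(samples) / q
    print(w, np.average(y, weights=iw))
```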
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of a nonstationary random process possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
Fracture network evaluation program (FraNEP): A software for analyzing 2D fracture trace-line maps
NASA Astrophysics Data System (ADS)
Zeeb, Conny; Gomez-Rivas, Enrique; Bons, Paul D.; Virgo, Simon; Blum, Philipp
2013-10-01
Fractures, such as joints, faults and veins, strongly influence the transport of fluids through rocks by either enhancing or inhibiting flow. Techniques used for the automatic detection of lineaments from satellite images and aerial photographs, LIDAR technologies and borehole televiewers significantly enhanced data acquisition. The analysis of such data is often performed manually or with different analysis software. Here we present a novel program for the analysis of 2D fracture networks called FraNEP (Fracture Network Evaluation Program). The program was developed using Visual Basic for Applications in Microsoft Excel™ and combines features from different existing software and characterization techniques. The main novelty of FraNEP is the possibility to analyse trace-line maps of fracture networks applying the (1) scanline sampling, (2) window sampling or (3) circular scanline and window method, without the need of switching programs. Additionally, binning problems are avoided by using cumulative distributions, rather than probability density functions. FraNEP is a time-efficient tool for the characterisation of fracture network parameters, such as density, intensity and mean length. Furthermore, fracture strikes can be visualized using rose diagrams and a fitting routine evaluates the distribution of fracture lengths. As an example of its application, we use FraNEP to analyse a case study of lineament data from a satellite image of the Oman Mountains.
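The window-sampling statistics that FraNEP reports (density, intensity, mean length and a cumulative length distribution) can be sketched as below; the random trace segments and the P20/P21 labels are illustrative conventions used here, not output of the program itself.

```python
# Sketch: basic window-sampling statistics for a 2D fracture trace map.
# Trace coordinates are invented; censoring at the window edge is ignored.
import numpy as np

rng = np.random.default_rng(8)
window_area = 100.0 * 100.0                      # assumed 100 m x 100 m window

# Each trace as a segment from p1 to p2; random segments stand in for mapped traces.
p1 = rng.uniform(0.0, 100.0, size=(300, 2))
angles = rng.uniform(0.0, np.pi, 300)
lengths = rng.pareto(2.5, 300) + 1.0             # heavy-tailed trace lengths
p2 = p1 + np.column_stack([lengths * np.cos(angles), lengths * np.sin(angles)])

trace_lengths = np.hypot(*(p2 - p1).T)

density = trace_lengths.size / window_area       # traces per m^2 (often called P20)
intensity = trace_lengths.sum() / window_area    # trace length per m^2 (often called P21)
mean_length = trace_lengths.mean()

# Cumulative (complementary) length distribution, avoiding histogram binning.
sorted_lengths = np.sort(trace_lengths)
ccdf = 1.0 - np.arange(1, sorted_lengths.size + 1) / sorted_lengths.size

print(density, intensity, mean_length)
print(sorted_lengths[::100], ccdf[::100])
```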
Impact strength of small icy bodies that experienced multiple collisions
NASA Astrophysics Data System (ADS)
Yasui, Minami; Hayama, Ryo; Arakawa, Masahiko
2014-05-01
Frequent collisions are common for small bodies in the Solar System, and the cumulative damage to these bodies is thought to significantly affect their evolution. It is important to study the effects of multiple impacts such as the number of impacts on the impact strength and the ejection velocity of impact fragments. Here we conducted multiple-impact experiments using a polycrystalline water ice target, varying the number of impacts from 1 to 10 times. An ice cylindrical projectile was impacted at 84-502 m s⁻¹ by using a single-stage gas gun in a cold room between -10 and -15 °C. The impact strength of the ice target that experienced a single impact and multiple impacts is expressed by the total energy density applied to the same target, ΣQ, and this value was observed to be 77.6 J kg⁻¹. The number of fine impact fragments at a fragment mass normalized by an initial target mass, m/M_t0 ∼ 10⁻⁶, n_m, had a good correlation with the single energy density at each shot, Q_j, and the relationship was shown to be n_m = 10·Q_j^(1.31±0.12). We also estimated the cumulative damage of icy bodies as a total energy density accumulated by past impacts, according to the crater scaling laws proposed by Housen et al. (Housen, K.R., Schmidt, R.M., Holsapple, K.A. [1983]. J. Geophys. Res. 88, 2485-2499) of ice and the crater size distributions observed on Phoebe, a saturnian icy satellite. We found that the cumulative damage of Phoebe depended significantly on the impact speed of the impactor that formed the craters on Phoebe; and the cumulative damage was about one-third of the impact strength ΣQ* at 500 m s⁻¹ whereas it was almost zero at 3.2 km s⁻¹.
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
Postfragmentation density function for bacterial aggregates in laminar flow
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John
2014-01-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205
Wasser, Samuel K.; Hayward, Lisa S.; Hartman, Jennifer; Booth, Rebecca K.; Broms, Kristin; Berg, Jodi; Seely, Elizabeth; Lewis, Lyle; Smith, Heath
2012-01-01
State and federal actions to conserve northern spotted owl (Strix occidentalis caurina) habitat are largely initiated by establishing habitat occupancy. Northern spotted owl occupancy is typically assessed by eliciting their response to simulated conspecific vocalizations. However, proximity of barred owls (Strix varia)–a significant threat to northern spotted owls–can suppress northern spotted owl responsiveness to vocalization surveys and hence their probability of detection. We developed a survey method to simultaneously detect both species that does not require vocalization. Detection dogs (Canis familiaris) located owl pellets accumulated under roost sites, within search areas selected using habitat association maps. We compared success of detection dog surveys to vocalization surveys slightly modified from the U.S. Fish and Wildlife Service’s Draft 2010 Survey Protocol. Seventeen 2 km ×2 km polygons were each surveyed multiple times in an area where northern spotted owls were known to nest prior to 1997 and barred owl density was thought to be low. Mitochondrial DNA was used to confirm species from pellets detected by dogs. Spotted owl and barred owl detection probabilities were significantly higher for dog than vocalization surveys. For spotted owls, this difference increased with number of site visits. Cumulative detection probabilities of northern spotted owls were 29% after session 1, 62% after session 2, and 87% after session 3 for dog surveys, compared to 25% after session 1, increasing to 59% by session 6 for vocalization surveys. Mean detection probability for barred owls was 20.1% for dog surveys and 7.3% for vocal surveys. Results suggest that detection dog surveys can complement vocalization surveys by providing a reliable method for establishing occupancy of both northern spotted and barred owl without requiring owl vocalization. This helps meet objectives of Recovery Actions 24 and 25 of the Revised Recovery Plan for the Northern Spotted Owl. PMID:22916175
NASA Astrophysics Data System (ADS)
Peeters, L. J.; Mallants, D.; Turnadge, C.
2017-12-01
Groundwater impact assessments are increasingly being undertaken in a probabilistic framework whereby various sources of uncertainty (model parameters, model structure, boundary conditions, and calibration data) are taken into account. This has resulted in groundwater impact metrics being presented as probability density functions and/or cumulative distribution functions, spatial maps displaying isolines of percentile values for specific metrics, etc. Groundwater management, on the other hand, typically uses single values (i.e., in a deterministic framework) to evaluate what decisions are required to protect groundwater resources. For instance, in New South Wales, Australia, a nominal drawdown value of two metres is specified by the NSW Aquifer Interference Policy as a trigger-level threshold. In many cases, when drawdowns induced by groundwater extraction exceed two metres, "make-good" provisions are enacted (such as the surrendering of extraction licenses). The information obtained from a quantitative uncertainty analysis can be used to guide decision making in several ways. Two examples are discussed here: the first would not require modification of existing "deterministic" trigger or guideline values, whereas the second assumes that the regulatory criteria are also expressed in probabilistic terms. The first example is a straightforward interpretation of calculated percentile values for specific impact metrics. The second example goes a step further, as the previous deterministic thresholds do not currently allow for a probabilistic interpretation; e.g., there is no statement that "the probability of exceeding the threshold shall not be larger than 50%". It would indeed be sensible to have a set of thresholds with an associated acceptable probability of exceedance (or probability of not exceeding a threshold) that decreases as the impact increases. We here illustrate how both the prediction uncertainty and management rules can be expressed in a probabilistic framework, using groundwater metrics derived for a highly stressed groundwater system.
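The translation from an ensemble of probabilistic predictions to such a rule is straightforward; the sketch below computes percentiles and the probability of exceeding a 2 m drawdown trigger from a placeholder ensemble of model runs at one location.

```python
# Sketch: turn an ensemble of predicted drawdowns into percentiles and the
# probability of exceeding a 2 m trigger threshold. The ensemble is made up.
import numpy as np

rng = np.random.default_rng(9)
drawdown_m = rng.lognormal(mean=0.2, sigma=0.6, size=1000)   # hypothetical ensemble (m)

p10, p50, p90 = np.percentile(drawdown_m, [10, 50, 90])
prob_exceed_trigger = np.mean(drawdown_m > 2.0)

print(f"P10/P50/P90 drawdown: {p10:.2f} / {p50:.2f} / {p90:.2f} m")
print(f"P(drawdown > 2 m) = {prob_exceed_trigger:.2f}")   # compare with, e.g., a 50% rule
```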
Wasser, Samuel K; Hayward, Lisa S; Hartman, Jennifer; Booth, Rebecca K; Broms, Kristin; Berg, Jodi; Seely, Elizabeth; Lewis, Lyle; Smith, Heath
2012-01-01
State and federal actions to conserve northern spotted owl (Strix occidentalis caurina) habitat are largely initiated by establishing habitat occupancy. Northern spotted owl occupancy is typically assessed by eliciting their response to simulated conspecific vocalizations. However, proximity of barred owls (Strix varia)-a significant threat to northern spotted owls-can suppress northern spotted owl responsiveness to vocalization surveys and hence their probability of detection. We developed a survey method to simultaneously detect both species that does not require vocalization. Detection dogs (Canis familiaris) located owl pellets accumulated under roost sites, within search areas selected using habitat association maps. We compared success of detection dog surveys to vocalization surveys slightly modified from the U.S. Fish and Wildlife Service's Draft 2010 Survey Protocol. Seventeen 2 km × 2 km polygons were each surveyed multiple times in an area where northern spotted owls were known to nest prior to 1997 and barred owl density was thought to be low. Mitochondrial DNA was used to confirm species from pellets detected by dogs. Spotted owl and barred owl detection probabilities were significantly higher for dog than vocalization surveys. For spotted owls, this difference increased with number of site visits. Cumulative detection probabilities of northern spotted owls were 29% after session 1, 62% after session 2, and 87% after session 3 for dog surveys, compared to 25% after session 1, increasing to 59% by session 6 for vocalization surveys. Mean detection probability for barred owls was 20.1% for dog surveys and 7.3% for vocal surveys. Results suggest that detection dog surveys can complement vocalization surveys by providing a reliable method for establishing occupancy of both northern spotted and barred owl without requiring owl vocalization. This helps meet objectives of Recovery Actions 24 and 25 of the Revised Recovery Plan for the Northern Spotted Owl.
NASA Astrophysics Data System (ADS)
Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi
2016-04-01
Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational cost (a few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
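A minimal sketch of the kind of one-point statistics mentioned above (mean, standard deviation, P10/P50/P90, and an empirical CDF), computed here from a purely synthetic saturation ensemble rather than by the authors' distribution method:

```python
import numpy as np

rng = np.random.default_rng(1)
saturation = rng.beta(a=2.0, b=5.0, size=5_000)   # hypothetical saturation ensemble in [0, 1]

mean, std = saturation.mean(), saturation.std(ddof=1)
p10, p50, p90 = np.percentile(saturation, [10, 50, 90])

# Empirical CDF evaluated on a regular grid of saturation values
grid = np.linspace(0.0, 1.0, 101)
ecdf = np.searchsorted(np.sort(saturation), grid, side="right") / saturation.size

print(f"mean={mean:.3f} std={std:.3f}  P10={p10:.3f} P50={p50:.3f} P90={p90:.3f}")
print(f"P(S <= 0.5) = {ecdf[50]:.3f}")
```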
Model-Free CUSUM Methods for Person Fit
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Shi, Min
2009-01-01
This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
An Attachment Theory Approach to Narrating the Faith Journey of Children of Parental Divorce
ERIC Educational Resources Information Center
Kiesling, Chris
2011-01-01
This study explores the effects of parental divorce on a child's faith. Drawing from attachment theory, Granqvist and Kirkpatrick proposed two probable developmental pathways to religion. For those with secure attachment, whose cumulative experiences of sensitive, religious caregivers enhance the development of a God image as loving; belief…
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
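The two statistics advocated above are straightforward to compute from the empirical cumulative distribution of unsigned errors; the sketch below uses an invented error vector and an arbitrary threshold of 1.0 in whatever units the benchmark uses.

```python
import numpy as np

errors = np.array([-1.2, 0.4, 2.1, -0.3, 0.9, -2.8, 0.1, 1.5, -0.6, 3.2])  # hypothetical signed errors
abs_err = np.sort(np.abs(errors))

# (1) probability that a new calculation has an absolute error below a chosen threshold
threshold = 1.0
p_below = np.mean(abs_err < threshold)

# (2) amplitude of error not exceeded with, e.g., 95% confidence
q95 = np.quantile(abs_err, 0.95)

print(f"P(|error| < {threshold}) = {p_below:.2f}")
print(f"95% of absolute errors are below {q95:.2f}")
```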
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer-implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
A vessel noise budget for Admiralty Inlet, Puget Sound, Washington (USA).
Bassett, Christopher; Polagye, Brian; Holt, Marla; Thomson, Jim
2012-12-01
One calendar year of Automatic Identification System (AIS) ship-traffic data was paired with hydrophone recordings to assess ambient noise in northern Admiralty Inlet, Puget Sound, WA (USA) and to quantify the contribution of vessel traffic. The study region included inland waters of the Salish Sea within a 20 km radius of the hydrophone deployment site. Spectra and hourly, daily, and monthly ambient noise statistics for unweighted broadband (0.02-30 kHz) and marine mammal, or M-weighted, sound pressure levels showed variability driven largely by vessel traffic. Over the calendar year, 1363 unique AIS transmitting vessels were recorded, with at least one AIS transmitting vessel present in the study area 90% of the time. A vessel noise budget was calculated for all vessels equipped with AIS transponders. Cargo ships were the largest contributor to the vessel noise budget, followed by tugs and passenger vessels. A simple model to predict received levels at the site based on an incoherent summation of noise from different vessels resulted in a cumulative probability density function of broadband sound pressure levels that shows good agreement with 85% of the temporal data.
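The incoherent-summation step can be illustrated in a few lines: per-vessel received levels (invented values below) are converted to intensities, summed, and converted back to decibels.

```python
import numpy as np

received_levels_db = np.array([105.0, 98.0, 92.5])      # hypothetical per-vessel levels at the site
total_db = 10.0 * np.log10(np.sum(10.0 ** (received_levels_db / 10.0)))
print(f"combined broadband level = {total_db:.1f} dB")   # ~106 dB, dominated by the loudest ship
```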
Balsillie, J.H.; Donoghue, J.F.; Butler, K.M.; Koch, J.L.
2002-01-01
Two-dimensional plotting tools can be of invaluable assistance in analytical scientific pursuits, and have been widely used in the analysis and interpretation of sedimentologic data. We consider, in this work, the use of arithmetic probability paper (APP). Most statistical computer applications do not allow for the generation of APP plots, because of apparent intractable nonlinearity of the percentile (or probability) axis of the plot. We have solved this problem by identifying an equation(s) for determining plotting positions of Gaussian percentiles (or probabilities), so that APP plots can easily be computer generated. An EXCEL example is presented, and a programmed, simple-to-use EXCEL application template is hereby made publicly available, whereby a complete granulometric analysis including data listing, moment measure calculations, and frequency and cumulative APP plots, is automatically produced.
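The key to computer-generated APP plots is that the percentile axis becomes linear under the inverse normal cumulative distribution function; below is a sketch (in Python rather than EXCEL, with invented grain-size data) of the plotting-position calculation.

```python
import numpy as np
from scipy.stats import norm

sizes_phi = np.sort(np.array([1.2, 1.6, 1.9, 2.1, 2.3, 2.4, 2.7, 3.0, 3.4, 3.9]))  # hypothetical data
n = sizes_phi.size
cum_freq = (np.arange(1, n + 1) - 0.5) / n     # one common plotting-position convention
probit_axis = norm.ppf(cum_freq)               # linearized "probability" coordinate

for x, z in zip(sizes_phi, probit_axis):
    print(f"{x:4.1f} phi  ->  probit {z:+.2f}")
# Plotting sizes_phi against probit_axis yields a straight line for Gaussian data.
```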
The Italian national trends in smoking initiation and cessation according to gender and education.
Sardu, C; Mereu, A; Minerba, L; Contu, P
2009-09-01
OBJECTIVES. This study aims to assess the trends in initiation and cessation of smoking across successive birth cohorts, according to gender and education, in order to provide useful suggestions for tobacco control policy. STUDY DESIGN. The study is based on data from the "Health conditions and resort to sanitary services" survey carried out in Italy from October 2004 to September 2005 by the National Institute of Statistics. Through a multi-sampling procedure, a sample representative of the entire national territory was selected. In order to calculate trends in smoking initiation and cessation, data were stratified by birth cohort, gender, and education level, and analyzed through the life table method. The cumulative probability of smoking initiation, across subsequent generations, shows a downward trend followed by a plateau. This result highlights that there is no evidence to support the hypothesis that smoking initiation is occurring at earlier ages. The cumulative probability of quitting, across subsequent generations, follows an upward trend, highlighting the growing tendency of smokers to become "early quitters" who give up before 30 years of age. The results suggest that the Italian antismoking approach, for the most part targeted at preventing the initiation of smoking by emphasising its negative consequences, has an effect on early smoking cessation. Health policies should reinforce the existing trend of early quitting through specific actions. In addition, our results show that men with low education exhibit the highest probability of smoking initiation and the lowest probability of early quitting, and should therefore be targeted with special attention.
Asymptotic behavior of the daily increment distribution of the IPC, the mexican stock market index
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-02-01
In this work, a statistical analysis of the distribution of daily fluctuations of the IPC, the Mexican Stock Market Index, is presented. A sample of the IPC covering the 13-year period 04/19/1990 - 08/21/2003 was analyzed and the cumulative probability distribution of its daily logarithmic variations studied. Results showed that the cumulative distribution function for extreme variations can be described by a Pareto-Lévy model with shape parameters α = 3.634 ± 0.272 and α = 3.540 ± 0.278 for its positive and negative tails, respectively. This result is consistent with previous studies, where it has been found that 2.5 < α < 4 for other financial markets worldwide.
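For readers wanting to reproduce this kind of tail analysis, the sketch below applies a Hill-type tail-exponent estimate to synthetic heavy-tailed returns; it is not the estimator used by the authors, and the sample, degrees of freedom, and cut-off k are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.standard_t(df=3.5, size=5_000)   # hypothetical returns with tail exponent ~ 3.5
tail = np.sort(np.abs(returns))[::-1]          # descending order statistics

k = 200                                        # number of tail observations used
hill_alpha = k / np.sum(np.log(tail[:k] / tail[k]))
print(f"estimated tail exponent alpha ~ {hill_alpha:.2f}")
```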
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
Jiang, Zhang; Chen, Wei
2017-11-03
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the `effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
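A sketch of the general construction, using a skew-normal profile as one concrete example of a skew-symmetric density; the layer densities, skewness parameter, and depth grid are invented, and the Parratt recursion itself is not shown.

```python
import numpy as np
from scipy.stats import skewnorm

rho_top, rho_bottom = 1.0, 2.3                    # hypothetical densities of adjacent layers
z = np.linspace(-30.0, 30.0, 601)                 # depth grid (angstroms), thin slices
interface_cdf = skewnorm.cdf(z, a=4.0, loc=0.0, scale=8.0)   # asymmetric transition profile
rho = rho_top + (rho_bottom - rho_top) * interface_cdf

slice_thickness = z[1] - z[0]
print(f"{z.size} slices of thickness {slice_thickness:.2f} A; "
      f"density rises from {rho[0]:.2f} to {rho[-1]:.2f}")
# Each (thickness, density) pair could then feed a Parratt-type recursion for reflectivity.
```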
Stoll, Richard; Cappel, I; Jablonski-Momeni, Anahita; Pieper, K; Stachniss, V
2007-01-01
This study evaluated the long-term survival of inlays and partial crowns made of IPS Empress. For this purpose, the patient data of a prospective study were examined in retrospect and statistically evaluated. All of the inlays and partial crowns fabricated of IPS Empress within the Department of Operative Dentistry at the School of Dental Medicine of Philipps University, Marburg, Germany were systematically recorded in a database between 1991 and 2001. The corresponding patient files were reviewed at the end of 2001. The information gathered in this way was used to evaluate the survival of the restorations using the method described by Kaplan and Meier. A total of n = 1624 restorations were fabricated of IPS Empress within the observation period. During this time, n = 53 failures were recorded. The remaining restorations were observed for a mean period of 18.77 months. The failures were mainly attributed to fractures, endodontic problems and cementation errors. The last failure was established after 82 months. At this stage, a cumulative survival probability of p = 0.81 was registered with a standard error of 0.04. At this time, n = 30 restorations were still being observed. Restorations on vital teeth (n = 1588) showed 46 failures, with a cumulative survival probability of p = 0.82. Restorations performed on non-vital teeth (n = 36) showed seven failures, with a cumulative survival probability of p = 0.53. Highly significant differences were found between the two groups (p < 0.0001) in a log-rank test. No significant difference (p = 0.41) was found between the patients treated by students (n = 909) and those treated by qualified dentists (n = 715). Likewise, no difference (p = 0.13) was established between the restorations seated with a high-viscosity cement (n = 295) and those placed with a low-viscosity cement (n = 1329).
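The cumulative survival probabilities reported here come from the Kaplan-Meier product-limit estimator, which is short enough to sketch directly; the failure times and censoring flags below are illustrative, not the study's data.

```python
import numpy as np

times    = np.array([12, 24, 24, 36, 48, 60, 60, 72, 82, 90])      # months in service
censored = np.array([False, True, False, True, False, True, True, True, False, True])

survival = 1.0
for t in np.unique(times[~censored]):            # distinct failure times
    d = np.sum((times == t) & ~censored)         # failures at time t
    n = np.sum(times >= t)                       # restorations still at risk at t
    survival *= 1.0 - d / n
    print(f"t = {t:3d} months: cumulative survival probability = {survival:.3f}")
```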
Embolic Strokes of Undetermined Source in the Athens Stroke Registry: An Outcome Analysis.
Ntaios, George; Papavasileiou, Vasileios; Milionis, Haralampos; Makaritsis, Konstantinos; Vemmou, Anastasia; Koroboki, Eleni; Manios, Efstathios; Spengos, Konstantinos; Michel, Patrik; Vemmos, Konstantinos
2015-08-01
Information about outcomes in Embolic Stroke of Undetermined Source (ESUS) patients is unavailable. This study provides a detailed analysis of outcomes of a large ESUS population. Data set was derived from the Athens Stroke Registry. ESUS was defined according to the Cryptogenic Stroke/ESUS International Working Group criteria. End points were mortality, stroke recurrence, functional outcome, and a composite cardiovascular end point comprising recurrent stroke, myocardial infarction, aortic aneurysm rupture, systemic embolism, or sudden cardiac death. We performed Kaplan-Meier analyses to estimate cumulative probabilities of outcomes by stroke type and Cox-regression to investigate whether stroke type was outcome predictor. 2731 patients were followed-up for a mean of 30.5±24.1months. There were 73 (26.5%) deaths, 60 (21.8%) recurrences, and 78 (28.4%) composite cardiovascular end points in the 275 ESUS patients. The cumulative probability of survival in ESUS was 65.6% (95% confidence intervals [CI], 58.9%-72.2%), significantly higher compared with cardioembolic stroke (38.8%, 95% CI, 34.9%-42.7%). The cumulative probability of stroke recurrence in ESUS was 29.0% (95% CI, 22.3%-35.7%), similar to cardioembolic strokes (26.8%, 95% CI, 22.1%-31.5%), but significantly higher compared with all types of noncardioembolic stroke. One hundred seventy-two (62.5%) ESUS patients had favorable functional outcome compared with 280 (32.2%) in cardioembolic and 303 (60.9%) in large-artery atherosclerotic. ESUS patients had similar risk of composite cardiovascular end point as all other stroke types, with the exception of lacunar strokes, which had significantly lower risk (adjusted hazard ratio, 0.70 [95% CI, 0.52-0.94]). Long-term mortality risk in ESUS is lower compared with cardioembolic strokes, despite similar rates of recurrence and composite cardiovascular end point. Recurrent stroke risk is higher in ESUS than in noncardioembolic strokes. © 2015 American Heart Association, Inc.
Ayzac, L; Girard, R; Baboi, L; Beuret, P; Rabilloud, M; Richard, J C; Guérin, C
2016-05-01
The goal of this study was to assess the impact of prone positioning on the incidence of ventilator-associated pneumonia (VAP) and the role of VAP in mortality in a recent multicenter trial performed on patients with severe ARDS. This was an ancillary study of a prospective multicenter randomized controlled trial on early prone positioning in patients with severe ARDS. In suspected cases of VAP the diagnosis was based on positive quantitative cultures of bronchoalveolar lavage fluid or tracheal aspirate at the 10^4 and 10^7 CFU/ml thresholds, respectively. The VAP cases were then subject to central, independent adjudication. The cumulative probabilities of VAP were estimated in each position group using the Aalen-Johansen estimator and compared using Gray's test. Univariate and multivariate Cox models were used to assess the impact of VAP, included as a time-dependent covariate, on the mortality hazard during the ICU stay. In the supine and prone position groups, the incidence rate of VAP was 1.18 (0.86-1.60) and 1.54 (1.15-2.02) per 100 days of invasive mechanical ventilation (p = 0.10), respectively. The cumulative probability of VAP at 90 days was estimated at 46.5% (27-66) in the prone group and at 33.5% (23-44) in the supine group. The difference between the two cumulative probability curves was not statistically significant (p = 0.11). In the univariate Cox model, VAP was associated with an increase in the mortality rate during the ICU stay [HR 1.65 (1.05-2.61), p = 0.03]. The HR increased to 2.2 (1.39-3.52) (p < 0.001) after adjustment for position group, age, SOFA score, McCabe score, and immunodeficiency. In severe ARDS patients prone positioning did not reduce the incidence of VAP, and VAP was associated with higher mortality.
Bissonnette, Sarah; Goeres, Leah M; Lee, David S H
2016-01-01
To characterize the pharmacy density in rural and urban communities with hospitals and to examine its association with readmission rates. Ecologic study. Forty-eight rural and urban primary care service areas (PCSAs) in the state of Oregon. All hospitals in the state of Oregon. Pharmacy data were obtained from the Oregon Board of Pharmacy based on active licensure. Pharmacy density was calculated by determining the cumulative number of outpatient pharmacy hours in a PCSA. Oregon hospital 30-day all-cause readmission rates were obtained from the Centers for Medicare and Medicaid Services and were determined with the use of claims data of patients 65 years of age or older who were readmitted to the hospital within 30 days from July 2012 to June 2013. Readmission rates for Oregon hospitals ranged from 13.5% to 16.5%. The cumulative number of pharmacy hours in PCSAs containing a hospital ranged from 54 to 3821 hours. As pharmacy density increased, the readmission rates decreased, asymptotically approaching a predicted 14.7% readmission rate for areas with high pharmacy density. Urban hospitals were in communities likely to have more pharmacy access compared with rural hospitals. Future research should determine if increasing pharmacy access affects readmission rates, especially in rural communities. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Mital, Subodh K.; Shah, Ashwin R.
1997-01-01
The properties of ceramic matrix composites (CMC's) are known to display a considerable amount of scatter due to variations in fiber/matrix properties, interphase properties, interphase bonding, amount of matrix voids, and many geometry- or fabrication-related parameters, such as ply thickness and ply orientation. This paper summarizes preliminary studies in which formal probabilistic descriptions of the material-behavior- and fabrication-related parameters were incorporated into micromechanics and macromechanics for CMC'S. In this process two existing methodologies, namely CMC micromechanics and macromechanics analysis and a fast probability integration (FPI) technique are synergistically coupled to obtain the probabilistic composite behavior or response. Preliminary results in the form of cumulative probability distributions and information on the probability sensitivities of the response to primitive variables for a unidirectional silicon carbide/reaction-bonded silicon nitride (SiC/RBSN) CMC are presented. The cumulative distribution functions are computed for composite moduli, thermal expansion coefficients, thermal conductivities, and longitudinal tensile strength at room temperature. The variations in the constituent properties that directly affect these composite properties are accounted for via assumed probabilistic distributions. Collectively, the results show that the present technique provides valuable information about the composite properties and sensitivity factors, which is useful to design or test engineers. Furthermore, the present methodology is computationally more efficient than a standard Monte-Carlo simulation technique; and the agreement between the two solutions is excellent, as shown via select examples.
[Genetic polymorphisms of 21 non-CODIS STR loci].
Shao, Wei-bo; Zhang, Su-hua; Li, Li
2011-02-01
To investigate the genetic polymorphisms of 21 non-CODIS STR loci in the Han population from eastern China and to explore their value in forensic applications. The 21 non-CODIS STR loci were amplified with the AGCU 21+1 STR kit from DNA samples obtained from 225 unrelated individuals of the Han population in eastern China. The PCR products were analyzed with a 3130 Genetic Analyzer and genotyped with GeneMapper ID v3.2 software. The genetic data were statistically analyzed with PowerStats v12.xls and Cervus 2.0 software. The distributions of the 21 non-CODIS STR loci satisfied Hardy-Weinberg equilibrium. The heterozygosity (H) values ranged from 0.596 to 0.804, the discrimination power (DP) from 0.764 to 0.948, the probability of exclusion for duo testing (PEduo) from 0.176 to 0.492, the probability of exclusion for trio testing (PEtrio) from 0.334 to 0.663, and the polymorphic information content (PIC) from 0.522 to 0.807. The cumulative probability of exclusion (CPE) for duo testing was 0.999707, the CPE for trio testing was 0.9999994, and the cumulative discrimination power (CDP) was 0.99999999999999999994. The 21 non-CODIS STR loci are highly polymorphic. They can be effectively used for personal identification and paternity testing in trio cases, and can also serve as a supplement in difficult duo paternity testing cases.
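The cumulative statistics follow from the per-locus values by the usual complement-product rule, CPE = 1 - Π(1 - PE_i) and CDP = 1 - Π(1 - DP_i); a sketch with placeholder per-locus values within the reported ranges:

```python
import numpy as np

pe_trio = np.array([0.33, 0.45, 0.50, 0.61, 0.66])   # hypothetical per-locus PEtrio values
dp      = np.array([0.76, 0.83, 0.88, 0.91, 0.95])   # hypothetical per-locus DP values

cpe = 1.0 - np.prod(1.0 - pe_trio)                   # cumulative probability of exclusion
cdp = 1.0 - np.prod(1.0 - dp)                        # cumulative discrimination power
print(f"CPE = {cpe:.4f}, CDP = {cdp:.6f}")
```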
Probability and Quantum Paradigms: the Interplay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kracklauer, A. F.
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
NASA Astrophysics Data System (ADS)
Khan, Tasneem M. A.; Khan, Asiya; Sarawade, Pradip B.
2018-05-01
We report a method to synthesize low-density transparent mesoporous silica aerogel beads by ambient pressure drying (APD). The beads were prepared by acid-base sol-gel polymerization of sodium silicate via the ball dropping method (BDM). To minimize shrinkage during drying, wet silica beads were initially prepared; their surfaces were then modified using trimethylchlorosilane (TMCS) via simultaneous solvent exchange and surface modification. The specific surface area and cumulative pore volume of the silica aerogel beads increased with an increase in the %V of TMCS. Silica aerogel beads with low packing bed density, high surface area, and large cumulative pore volume were obtained when TMCS was used. Properties of the final product were examined by BET and TG-DT analyses. The hydrophobic silica aerogel beads were thermally stable up to 350°C. We discuss our results and compare our findings for modified versus unmodified silica beads.
Kinetic field theory: exact free evolution of Gaussian phase-space correlations
NASA Astrophysics Data System (ADS)
Fabis, Felix; Kozlikin, Elena; Lilow, Robert; Bartelmann, Matthias
2018-04-01
In recent work we developed a description of cosmic large-scale structure formation in terms of non-equilibrium ensembles of classical particles, with time evolution obtained in the framework of a statistical field theory. In these works, the initial correlations between particles sampled from random Gaussian density and velocity fields have so far been treated perturbatively or restricted to pure momentum correlations. Here we treat the correlations between all phase-space coordinates exactly by adopting a diagrammatic language for the different forms of correlations, directly inspired by the Mayer cluster expansion. We will demonstrate that explicit expressions for phase-space density cumulants of arbitrary n-point order, which fully capture the non-linear coupling of free streaming kinematics due to initial correlations, can be obtained from a simple set of Feynman rules. These cumulants will be the foundation for future investigations of perturbation theory in particle interactions.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
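A sketch of the criterion being minimized (not the program itself): for a candidate projection vector b, the classes become one-dimensional normals and the Bayes probability of misclassification is the integral of the smaller weighted density. All class parameters below are invented, and only the evaluation of the criterion, not the optimization over b, is shown.

```python
import numpy as np

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
S1 = np.array([[1.0, 0.3], [0.3, 1.0]])
S2 = np.array([[1.5, -0.2], [-0.2, 0.8]])
p1, p2 = 0.5, 0.5                                    # a priori class probabilities

def misclassification(b):
    m1, v1 = b @ mu1, b @ S1 @ b                     # projected means and variances
    m2, v2 = b @ mu2, b @ S2 @ b
    x = np.linspace(min(m1, m2) - 6 * np.sqrt(max(v1, v2)),
                    max(m1, m2) + 6 * np.sqrt(max(v1, v2)), 4001)
    f1 = np.exp(-(x - m1) ** 2 / (2 * v1)) / np.sqrt(2 * np.pi * v1)
    f2 = np.exp(-(x - m2) ** 2 / (2 * v2)) / np.sqrt(2 * np.pi * v2)
    return np.trapz(np.minimum(p1 * f1, p2 * f2), x)  # one-dimensional Bayes error

print(f"error for b = [1, 0]: {misclassification(np.array([1.0, 0.0])):.4f}")
print(f"error for b = [1, 1]: {misclassification(np.array([1.0, 1.0])):.4f}")
```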
Li, Ni; Xu, Jing-Hang; Yu, Min; Wang, Sa; Si, Chong-Wen; Yu, Yan-Yan
2015-11-21
To investigate whether long-term low-level hepatitis B virus (HBV) DNA influences dynamic changes of the FIB-4 index in chronic hepatitis B (CHB) patients receiving entecavir (ETV) therapy with partial virological responses. We retrospectively analyzed 231 nucleos(t)ide (NA) naïve CHB patients from our previous study (NCT01926288) who received continuous ETV or ETV maleate therapy for three years. The patients were divided into partial virological response (PVR) and complete virological response (CVR) groups according to serum HBV DNA levels at week 48. Seventy-six patients underwent biopsies at baseline and at 48 wk. The performance of the FIB-4 index and area under the receiver operating characteristic (AUROC) curve for predicting fibrosis were determined for the patients undergoing biopsy. The primary objective of the study was to compare the cumulative probabilities of virological responses between the two groups during the treatment period. The secondary outcome was to observe dynamic changes of the FIB-4 index between CVR patients and PVR patients. For hepatitis B e antigen (HBeAg)-positive patients (n = 178), the cumulative probability of achieving undetectable levels at week 144 was 95% and 69% for CVR and PVR patients, respectively (P < 0.001). In the Cox proportional hazards model, a lower pretreatment serum HBV DNA level was an independent factor predicting maintained viral suppression. The cumulative probability of achieving undetectable levels of HBV DNA for HBeAg-negative patients (n = 53) did not differ between the two groups. The FIB-4 index efficiently identified fibrosis, with an AUROC of 0.80 (95%CI: 0.69-0.89). For HBeAg-positive patients, the FIB-4 index was higher in CVR patients than in PVR patients at baseline (1.89 ± 1.43 vs 1.18 ± 0.69, P < 0.001). There was no significant difference in the reduction of the FIB-4 index between the CVR and PVR groups from weeks 48 to 144 (-0.11 ± 0.47 vs -0.13 ± 0.49, P = 0.71). At week 144, the FIB-4 index levels were similar between the two groups (1.24 ± 0.87 vs 1.02 ± 0.73, P = 0.06). After multivariate logistic regression analysis, a lower baseline serum HBV DNA level was associated with improvement of liver fibrosis. In HBeAg-negative patients, the FIB-4 index did not differ between the two groups. The cumulative probabilities of HBV DNA responses showed significant differences between CVR and PVR HBeAg-positive CHB patients undergoing entecavir treatment for 144 wk. However, long-term low-level HBV DNA did not deteriorate the FIB-4 index, which was used to evaluate liver fibrosis, at the end of three years.
Average luminosity distance in inhomogeneous universes
NASA Astrophysics Data System (ADS)
Kostov, Valentin Angelov
Using numerical ray tracing, the paper studies how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. Unlike previous studies, the averaging is exact and non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovae inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision and suggests a method to extract the background Hubble constant from low-redshift data without the need to correct for peculiar velocities.
Efficacy and safety of medical therapy for low bone mineral density in patients with Crohn disease
Zhao, Xiaojing; Zhou, Changcheng; Chen, Han; Ma, Jingjing; Zhu, Yunjuan; Wang, Peixue; Zhang, Yi; Ma, Haiqin; Zhang, Hongjie
2017-01-01
Abstract Background: Low bone mineral density (BMD) is a frequent complication of inflammatory bowel disease (IBD), particularly in patients with Crohn disease (CD). The aim of our study is to determine the efficacy and safety of different drugs used to treat low BMD in patients with CD. Methods: PUBMED/MEDLINE, EMBASE, and Cochrane Central Register of Controlled Trials were searched for eligible studies. A random-effects model within a Bayesian framework was applied to compare treatment effects as standardized mean difference (SMD) with their corresponding 95% credible interval (CrI), while odds ratio (OR) was applied to compare adverse events with 95% CrI. The surface under the cumulative ranking area (SUCRA) was calculated to make the ranking of the treatments for outcomes. Results: Twelve randomized controlled trials (RCTs) were eligible. Compared with placebo, zoledronate (SMDs 2.74, 95% CrI 1.36–4.11) and sodium-fluoride (SMDs 1.23, 95% CrI 0.19–2.26) revealed statistical significance in increasing lumbar spine BMD (LSBMD). According to SUCRA ranking, zoledronate (SUCRA = 2.5%) might have the highest probability to be the best treatment for increasing LSBMD in CD patients among all agents, followed by sodium-fluoride (27%). For safety assessment, the incidence of adverse events (AEs) demonstrated no statistical difference between agents and placebo. The corresponding SUCRA values indicated that risedronate (SUCRA = 77%) might be the most safe medicine for low BMD in CD patients and alendronate ranked the worst (SUCRA = 16%). Conclusions: Zoledronate might have the highest probability to be the best therapeutic strategy for increasing LSBMD. For the safety assessment, risedronate showed the greatest trend to decrease the risk of AEs. In the future, more RCTs with higher qualities are needed to make head-to-head comparison between 2 or more treatments. PMID:28296781
Chronostratigraphical Subdivision of the Late Glacial and the Holocene for the Alaska Region
NASA Astrophysics Data System (ADS)
Michczynska, D. J.; Hajdas, I.
2009-04-01
Our work is a form of data mining. The first step was the collection of radiocarbon dates for samples from Alaska. We constructed a database using the Radiocarbon Measurements Lists published by various radiocarbon laboratories (mainly in the journal 'Radiocarbon'). The next step was a careful analysis of the collected dates: we excluded all dates suspected of contamination by younger or older organic matter, which could be identified, for instance, from an inconsistency between radiocarbon age and stratigraphy or palynology. Finally, we calibrated the whole set of selected radiocarbon dates and constructed a probability density function (PDF). Analysis of the shape of the PDF was the subject of previous research (e.g. Michczyńska and Pazdur, 2004; Macklin et al., 2006; Starkel et al., 2006; Michczyńska et al., 2007). Our analysis takes into account the distinct tendency to collect samples from specific horizons: it is a general rule to take samples for radiocarbon dating from places with visible sedimentation changes or changes in the palynological diagram. The culminations of the PDF therefore represent periods of environmental change and can help identify chronostratigraphical boundaries on the calendar time scale. References: Michczyńska D.J., Pazdur A., 2004. A shape analysis of cumulative probability density function of radiocarbon dates set in the study of climate change in Late Glacial and Holocene. Radiocarbon 46(2): 733-744. Michczyńska D.J., Michczyński A., Pazdur A., 2007. Frequency distribution of radiocarbon dates as a tool for reconstructing environmental changes. Radiocarbon 49(2): 799-806. Macklin M.G., Benito G., Gregory K.J., Johnstone E., Lewin J., Michczyńska D.J., Soja R., Starkel L., Thorndycraft V.R., 2006. Past hydrological events reflected in the Holocene fluvial record of Europe. CATENA 66: 145-154. Starkel L., Soja R., Michczyńska D.J., 2006. Past hydrological events reflected in Holocene history of Polish rivers. CATENA 66: 24-33.
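A sketch of the summed-PDF idea, skipping the calibration-curve step and treating each date as a normal density; the ages and uncertainties are invented.

```python
import numpy as np

ages_bp = np.array([11200, 10150, 10100, 8300, 8250, 8200, 4500])   # hypothetical calendar ages (cal BP)
sigmas  = np.array([  120,    90,    80,   70,   60,   75,   50])

t = np.arange(3000, 13001, 10)                 # calendar axis, cal BP
pdf = np.zeros_like(t, dtype=float)
for mu, sd in zip(ages_bp, sigmas):
    pdf += np.exp(-0.5 * ((t - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
pdf /= len(ages_bp)                            # normalize so the summed PDF integrates to ~1

print(f"largest culmination near {t[np.argmax(pdf)]} cal BP")
```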
Analysis of the progressive failure of brittle matrix composites
NASA Technical Reports Server (NTRS)
Thomas, David J.
1995-01-01
This report investigates two of the most common modes of localized failures, namely, periodic fiber-bridged matrix cracks and transverse matrix cracks. A modification of Daniels' bundle theory is combined with Weibull's weakest link theory to model the statistical distribution of the periodic matrix cracking strength for an individual layer. Results of the model predictions are compared with experimental data from the open literature. Extensions to the model are made to account for possible imperfections within the layer (i.e., nonuniform fiber lengths, irregular crack spacing, and degraded in-situ fiber properties), and the results of these studies are presented. A generalized shear-lag analysis is derived which is capable of modeling the development of transverse matrix cracks in material systems having a general multilayer configuration and under states of full in-plane load. A method for computing the effective elastic properties for the damaged layer at the global level is detailed based upon the solution for the effects of the damage at the local level. This methodology is general in nature and is therefore also applicable to (0(sub m)/90(sub n))(sub s) systems. The characteristic stress-strain response for more general cases is shown to be qualitatively correct (experimental data is not available for a quantitative evaluation), and the damage evolution is recorded in terms of the matrix crack density as a function of the applied strain. Probabilistic effects are introduced to account for the statistical nature of the material strengths, thus allowing cumulative distribution curves for the probability of failure to be generated for each of the example laminates. Additionally, Oh and Finney's classic work on fracture location in brittle materials is extended and combined with the shear-lag analysis. The result is an analytical form for predicting the probability density function for the location of the next transverse crack occurrence within a crack bounded region. The results of this study verified qualitatively the validity of assuming a uniform crack spacing (as was done in the shear-lag model).
ERIC Educational Resources Information Center
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
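The contrast between classic and generalized MaxEnt can be sketched on a toy die example (not the paper's calculation): a mean constraint gives an exponential-family solution, and drawing the constraint value from a Gaussian produces a spread of MaxEnt distributions.

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)                                  # outcomes of a six-sided die

def maxent_probs(mean_value):
    # Solve for lambda so that p_i proportional to exp(lambda * x_i) has the required mean.
    def mean_gap(lam):
        w = np.exp(lam * x)
        return (w @ x) / w.sum() - mean_value
    lam = brentq(mean_gap, -5.0, 5.0)
    w = np.exp(lam * x)
    return w / w.sum()

print("classic MaxEnt for mean 4.5 :", np.round(maxent_probs(4.5), 3))

# Generalized version: the constraint value itself is uncertain, mean ~ N(4.5, 0.2^2)
rng = np.random.default_rng(3)
samples = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.2, size=500)])
print("posterior mean probabilities:", np.round(samples.mean(axis=0), 3))
print("posterior std of p(6)       :", np.round(samples.std(axis=0)[-1], 3))
```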
Large-deviation theory for diluted Wishart random matrices
NASA Astrophysics Data System (ADS)
Castillo, Isaac Pérez; Metz, Fernando L.
2018-03-01
Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economics. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ ℝ+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{-N Ψ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
Qin, Guangming; Lu, Lihong; Xiao, Yufei; Zhu, Yimiao; Pan, Wensheng; Xu, Xiang; Shen, Shengrong; Das, Undurti N
2014-07-28
The aim of this study was to investigate the possible correlation between levels of serum liver enzymes and impaired fasting glucose (IFG) in Chinese adults and to provide a new perspective for the prevention of pre-diabetes. Serum liver enzymes, including alanine aminotransferase (ALT), aspartate aminotransferase (AST), and γ-glutamyl transferase (GGT), as well as plasma glucose, blood lipids, and insulin, were measured. The cumulative incidences of IFG between different quartiles of liver enzymes were compared by the chi-square test. A logistic regression model (binary regression) was used to calculate the odds ratio (OR) of IFG with 95% confidence interval (95% CI). The total incidence of IFG was 20.3% and the cumulative incidence of IFG was higher in men than in women. In both sexes, IFG was more prevalent in higher quartiles of liver enzymes. After adjusting for age, BMI, blood pressure, triglycerides (TG), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), and total cholesterol (TC), the cumulative incidences of IFG were significantly higher in the highest quartiles of liver enzymes than in the lowest quartiles. A significantly higher cumulative incidence of IFG was found in the highest GGT quartile than in the lowest quartile for women. The results of this study suggest that serum liver enzymes are related to the risk of IFG in Chinese adults. We infer that preserving hepatic function may be an efficient way to prevent the development of IFG, especially in males.
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
Cumulative uncertainty in measured streamflow and water quality data for small watersheds
Harmel, R.D.; Cooper, R.J.; Slade, R.M.; Haney, R.L.; Arnold, J.G.
2006-01-01
The scientific community has not established an adequate understanding of the uncertainty inherent in measured water quality data, which is introduced by four procedural categories: streamflow measurement, sample collection, sample preservation/storage, and laboratory analysis. Although previous research has produced valuable information on relative differences in procedures within these categories, little information is available that compares the procedural categories or presents the cumulative uncertainty in resulting water quality data. As a result, quality control emphasis is often misdirected, and data uncertainty is typically either ignored or accounted for with an arbitrary margin of safety. Faced with the need for scientifically defensible estimates of data uncertainty to support water resource management, the objectives of this research were to: (1) compile selected published information on uncertainty related to measured streamflow and water quality data for small watersheds, (2) use a root mean square error propagation method to compare the uncertainty introduced by each procedural category, and (3) use the error propagation method to determine the cumulative probable uncertainty in measured streamflow, sediment, and nutrient data. Best case, typical, and worst case "data quality" scenarios were examined. Averaged across all constituents, the calculated cumulative probable uncertainty (±%) contributed under typical scenarios ranged from 6% to 19% for streamflow measurement, from 4% to 48% for sample collection, from 2% to 16% for sample preservation/storage, and from 5% to 21% for laboratory analysis. Under typical conditions, errors in storm loads ranged from 8% to 104% for dissolved nutrients, from 8% to 110% for total N and P, and from 7% to 53% for TSS. Results indicated that uncertainty can increase substantially under poor measurement conditions and limited quality control effort. This research provides introductory scientific estimates of uncertainty in measured water quality data. The results and procedures presented should also assist modelers in quantifying the "quality" of calibration and evaluation data sets, determining model accuracy goals, and evaluating model performance.
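The root mean square propagation step is simply the square root of the sum of squared component uncertainties; a sketch with placeholder percentages drawn from the typical ranges quoted above:

```python
import numpy as np

streamflow, sampling, storage, lab = 10.0, 20.0, 5.0, 15.0   # hypothetical % uncertainties per category
cumulative = np.sqrt(streamflow**2 + sampling**2 + storage**2 + lab**2)
print(f"cumulative probable uncertainty ~ +/-{cumulative:.0f}%")   # ~27%
```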
A Markov chain technique for determining the acquisition behavior of a digital tracking loop
NASA Technical Reports Server (NTRS)
Chadwick, H. D.
1972-01-01
An iterative procedure is presented for determining the acquisition behavior of discrete or digital implementations of a tracking loop. The technique is based on the theory of Markov chains and provides the cumulative probability of acquisition in the loop as a function of time in the presence of noise and a given set of initial condition probabilities. A digital second-order tracking loop to be used in the Viking command receiver for continuous tracking of the command subcarrier phase was analyzed using this technique, and the results agree closely with experimental data.
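The bookkeeping behind such a Markov-chain analysis is compact: propagate the initial-condition probabilities through the one-step transition matrix and read off the mass in the absorbing "acquired" state. The three-state matrix below is a toy example, not the Viking loop model.

```python
import numpy as np

T = np.array([[0.6, 0.3, 0.1],      # hypothetical one-step transition probabilities
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])     # absorbing "acquired" state
p = np.array([0.7, 0.3, 0.0])       # initial-condition probabilities

for step in range(1, 11):
    p = p @ T
    print(f"after {step:2d} steps: P(acquired) = {p[2]:.3f}")   # cumulative probability of acquisition
```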
ERIC Educational Resources Information Center
Unic, Ivana; Stalmeier, Peep F. M.; Peer, Petronella G. M.; van Daal, Willem A. J.
1997-01-01
Studies of variables predicting familial breast cancer (N=59) were analyzed to develop screening recommendations for women with nonhereditary familial breast cancer present. The pooled relative risk (RR) and cumulative probability were used to estimate risk. Data and conclusions are presented. Recommendations for screening and counseling are…
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society
Revealing modified gravity signals in matter and halo hierarchical clustering
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.; Koyama, Kazuya; Bose, Benjamin; Zhao, Gong-Bo
2017-07-01
We use a set of N-body simulations employing a modified gravity (MG) model with Vainshtein screening to study matter and halo hierarchical clustering. As test-case scenarios we consider two normal-branch Dvali-Gabadadze-Porrati (nDGP) gravity models with mild and strong growth rate enhancement. We study higher-order correlation functions ξ_n(R) up to n = 9 and associated reduced cumulants S_n(R) ≡ ξ_n(R)/σ(R)^{2n-2}. We find that the matter probability distribution functions are strongly affected by the fifth force on scales up to 50 h^-1 Mpc, and the deviations from general relativity (GR) are maximized at z = 0. For the reduced cumulants S_n, we find that at small scales R ≤ 6 h^-1 Mpc the MG is characterized by lower values, with the deviation growing from 7% in the reduced skewness up to even 40% in S_5. To study the halo clustering we use a simple abundance matching and divide haloes into three fixed number density samples. The halo two-point functions are weakly affected, with a relative boost of the order of a few percent appearing only at the smallest pair separations (r ≤ 5 h^-1 Mpc). In contrast, we find a strong MG signal in the S_n(R)'s, which are enhanced compared to GR. The strong model exhibits a >3σ level signal at various scales for all halo samples and in all cumulants. In this context, we find the reduced kurtosis to be an especially promising cosmological probe of MG. Even the mild nDGP model leaves a 3σ imprint at small scales R ≤ 3 h^-1 Mpc, while the stronger model deviates from a GR signature at nearly all scales with a significance of >5σ. Since the signal is persistent in all halo samples and over a range of scales, we advocate that the reduced kurtosis estimated from galaxy catalogs can potentially constitute a strong MG-model discriminator as well as a GR self-consistency test.
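For concreteness, the lowest reduced cumulants can be estimated from a cell-averaged density-contrast field as shown below; the field here is a synthetic log-normal stand-in, not simulation output.

```python
import numpy as np

rng = np.random.default_rng(4)
delta = rng.lognormal(mean=0.0, sigma=0.5, size=100_000) - np.exp(0.125)  # toy density contrast
delta -= delta.mean()                                                     # enforce zero mean

var = np.mean(delta**2)
S3 = np.mean(delta**3) / var**2                       # reduced skewness,  S_3 = <d^3>_c / sigma^4
S4 = (np.mean(delta**4) - 3 * var**2) / var**3        # reduced kurtosis,  S_4 = <d^4>_c / sigma^6
print(f"sigma^2 = {var:.3f}, S3 = {S3:.2f}, S4 = {S4:.2f}")
```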
ERIC Educational Resources Information Center
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
Chiou, Wen-Yen; Chang, Chun-Ming; Tseng, Kuo-Chih; Hung, Shih-Kai; Lin, Hon-Yi; Chen, Yi-Chun; Su, Yu-Chieh; Tseng, Chih-Wei; Tsai, Shiang-Jiun; Lee, Moon-Sing; Li, Chung-Yi
2015-02-01
The aim of this study is to evaluate the liver metastasis risk among colorectal cancer patients with liver cirrhosis. This was a nationwide population-based cohort study of 2973 newly diagnosed colorectal cancer patients with liver cirrhosis and 11 892 age-sex matched controls enrolled in Taiwan between 2000 and 2010. The cumulative risk by Kaplan-Meier method, hazard ratio by the multivariate Cox proportional model and the incidence density were evaluated. The median time interval from the colorectal cancer diagnosis to the liver metastasis event was 7.42 months for liver cirrhosis group and 7.67 months for non-liver cirrhosis group. The incidence density of liver metastasis was higher in the liver cirrhosis group (61.92/1000 person-years) than in the non-liver cirrhosis group (47.48/1000 person-years), with a significantly adjusted hazard ratio of 1.15 (95% CI = 1.04-1.28, P = 0.007). The 10-year cumulative risk of liver metastasis for the liver cirrhosis and the non-liver cirrhosis group was 27.1 and 23.6%, respectively (P = 0.006). For early cancer stage with locoregional disease patients receiving surgery alone without adjuvant anti-cancer treatments, patients with liver cirrhosis (10-year cumulative risk 23.9 vs. 15.7%, P < 0.001) or cirrhotic symptoms (10-year cumulative risk 25.6 vs. 16.6%, P = 0.009) both still had higher liver metastasis risk compared with their counterparts. For etiologies of liver cirrhosis, the 10-year cumulative risk for hepatitis B virus and hepatitis C virus, hepatitis B virus, hepatitis C virus, other causes and non-liver cirrhosis were 29.5, 28.9, 27.5, 26.7 and 23.4%, respectively, (P = 0.03). Our study found that liver metastasis risk was underestimated and even higher in colorectal cancer patients with liver cirrhosis. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
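A minimal sketch of the general idea (not the patented procedure): fit a probability density to residuals observed during normal operation, then use it in a sequential probability ratio test against a hypothesized faulted distribution. The Gaussian fit, the faulted-mean offset, and the error rates below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def fit_normal_pdf(residuals):
    """Fit a parametric density to residuals observed during normal asset operation."""
    mu, sigma = np.mean(residuals), np.std(residuals, ddof=1)
    return stats.norm(mu, sigma)

def sprt(residual_stream, h0, h1, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test: returns 'fault', 'normal', or 'continue'."""
    upper = np.log((1 - beta) / alpha)     # cross upward -> accept H1 (fault)
    lower = np.log(beta / (1 - alpha))     # cross downward -> accept H0 (normal)
    llr = 0.0
    for r in residual_stream:
        llr += h1.logpdf(r) - h0.logpdf(r)
        if llr >= upper:
            return "fault"
        if llr <= lower:
            return "normal"
    return "continue"

rng = np.random.default_rng(1)
training = rng.normal(0.0, 0.2, size=5000)           # residuals while the asset is healthy
h0 = fit_normal_pdf(training)
h1 = stats.norm(h0.mean() + 0.5, h0.std())           # assumed fault signature: mean shift
print(sprt(rng.normal(0.5, 0.2, size=200), h0, h1))  # typically reports 'fault'
```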
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization
Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.
2014-01-01
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
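For concreteness, a sketch of two of the parametric forms discussed (the parameter values are illustrative, not estimates from the paper):

```python
import numpy as np

def prelec2(p, gamma=0.6, delta=1.0):
    """Two-parameter Prelec weighting: w(p) = exp(-delta * (-ln p)^gamma)."""
    p = np.asarray(p, dtype=float)
    return np.exp(-delta * (-np.log(p)) ** gamma)

def lin_log_odds(p, gamma=0.6, delta=0.8):
    """Linear-in-log-odds form: w(p) = delta*p^gamma / (delta*p^gamma + (1-p)^gamma)."""
    p = np.asarray(p, dtype=float)
    num = delta * p ** gamma
    return num / (num + (1.0 - p) ** gamma)

p = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print(prelec2(p))        # both forms overweight small p and underweight large p,
print(lin_log_odds(p))   # which is why they are hard to tell apart empirically
```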
The priority heuristic: making choices without trade-offs.
Brandstätter, Eduard; Gigerenzer, Gerd; Hertwig, Ralph
2006-04-01
Bernoulli's framework of expected utility serves as a model for various psychological processes, including motivation, moral sense, attitudes, and decision making. To account for evidence at variance with expected utility, the authors generalize the framework of fast and frugal heuristics from inferences to preferences. The priority heuristic predicts (a) the Allais paradox, (b) risk aversion for gains if probabilities are high, (c) risk seeking for gains if probabilities are low (e.g., lottery tickets), (d) risk aversion for losses if probabilities are low (e.g., buying insurance), (e) risk seeking for losses if probabilities are high, (f) the certainty effect, (g) the possibility effect, and (h) intransitivities. The authors test how accurately the heuristic predicts people's choices, compared with previously proposed heuristics and 3 modifications of expected utility theory: security-potential/aspiration theory, transfer-of-attention-exchange model, and cumulative prospect theory. ((c) 2006 APA, all rights reserved).
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
Jordan, D; McEwen, S A; Lammerding, A M; McNab, W B; Wilson, J B
1999-06-29
A Monte Carlo simulation model was constructed for assessing the quantity of microbial hazards deposited on cattle carcasses under different pre-slaughter management regimens. The model permits comparison of industry-wide and abattoir-based mitigation strategies and is suitable for studying pathogens such as Escherichia coli O157:H7 and Salmonella spp. Simulations are based on a hierarchical model structure that mimics important aspects of the cattle population prior to slaughter. Stochastic inputs were included so that uncertainty about important input assumptions (such as prevalence of a human pathogen in the live cattle population) would be reflected in model output. Control options were built into the model to assess the benefit of having prior knowledge of animal or herd-of-origin pathogen status (obtained from the use of a diagnostic test). Similarly, a facility was included for assessing the benefit of re-ordering the slaughter sequence based on the extent of external faecal contamination. Model outputs were designed to evaluate the performance of an abattoir in a 1-day period and included outcomes such as the proportion of carcasses contaminated with a pathogen, the daily mean and selected percentiles of pathogen counts per carcass, and the position of the first infected animal in the slaughter run. A measure of the time rate of introduction of pathogen into the abattoir was provided by assessing the median, 5th percentile, and 95th percentile cumulative pathogen counts at 10 equidistant points within the slaughter run. Outputs can be graphically displayed as frequency distributions, probability densities, cumulative distributions or x-y plots. The model shows promise as an inexpensive method for evaluating pathogen control strategies such as those forming part of a Hazard Analysis and Critical Control Point (HACCP) system.
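A heavily simplified sketch of such a hierarchical simulation (all distributions and parameter values below are invented placeholders, not the model's actual inputs):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_slaughter_day(n_herds=20, animals_per_herd=40):
    """Hierarchical Monte Carlo: herd prevalence -> animal infection -> carcass counts."""
    counts = []
    for _ in range(n_herds):
        prevalence = rng.beta(2, 18)                       # uncertain herd-level prevalence
        infected = rng.random(animals_per_herd) < prevalence
        # pathogen load deposited on each carcass (CFU), zero if the animal is not infected
        load = np.where(infected,
                        rng.lognormal(mean=4.0, sigma=1.5, size=animals_per_herd), 0.0)
        counts.extend(load)
    counts = np.array(counts)
    return {
        "prop_contaminated": float(np.mean(counts > 0)),
        "mean_count": float(counts.mean()),
        "p95_count": float(np.quantile(counts, 0.95)),
        "first_positive_position": int(np.argmax(counts > 0)) + 1 if (counts > 0).any() else None,
    }

# repeat the one-day simulation to propagate input uncertainty into the daily outputs
runs = [simulate_slaughter_day() for _ in range(500)]
print(np.median([r["prop_contaminated"] for r in runs]))
```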
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.
2016-09-01
Hydrologists and engineers may choose from a range of semidistributed rainfall-runoff models such as VIC, PDM, and TOPMODEL, all of which predict runoff from a distribution of watershed properties. However, these models are not easily compared to event-based data and are missing ready-to-use analytical expressions that are analogous to the SCS-CN method. The SCS-CN method is an event-based model that describes the runoff response with a rainfall-runoff curve that is a function of the cumulative storm rainfall and antecedent wetness condition. Here we develop an event-based probabilistic storage framework and distill semidistributed models into analytical, event-based expressions for describing the rainfall-runoff response. The event-based versions called VICx, PDMx, and TOPMODELx also are extended with a spatial description of the runoff concept of "prethreshold" and "threshold-excess" runoff, which occur, respectively, before and after infiltration exceeds a storage capacity threshold. For total storm rainfall and antecedent wetness conditions, the resulting ready-to-use analytical expressions define the source areas (fraction of the watershed) that produce runoff by each mechanism. They also define the probability density function (PDF) representing the spatial variability of runoff depths that are cumulative values for the storm duration, and the average unit area runoff, which describes the so-called runoff curve. These new event-based semidistributed models and the traditional SCS-CN method are unified by the same general expression for the runoff curve. Since the general runoff curve may incorporate different model distributions, it may ease the way for relating such distributions to land use, climate, topography, ecology, geology, and other characteristics.
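As a point of reference for the event-based framing, the classical SCS-CN runoff curve that the paper generalizes can be written in a few lines (standard textbook form; the initial-abstraction ratio of 0.2 is the conventional default, not a value from the paper):

```python
def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
    """SCS curve-number runoff depth Q (mm) for a storm of total rainfall P (mm)."""
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = ia_ratio * S                 # initial abstraction before runoff begins
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(scs_cn_runoff(P_mm=80.0, CN=75))   # runoff depth for an 80 mm storm on a CN = 75 watershed
```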
Variable order fractional Fokker-Planck equations derived from Continuous Time Random Walks
NASA Astrophysics Data System (ADS)
Straka, Peter
2018-08-01
Continuous Time Random Walk models (CTRW) of anomalous diffusion are studied, where the anomalous exponent β(x) ∈ (0, 1) varies in space. This type of situation occurs e.g. in biophysics, where the density of the intracellular matrix varies throughout a cell. Scaling limits of CTRWs are known to have probability distributions which solve fractional Fokker-Planck type equations (FFPE). This correspondence between stochastic processes and FFPE solutions has many useful extensions e.g. to nonlinear particle interactions and reactions, but has not yet been sufficiently developed for FFPEs of the "variable order" type with non-constant β(x). In this article, variable order FFPEs (VOFFPE) are derived from scaling limits of CTRWs. The key mathematical tool is the 1-1 correspondence of a CTRW scaling limit to a bivariate Langevin process, which tracks the cumulative sum of jumps in one component and the cumulative sum of waiting times in the other. The spatially varying anomalous exponent is modelled by spatially varying β(x)-stable Lévy noise in the waiting time component. The VOFFPE displays a spatially heterogeneous temporal scaling behaviour, with generalized diffusivity and drift coefficients whose units are length^2/time^β(x) and length/time^β(x), respectively. A global change of the time scale results in a spatially varying change in diffusivity and drift. A consequence of the mathematical derivation of a VOFFPE from CTRW limits in this article is that a solution of a VOFFPE can be approximated via Monte Carlo simulations. Based on such simulations, we are able to confirm that the VOFFPE is consistent under a change of the global time scale.
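A minimal Monte Carlo sketch of a CTRW with a spatially varying anomalous exponent. Heavy-tailed waiting times are drawn from a Pareto law with local tail index β(x), which lies in the domain of attraction of the β(x)-stable waiting-time noise used in the paper; the specific β(x) profile and the Gaussian jump statistics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def beta_of_x(x):
    """Illustrative spatially varying anomalous exponent, kept inside (0, 1)."""
    return 0.5 + 0.3 * np.tanh(x)

def ctrw_path(t_max=100.0, dx=0.1):
    """One CTRW trajectory: Pareto waiting times with local index beta(x), Gaussian jumps."""
    t, x = 0.0, 0.0
    while t < t_max:
        beta = beta_of_x(x)
        wait = (1.0 - rng.random()) ** (-1.0 / beta)   # Pareto: P(W > w) = w^-beta, w >= 1
        t += wait
        x += dx * rng.standard_normal()
    return x

# the ensemble of final positions approximates the VOFFPE solution at time t_max
positions = np.array([ctrw_path() for _ in range(2000)])
print(positions.mean(), positions.std())
```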
Milani, Mirco; Toscano, Attilio
2013-01-01
This article reports the results of evapotranspiration (ET) experiments carried out in Southern Italy (Sicily) in a pilot-scale constructed wetland (CW) made of a combination of vegetated (Phragmites australis) and unvegetated sub-surface flow beds. Domestic wastewater from a conventional wastewater treatment plant was used to fill the beds. Microclimate data were gathered from an automatic weather station close to the experimental plant. From June to November 2009 and from April to November 2010, ET values were measured as the amount of water needed to restore the initial volume in the beds after a certain period. Cumulative reference evapotranspiration (ET(0)) was similar to the cumulative ET measured in the beds without vegetation (ET(con)), while the Phragmites ET (ET(phr)) was significantly higher, underlining the effect of the vegetation. The plant coefficient of P. australis (K(p)) was very high (up to 8.5 in August 2009) compared to the typical K(c) for agricultural crops, suggesting that the wetland environment was subjected to strong "clothesline" and "oasis" effects. According to the FAO 56 approach, K(p) shows different patterns and values in relation to growth stages, correlating significantly with stem density, plant height and total leaves. The mean Water Use Efficiency (WUE) value of P. australis was quite low, about 2.27 g L(-1), probably due to the unlimited water availability and the lack of the plant's physiological adaptations to water conservation. The results provide useful and valid information for estimating ET rates in small-scale constructed wetlands since ET is a relevant issue in arid and semiarid regions. In these areas, CW feasibility for wastewater treatment and reuse should also be carefully evaluated for macrophytes in relation to their WUE values.
Could offset cluster reveal strong earthquake pattern?——case study from Haiyuan Fault
NASA Astrophysics Data System (ADS)
Ren, Z.; Zhang, Z.; Chen, T.; Yin, J.; Zhang, P. Z.; Zheng, W.; Zhang, H.; Li, C.
2016-12-01
Since the 1990s, researchers have tried to use offset clusters to study strong earthquake patterns. However, due to the limited quantity of offset data, the approach was not widely used until recent years, with the rapid development of high-resolution topographic data such as remote sensing images and LiDAR. In this study, we use airborne LiDAR data to re-evaluate the cumulative offsets and the co-seismic offset of the 1920 Haiyuan Ms 8.5 earthquake along the western and middle segments of the co-seismic surface rupture zone. Our LiDAR data indicate that the offset observations along both the western and middle segments fall into five groups. The group with the minimum slip amount is associated with the 1920 Haiyuan Ms 8.5 earthquake, which ruptured both the western and middle segments. Our research highlights two new interpretations: firstly, the previously reported maximum displacement of the 1920 earthquake was likely produced by at least two earthquakes; secondly, our results reveal that the Cumulative Offset Probability Density (COPD) peaks of the same offset amount on the western and middle segments do not correspond to each other one by one. The ages of the paleoearthquakes indicate that the offsets were not accumulated during the same period. We suggest that any discussion of the rupture pattern of a certain fault based on offset data should also consider fault segmentation and paleoseismological data; therefore, when using COPD peaks to study the number of palaeo-events and their rupture patterns, the peaks should be computed and analyzed on fault sub-sections rather than entire fault zones. Our results reveal that the rupture patterns on the western and middle segments of the Haiyuan Fault differ from each other, which provides new data for regional seismic potential analysis.
The Surface Density Distribution in the Solar Nebula
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
2004-01-01
The commonly used minimum mass power law representation of the pre-solar nebula is reanalyzed using a new cumulative-mass-model. This model predicts a smoother surface density approximation compared with methods based on direct computation of surface density. The density is quantified using two independent analytical formulations. First, a best-fit transcendental function is applied directly to the basic planetary data. Next a solution to the time-dependent disk evolution equation is parametrically adapted to the solar nebula data. The latter model is shown to be a good approximation to the finite-size early Solar Nebula, and by extension to other extra solar protoplanetary disks.
Quantification of Daily Physical Activity
NASA Technical Reports Server (NTRS)
Whalen, Robert; Breit, Greg; Quintana, Jason
1994-01-01
The influence of physical activity on the maintenance and adaptation of musculoskeletal tissue is difficult to assess. Cumulative musculoskeletal loading is hard to quantify and the attributes of the daily tissue loading history affecting bone metabolism have not been completely identified. By monitoring the vertical component of the daily ground reaction force (GRFz), we have an indirect measure of cumulative daily lower limb musculoskeletal loading to correlate with bone density and structure. The objective of this research is to develop instrumentation and methods of analysis to quantify activity level in terms of the daily history of ground reaction forces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero mean non-Gaussian random time histories with a specified ASD and pdf. First, a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform to a realization of the desired waveform. Since the transformation preserves the zero-crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian distributed waveform with a known ASD. The method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
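A compact sketch of the described two-step procedure (spectral shaping of white Gaussian noise, then a monotonic CDF-to-CDF transformation); the target amplitude distribution and spectral shape below are illustrative choices, not those of the report.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, fs = 2**16, 1000.0

# step 1: Gaussian time history with a specified (here band-limited) spectral shape
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
asd = np.where((freqs > 20) & (freqs < 200), 1.0, 1e-6)      # target spectral magnitude
spectrum = np.sqrt(asd) * (rng.standard_normal(freqs.size)
                           + 1j * rng.standard_normal(freqs.size))
gaussian = np.fft.irfft(spectrum, n)
gaussian /= gaussian.std()

# step 2: monotonic transform through the CDFs: y = F_target^-1( Phi(x) )
u = stats.norm.cdf(gaussian)                   # known Gaussian CDF
target = stats.gumbel_r(loc=0.0, scale=1.0)    # desired non-Gaussian amplitude distribution
y = target.ppf(u)                              # realization with ~target pdf, ASD nearly preserved

print(stats.skew(y), stats.kurtosis(y))        # clearly non-Gaussian amplitude statistics
```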
Hydrological alteration of the Upper Nakdong river under AR5 climate change scenarios
NASA Astrophysics Data System (ADS)
Kim, S.; Park, Y.; Cha, W. Y.; Okjeong, L.; Choi, J.; Lee, J.
2016-12-01
One of the tasks facing water engineers is how to account for climate change impacts in water resources management. Especially in South Korea, where almost all drinking water is taken from major rivers, public attention is focused on their eco-hydrologic status. In this study, the effect of climate change on the eco-hydrologic regime in the Upper Nakdong river, one of the major rivers in South Korea, is investigated using SWAT. The simulation results are measured using the indicators of hydrological alteration (IHA) established by the U.S. Nature Conservancy. Future climate information is obtained by scaling historical series, provided by the Korean Meteorological Administration RCM (KMA RCM) and four RCP scenarios. The KMA RCM has 12.5-km spatial resolution over the Korean Peninsula and is produced by the UK Hadley Centre regional climate model HadGEM3-RA. The RCM bias is corrected by the Kernel density distribution mapping (KDDM) method. The KDDM estimates the cumulative distribution function (CDF) of each dataset using kernel density estimation, and is implemented by quantile-mapping the CDF of a present climate variable obtained from the RCM onto that of the corresponding observed climate variable. Although the simulation results from different RCP scenarios show diverse hydrologic responses in our watershed, the mainstream of future simulation results indicates that there will be more river flow in southeast Korea. The predicted impacts of hydrological alteration caused by climate change on the aquatic ecosystem in the Upper Nakdong river will be presented. Acknowledgement: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
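A simplified sketch of the quantile-mapping step (using empirical CDFs in place of the kernel-density CDFs of the KDDM method proper; the data arrays are placeholders):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Map future model values through the historical model CDF onto the observed CDF."""
    q = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_hist, q)       # historical model quantiles
    obs_q = np.quantile(obs_hist, q)           # corresponding observed quantiles
    # CDF value of each future model output under the model climate, then invert the observed CDF
    cdf_vals = np.interp(model_future, model_q, q)
    return np.interp(cdf_vals, q, obs_q)

rng = np.random.default_rng(5)
model_hist = rng.gamma(2.0, 5.0, 5000)      # e.g. RCM daily precipitation (placeholder)
obs_hist = rng.gamma(2.0, 6.5, 5000)        # station observations (placeholder)
print(quantile_map(model_hist, obs_hist, np.array([0.0, 5.0, 20.0, 60.0])))
```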
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Corneal inflammatory events with daily silicone hydrogel lens wear.
Szczotka-Flynn, Loretta; Jiang, Ying; Raghupathy, Sangeetha; Bielefeld, Roger A; Garvey, Matthew T; Jacobs, Michael R; Kern, Jami; Debanne, Sara M
2014-01-01
This study aimed to determine the probability and risk factors for developing a corneal inflammatory event (CIE) during daily wear of lotrafilcon A silicone hydrogel contact lenses. Eligible participants (n = 218) were fit with lotrafilcon A lenses for daily wear and followed up for 12 months. Participants were randomized to either a polyhexamethylene biguanide-preserved multipurpose solution or a one-step peroxide disinfection system. The main exposures of interest were bacterial contamination of lenses, cases, lid margins, and ocular surface. Kaplan-Meier (KM) plots were used to estimate the cumulative unadjusted probability of remaining free from a CIE, and multivariate Cox proportional hazards regression was used to model the hazard of experiencing a CIE. The KM unadjusted cumulative probability of remaining free from a CIE for both lens care groups combined was 92.3% (95% confidence interval [CI], 88.1 to 96.5%). There was one participant with microbial keratitis, five participants with asymptomatic infiltrates, and seven participants with contact lens peripheral ulcers, providing KM survival estimates of 92.8% (95% CI, 88.6 to 96.9%) and 98.1% (95% CI, 95.8 to 100.0%) for remaining free from noninfectious and symptomatic CIEs, respectively. The presence of substantial (>100 colony-forming units) coagulase-negative staphylococci bioburden on lid margins was associated with about a five-fold increased risk for the development of a CIE (p = 0.04). The probability of experiencing a CIE during daily wear of lotrafilcon A contact lenses is low, and symptomatic CIEs are rare. Patient factors, such as high levels of bacterial bioburden on lid margins, contribute to the development of noninfectious CIEs during daily wear of silicone hydrogel lenses.
Rothschild, Anthony J.; Dunlop, Boadie W.; Dunner, David L.; Friedman, Edward S.; Gelenberg, Alan; Holland, Peter; Kocsis, James H.; Kornstein, Susan G.; Shelton, Richard; Trivedi, Madhukar H.; Zajecka, John M.; Goldstein, Corey; Thase, Michael E.; Pedersen, Ron; Keller, Martin B.
2013-01-01
Background: Antidepressant tachyphylaxis describes the return of apathetic depressive symptoms, such as fatigue and decreased motivation, despite continued use of a previously effective treatment. Methods: Data were collected from a multiphase, double-blind, placebo-controlled study that assessed the efficacy of venlafaxine extended release (ER) during 2 sequential 1-year maintenance phases (A and B) in patients with recurrent major depressive disorder (MDD). The primary outcome was the cumulative probability of tachyphylaxis in patients receiving venlafaxine ER, fluoxetine, or placebo. Tachyphylaxis was defined as a Rothschild Scale for Antidepressant Tachyphylaxis (RSAT) score ≥ 7 in patients with a prior satisfactory therapeutic response. A Kaplan-Meier estimate of the cumulative probability of not experiencing tachyphylaxis was calculated, and a 2-sided Fisher exact test was used to assess the relationship between tachyphylaxis and recurrence. Results: The maintenance phase A population comprised 337 patients (venlafaxine ER [n = 129], fluoxetine [n = 79], placebo [n = 129]), whereas 128 patients (venlafaxine ER [n = 43], fluoxetine [n = 45], placebo [n = 40]) were treated during maintenance phase B. No difference in the probability of experiencing tachyphylaxis was observed between the active treatment groups during either maintenance phase; however, a significant difference between venlafaxine ER and placebo was observed at the completion of maintenance phase A. A significant relationship between tachyphylaxis and recurrence was observed. Limitations: Despite demonstrating psychometric validity and reliability, the current definition of tachyphylaxis has not been widely studied. Conclusions: Although no significant differences were observed in the probability of tachyphylaxis among patients receiving active treatment, the relationship between tachyphylaxis and recurrence suggests that tachyphylaxis may be a prodrome of recurrence. PMID:19752838
NASA Astrophysics Data System (ADS)
Duchesne, J. C.; Charlier, B.
2005-08-01
Whole-rock major element compositions are investigated in 99 cumulates from the Proterozoic Bjerkreim-Sokndal layered intrusion (Rogaland Anorthosite Province, SW Norway), which results from the crystallization of a jotunite (Fe-Ti-P-rich hypersthene monzodiorite) parental magma. The scattering of cumulate compositions covers three types of cumulates: (1) ilmenite-leuconorite with plagioclase, ilmenite and Ca-poor pyroxene as cumulus minerals, (2) magnetite-leuconorite with the same minerals plus magnetite, and (3) gabbronorite made up of plagioclase, Ca-poor and Ca-rich pyroxenes, ilmenite, Ti-magnetite and apatite. Each type of cumulate displays a linear trend in variation diagrams. One pole of the linear trends is represented by plagioclase, and the other by a mixture of the mafic minerals in constant proportion. The mafic minerals were not sorted during cumulate formation though they display large density differences. This suggests that crystal settling did not operate during cumulate formation, and that in situ crystallization with variable nucleation rate for plagioclase was the dominant formation mechanism. The trapped liquid fraction of the cumulate plays a negligible role for the cumulate major element composition. Each linear trend is a locus for the cotectic composition of the cumulates. This property permits reconstruction by graphical mass balance calculation of the first two stages of the liquid line of descent, starting from a primitive jotunite, the Tjörn parental magma. Another type of cumulate, called jotunite cumulate and defined by the mineral association from the Transition Zone of the intrusion, has to be subtracted to simulate the most evolved part of the liquid line of descent. The proposed model demonstrates that average cumulate compositions represent cotectic compositions when the number of samples is large (> 40). The model, however, does not account for the K2O evolution, suggesting that the system was open to contamination by roof melts. The liquid line of descent corresponding to the Bjerkreim-Sokndal cumulates differs slightly from that obtained for jotunitic dykes in that the most Ti-, P- and Fe-rich melts (evolved jotunite) are lacking. The constant composition of the mafic poles during intervals where cryptic layering is conspicuous is explained by a compositional balance between the Fe-Ti oxide minerals, which decrease in Fe content in favour of Ti, and the pyroxenes which increase in Fe.
NASA Astrophysics Data System (ADS)
Indriyani, N.; Tridjaja, B.; Medise, B. E.; Kurniati, N.
2017-08-01
Systemic lupus erythematosus (SLE) is an autoimmune disease affecting children; its morbidity and mortality rates are significant. One risk factor for morbidity is chronic corticosteroid use. The aim of this study is to determine the occurrence rate of low bone mineral density; discuss the characteristics, including cumulative and daily doses of corticosteroid, body mass index, Systemic Lupus Erythematosus Disease Activity Index (SLEDAI), calcium, and vitamin D intake; and assess bone metabolism laboratory parameters, including serum calcium, vitamin D, alkaline phosphatase (ALP), phosphorus, and cortisol among children with SLE receiving corticosteroids. This was a descriptive, cross-sectional study involving 16 children with SLE attending the child and adolescent outpatient clinic at Cipto Mangunkusumo Hospital in November-December 2016. Low bone mineral density occurred among 7/16 patients. The mean total bone mineral density was 0.885 ± 0.09 g/cm2. Children with SLE receiving corticosteroid had low calcium (8.69 ± 0.50 mg/dl), vitamin D (19.3 ± 5.4 mg/dl), ALP (79.50 [43.00-164.00] U/l), and morning cortisol level (1.20 [0.0-10.21] ug/dl), as well as calcium (587.58 ± 213.29 mg/d) and vitamin D (2.9 [0-31.8] mcg/d) intake. The occurrence of low bone mineral density was observed among children with SLE receiving corticosteroid treatment. Low bone mineral density tends to occur among patients with higher cumulative doses and longer duration of corticosteroid treatments.
The estimated lifetime probability of acquiring human papillomavirus in the United States.
Chesson, Harrell W; Dunne, Eileen F; Hariri, Susan; Markowitz, Lauri E
2014-11-01
Estimates of the lifetime probability of acquiring human papillomavirus (HPV) can help to quantify HPV incidence, illustrate how common HPV infection is, and highlight the importance of HPV vaccination. We developed a simple model, based primarily on the distribution of lifetime numbers of sex partners across the population and the per-partnership probability of acquiring HPV, to estimate the lifetime probability of acquiring HPV in the United States in the time frame before HPV vaccine availability. We estimated the average lifetime probability of acquiring HPV among those with at least 1 opposite sex partner to be 84.6% (range, 53.6%-95.0%) for women and 91.3% (range, 69.5%-97.7%) for men. Under base case assumptions, more than 80% of women and men acquire HPV by age 45 years. Our results are consistent with estimates in the existing literature suggesting a high lifetime probability of HPV acquisition and are supported by cohort studies showing high cumulative HPV incidence over a relatively short period, such as 3 to 5 years.
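The core calculation can be reproduced in a few lines: average the probability of at least one acquisition, 1 - (1 - p)^n, over the distribution of lifetime partner numbers n. The partner distribution and per-partnership probability below are placeholders, not the paper's inputs.

```python
import numpy as np

def lifetime_probability(partner_counts, partner_probs, p_per_partner):
    """P(acquire HPV) = sum over n of P(n partners) * [1 - (1 - p)^n]."""
    partner_counts = np.asarray(partner_counts, dtype=float)
    partner_probs = np.asarray(partner_probs, dtype=float)
    acquire_given_n = 1.0 - (1.0 - p_per_partner) ** partner_counts
    return float(np.dot(partner_probs, acquire_given_n))

# illustrative distribution of lifetime partner numbers among those with at least 1 partner
counts = [1, 2, 4, 7, 15]
probs = [0.25, 0.25, 0.20, 0.20, 0.10]
print(lifetime_probability(counts, probs, p_per_partner=0.40))
```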
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)
Greenhouse-gas emission targets for limiting global warming to 2 degrees C.
Meinshausen, Malte; Meinshausen, Nicolai; Hare, William; Raper, Sarah C B; Frieler, Katja; Knutti, Reto; Frame, David J; Allen, Myles R
2009-04-30
More than 100 countries have adopted a global warming limit of 2 degrees C or below (relative to pre-industrial levels) as a guiding principle for mitigation efforts to reduce climate change risks, impacts and damages. However, the greenhouse gas (GHG) emissions corresponding to a specified maximum warming are poorly known owing to uncertainties in the carbon cycle and the climate response. Here we provide a comprehensive probabilistic analysis aimed at quantifying GHG emission budgets for the 2000-50 period that would limit warming throughout the twenty-first century to below 2 degrees C, based on a combination of published distributions of climate system properties and observational constraints. We show that, for the chosen class of emission scenarios, both cumulative emissions up to 2050 and emission levels in 2050 are robust indicators of the probability that twenty-first century warming will not exceed 2 degrees C relative to pre-industrial temperatures. Limiting cumulative CO(2) emissions over 2000-50 to 1,000 Gt CO(2) yields a 25% probability of warming exceeding 2 degrees C, and a limit of 1,440 Gt CO(2) yields a 50% probability, given a representative estimate of the distribution of climate system properties. As known 2000-06 CO(2) emissions were approximately 234 Gt CO(2), less than half the proven economically recoverable oil, gas and coal reserves can still be emitted up to 2050 to achieve such a goal. Recent G8 Communiqués envisage halved global GHG emissions by 2050, for which we estimate a 12-45% probability of exceeding 2 degrees C, assuming 1990 as the emission base year and a range of published climate sensitivity distributions. Emissions levels in 2020 are a less robust indicator, but for the scenarios considered, the probability of exceeding 2 degrees C rises to 53-87% if global GHG emissions are still more than 25% above 2000 levels in 2020.
Kleinman, Daniel; Runnqvist, Elin; Ferreira, Victor S.
2015-01-01
Comprehenders predict upcoming speech and text on the basis of linguistic input. How many predictions do comprehenders make for an upcoming word? If a listener strongly expects to hear the word “sock”, is the word “shirt” partially expected as well, is it actively inhibited, or is it ignored? The present research addressed these questions by measuring the “downstream” effects of prediction on the processing of subsequently presented stimuli using the cumulative semantic interference paradigm. In three experiments, subjects named pictures (sock) that were presented either in isolation or after strongly constraining sentence frames (“After doing his laundry, Mark always seemed to be missing one…”). Naming sock slowed the subsequent naming of the picture shirt – the standard cumulative semantic interference effect. However, although picture naming was much faster after sentence frames, the interference effect was not modulated by the context (bare vs. sentence) in which either picture was presented. According to the only model of cumulative semantic interference that can account for such a pattern of data, this indicates that comprehenders pre-activated and maintained the pre-activation of best sentence completions (sock) but did not maintain the pre-activation of less likely completions (shirt). Thus, comprehenders predicted only the most probable completion for each sentence. PMID:25917550
Hamada, Tsuyoshi; Nakai, Yousuke; Isayama, Hiroyuki; Togawa, Osamu; Kogure, Hirofumi; Kawakubo, Kazumichi; Tsujino, Takeshi; Sasahira, Naoki; Hirano, Kenji; Yamamoto, Natsuyo; Ito, Yukiko; Sasaki, Takashi; Mizuno, Suguru; Toda, Nobuo; Tada, Minoru; Koike, Kazuhiko
2014-03-01
Self-expandable metallic stent (SEMS) placement is widely carried out for distal malignant biliary obstruction, and survival analysis is used to evaluate the cumulative incidences of SEMS dysfunction (e.g. the Kaplan-Meier [KM] method and the log-rank test). However, these statistical methods might be inappropriate in the presence of 'competing risks' (here, death without SEMS dysfunction), which affects the probability of experiencing the event of interest (SEMS dysfunction); that is, SEMS dysfunction can no longer be observed after death. A competing risk analysis has rarely been done in studies on SEMS. We introduced the concept of a competing risk analysis and illustrated its impact on the evaluation of SEMS outcomes using hypothetical and actual data. Our illustrative study included 476 consecutive patients who underwent SEMS placement for unresectable distal malignant biliary obstruction. A significant difference between cumulative incidences of SEMS dysfunction in male and female patients via the KM method (P = 0.044 by the log-rank test) disappeared after applying a competing risk analysis (P = 0.115 by Gray's test). In contrast, although cumulative incidences of SEMS dysfunction via the KM method were similar with and without chemotherapy (P = 0.647 by the log-rank test), the cumulative incidence of SEMS dysfunction in the non-chemotherapy group was shown to be significantly lower (P = 0.031 by Gray's test) in a competing risk analysis. Death as a competing risk event needs to be appropriately considered in estimating a cumulative incidence of SEMS dysfunction, otherwise analytical results may be biased. © 2013 The Authors. Digestive Endoscopy © 2013 Japan Gastroenterological Endoscopy Society.
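The distinction the authors illustrate can be reproduced with a small nonparametric estimator: the naive 1 - KM approach treats death as censoring, whereas the cumulative incidence function (Aalen-Johansen form) partitions risk among competing events. A sketch, assuming event codes 0 = censored, 1 = SEMS dysfunction, 2 = death without dysfunction; the data are invented.

```python
import numpy as np

def cumulative_incidence(times, events, cause=1):
    """Aalen-Johansen cumulative incidence for one cause in the presence of competing risks."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    surv, cif, curve = 1.0, 0.0, []
    for t in np.unique(times[events > 0]):
        at_risk = np.sum(times >= t)
        d_cause = np.sum((times == t) & (events == cause))
        d_all = np.sum((times == t) & (events > 0))
        cif += surv * d_cause / at_risk      # increment uses all-cause survival just before t
        surv *= 1.0 - d_all / at_risk        # update the overall Kaplan-Meier survival
        curve.append((float(t), round(float(cif), 3)))
    return curve

t = [3, 5, 5, 8, 10, 12, 15, 20]             # months of follow-up (illustrative)
e = [1, 2, 0, 1, 2, 1, 0, 2]                 # 1 = stent dysfunction, 2 = death, 0 = censored
print(cumulative_incidence(t, e, cause=1))
```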
User Guide to the Aircraft Cumulative Probability Chart Template
2009-07-01
DSTO-TR-2332, Defence Science and Technology Organisation / AeroStructures Technologies. Abstract: To ensure aircraft structural integrity is maintained to an acceptable level ... cracking (or failure) which may be used to assess the life of aircraft structures. Approved for public release. DSTO, 506 Lorimer St, Fishermans Bend, Victoria 3207, Australia.
12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327
Code of Federal Regulations, 2011 CFR
2011-01-01
... one year; • Minimum and maximum downgrade probability cutoff values, based on data from June 30, 2008... rate factor (Ai,T) is calculated by subtracting 0.4 from the four-year cumulative gross asset growth... weighted average of five component ratings excluding the “S” component. Delinquency and non-accrual data on...
NASA Astrophysics Data System (ADS)
Yan, Ying; Zhang, Shen; Tang, Jinjun; Wang, Xiaofei
2017-07-01
Discovering dynamic characteristics in traffic flow is a significant step toward designing effective traffic management and control strategies for relieving congestion in urban cities. A new method based on complex network theory is proposed to study multivariate traffic flow time series. The data were collected from loop detectors on a freeway over one year. In order to construct a complex network from the original traffic flow, a weighted Frobenius norm is adopted to estimate the similarity between multivariate time series, and Principal Component Analysis is implemented to determine the weights. We discuss how to select the optimal critical threshold for networks at different hours in terms of the cumulative probability distribution of degree. Furthermore, two statistical properties of the networks, normalized network structure entropy and cumulative probability of degree, are utilized to explore hourly variation in traffic flow. The results demonstrate that these two statistical quantities express patterns similar to those of the traffic flow parameters, with morning and evening peak hours. Accordingly, we detect three traffic states, trough, peak and transitional hours, according to the correlation between the two aforementioned properties. The classification results can indeed represent hourly fluctuations in traffic flow, as confirmed by analyzing annual average hourly values of traffic volume, occupancy and speed in the corresponding hours.
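A sketch of the network-construction step described here (window similarity -> threshold -> adjacency -> degree-based statistics). The use of a plain Frobenius-norm distance (unweighted) and the particular entropy normalization are assumptions for illustration; the paper weights the norm via PCA.

```python
import numpy as np

def build_network(windows, threshold):
    """windows: array (n_windows, n_timesteps, n_variables) of multivariate traffic data."""
    n = len(windows)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(windows[i] - windows[j])   # Frobenius norm distance
            if dist < threshold:
                adj[i, j] = adj[j, i] = 1                    # similar windows become linked nodes
    return adj

def structure_entropy(adj):
    """Normalized degree-based network structure entropy."""
    k = adj.sum(axis=1).astype(float)
    p = k / k.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(k)))

rng = np.random.default_rng(6)
windows = rng.normal(size=(50, 60, 3))          # 50 hourly windows, 60 samples, 3 variables
adj = build_network(windows, threshold=19.0)
degrees = adj.sum(axis=1)
print(structure_entropy(adj), np.bincount(degrees))   # entropy and degree histogram
```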
Rain attenuation measurements: Variability and data quality assessment
NASA Technical Reports Server (NTRS)
Crane, Robert K.
1989-01-01
Year to year variations in the cumulative distributions of rain rate or rain attenuation are evident in any of the published measurements for a single propagation path that span a period of several years of observation. These variations must be described by models for the prediction of rain attenuation statistics. Now that a large measurement database has been assembled by the International Radio Consultative Committee, the information needed to assess variability is available. On the basis of 252 sample cumulative distribution functions for the occurrence of attenuation by rain, the expected year to year variation in attenuation at a fixed probability level in the 0.1 to 0.001 percent of a year range is estimated to be 27 percent. The expected deviation from an attenuation model prediction for a single year of observations is estimated to exceed 33 percent when any of the available global rain climate models are employed to estimate the rain rate statistics. The probability distribution for the variation in attenuation or rain rate at a fixed fraction of a year is lognormal. The lognormal behavior of the variate was used to compile the statistics for variability.
Banno, S; Matsumoto, Y; Naniwa, T; Hayami, Y; Sugiura, Y; Yoshinouchi, T; Ueda, R
2002-12-01
We evaluated bone mineral density (BMD) in Japanese female patients with systemic lupus erythematosus (SLE) and assessed the influence of the use of glucocorticoids. Lumbar BMD was measured by dual x-ray absorptiometry (DXA) in 60 premenopausal females who previously had been receiving glucocorticoid therapy. Therapeutic- and disease-related variables for SLE were analyzed and bone resorption or formation markers were measured. Osteoporosis was defined as a T-score below 2.5 SD by DXA; 12 patients (20%) showed osteoporosis, and 30 (50%) had osteopenia. Compared with the nonosteoporotic group (n = 48), the osteoporotic group (n = 12) had a significantly longer duration of glucocorticoid treatment (P = 0.01), a higher cumulative prednisolone dose (P = 0.002), and a higher SLE damage index (SLICC/ACR). There was no difference in the incidence of osteoporosis either with or without the previous use of methylprednisolone pulse or immunosuppressive drugs. There was a significant positive correlation between urinary type I collagen cross-linked N-telopeptides (NTx) and serum bone-specific alkaline phosphatase (BAP) (r = 0.404, P = 0.002), but these bone metabolic markers showed no difference between the osteoporotic and nonosteoporotic groups. A significant negative correlation was shown between BMD and the cumulative glucocorticoid dose (r = -0.351, P = 0.007). Stepwise logistic regression analysis showed that the cumulative glucocorticoid intake was independently associated with osteoporosis. Glucocorticoid-induced osteoporosis was frequently observed in Japanese SLE patients, as in Caucasian populations. The cumulative glucocorticoid dose was associated with an increased risk for osteoporosis. Bone metabolic markers such as NTx and BAP were not influenced by glucocorticoid treatment and could not predict current osteoporosis in SLE patients.
Impaired ambulation and steroid therapy impact negatively on bone health in multiple sclerosis.
Tyblova, M; Kalincik, T; Zikan, V; Havrdova, E
2015-04-01
The prevalence of osteopenia and osteoporosis is higher amongst patients with multiple sclerosis in comparison with the general population. In addition to the general determinants of bone health, two factors may contribute to reduced bone mineral density in multiple sclerosis: physical disability and corticosteroid therapy. The aim of this study was to examine the effect of physical disability and steroid exposure on bone health in weight-bearing bones and spine and on the incidence of low-trauma fractures in multiple sclerosis. In this retrospective analysis of prospectively collected data, associations between bone mineral density (at the femoral neck, total femur and the lumbar spine) and its change with disability or cumulative steroid dose were evaluated with random-effect models adjusted for demographic and clinical determinants of bone health. The incidence of low-trauma fractures during the study follow-up was evaluated with Andersen-Gill models. Overall, 474 and 438 patients were included in cross-sectional and longitudinal analyses (follow-up 2347 patient-years), respectively. The effect of severely impaired gait was more apparent in weight-bearing bones (P ≤ 10(-15)) than in spine (P = 0.007). The effect of cumulative steroid dose was relatively less pronounced but diffuse (P ≤ 10(-4)). Risk of low-trauma fractures was associated with disability (P = 0.02) but not with cumulative steroid exposure and was greater amongst patients with severely impaired gait (annual risk 3.5% vs. 3.0%). Synergistic effects with cumulative steroid dose were found only in patients ambulatory without support (P = 0.02). Bone health and the incidence of low-trauma fractures in multiple sclerosis are more related to impaired gait than to extended corticosteroid therapy. © 2014 The Author(s) European Journal of Neurology © 2014 EAN.
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
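A short sketch of how the Gaussian-mixture output and the two scores mentioned (CRPS and PIT) can be evaluated for a single prediction; the mixture parameters are placeholders, and the CRPS is computed by simple numerical integration rather than a closed form.

```python
import numpy as np
from scipy import stats

def mixture_cdf(z, weights, means, sigmas):
    """CDF of a one-dimensional Gaussian mixture evaluated at points z."""
    z = np.atleast_1d(z)[:, None]
    return np.sum(weights * stats.norm.cdf(z, loc=means, scale=sigmas), axis=1)

def crps(obs, weights, means, sigmas, grid):
    """Continuous ranked probability score: integral of (F(z) - 1{z >= obs})^2 dz."""
    F = mixture_cdf(grid, weights, means, sigmas)
    step = (grid >= obs).astype(float)
    return float(np.sum((F - step) ** 2) * (grid[1] - grid[0]))

# a predicted redshift PDF as a 3-component Gaussian mixture (illustrative values)
w, mu, sd = np.array([0.6, 0.3, 0.1]), np.array([0.8, 1.1, 1.6]), np.array([0.05, 0.1, 0.2])
z_true = 0.85
grid = np.linspace(0.0, 3.0, 3001)
print("CRPS:", crps(z_true, w, mu, sd, grid))
print("PIT :", mixture_cdf(z_true, w, mu, sd)[0])   # ~Uniform(0,1) over many objects if calibrated
```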
Kass-Iliyya, Lewis; Javed, Saad; Gosal, David; Kobylecki, Christopher; Marshall, Andrew; Petropoulos, Ioannis N; Ponirakis, Georgios; Tavakoli, Mitra; Ferdousi, Maryam; Chaudhuri, Kallol Ray; Jeziorska, Maria; Malik, Rayaz A; Silverdale, Monty A
2015-12-01
Autonomic and somatic denervation is well established in Parkinson's disease (PD). (1) To determine whether corneal confocal microscopy (CCM) can non-invasively demonstrate small nerve fiber damage in PD. (2) To identify relationships between corneal nerve parameters, intraepidermal nerve fiber density (IENFD) and clinical features of PD. Twenty-six PD patients and 26 controls underwent CCM of both eyes. 24/26 PD patients and 10/26 controls underwent skin biopsies from the dorsa of both feet. PD patients underwent assessment of parasympathetic function [deep breathing heart rate variability (DB-HRV)], autonomic symptoms [scale for outcomes in Parkinson's disease - autonomic symptoms (SCOPA-AUT)], motor symptoms [UPDRS-III "ON"] and cumulative Levodopa dose. PD patients had significantly reduced corneal nerve fiber density (CNFD) with increased corneal nerve branch density (CNBD) and corneal nerve fiber length (CNFL) compared to controls. CNBD and CNFL but not CNFD correlated inversely with UPDRS-III and SCOPA-AUT. All CCM parameters correlated strongly with DB-HRV. There was no correlation between CCM parameters and disease duration, cumulative Levodopa dose or pain. IENFD was significantly reduced in PD compared to controls and correlated with CNFD and UPDRS-III. However, unlike CCM measures, IENFD correlated with disease duration and cumulative Levodopa dose but not with autonomic dysfunction. CCM identifies corneal nerve fiber pathology, which correlates with autonomic symptoms, parasympathetic deficits and motor scores in patients with PD. IENFD is also reduced and correlates with CNFD and motor symptoms but not parasympathetic deficits, indicating it detects different aspects of peripheral nerve pathology in PD. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we firstly propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties for clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of the datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
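The key primitive the improved algorithm relies on, the probability that the distance between two uncertain objects is at most a boundary value, can be estimated straightforwardly when each object is represented by samples of its possible locations. A Monte Carlo sketch (PDBSCAN itself uses a more accurate computation; the data here are invented):

```python
import numpy as np

def prob_distance_leq(samples_a, samples_b, eps):
    """P( ||A - B|| <= eps ) for uncertain objects A, B given as (n, d) sample arrays."""
    diffs = samples_a[:, None, :] - samples_b[None, :, :]   # all sample pairs
    dists = np.linalg.norm(diffs, axis=2)
    return float(np.mean(dists <= eps))

rng = np.random.default_rng(7)
a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))    # uncertain object A
b = rng.normal(loc=[0.5, 0.0], scale=0.3, size=(200, 2))    # uncertain object B
print(prob_distance_leq(a, b, eps=0.6))
# such probabilities feed the 'probability neighborhood' and core-object probability tests
```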
Li, Ni; Xu, Jing-Hang; Yu, Min; Wang, Sa; Si, Chong-Wen; Yu, Yan-Yan
2015-01-01
AIM: To investigate whether long-term low-level hepatitis B virus (HBV) DNA influences dynamic changes of the FIB-4 index in chronic hepatitis B (CHB) patients receiving entecavir (ETV) therapy with partial virological responses. METHODS: We retrospectively analyzed 231 nucleos(t)ide (NA) naïve CHB patients from our previous study (NCT01926288) who received continuous ETV or ETV maleate therapy for three years. The patients were divided into partial virological response (PVR) and complete virological response (CVR) groups according to serum HBV DNA levels at week 48. Seventy-six patients underwent biopsies at baseline and at 48 wk. The performance of the FIB-4 index and area under the receiver operating characteristic (AUROC) curve for predicting fibrosis were determined for the patients undergoing biopsy. The primary objective of the study was to compare the cumulative probabilities of virological responses between the two groups during the treatment period. The secondary outcome was to observe dynamic changes of the FIB-4 index between CVR patients and PVR patients. RESULTS: For hepatitis B e antigen (HBeAg)-positive patients (n = 178), the cumulative probability of achieving undetectable levels at week 144 was 95% and 69% for CVR and PVR patients, respectively (P < 0.001). In the Cox proportional hazards model, a lower pretreatment serum HBV DNA level was an independent factor predicting maintained viral suppression. The cumulative probability of achieving undetectable levels of HBV DNA for HBeAg-negative patients (n = 53) did not differ between the two groups. The FIB-4 index efficiently identified fibrosis, with an AUROC of 0.80 (95%CI: 0.69-0.89). For HBeAg-positive patients, the FIB-4 index was higher in CVR patients than in PVR patients at baseline (1.89 ± 1.43 vs 1.18 ± 0.69, P < 0.001). There was no significant difference in the reduction of the FIB-4 index between the CVR and PVR groups from weeks 48 to 144 (-0.11 ± 0.47 vs -0.13 ± 0.49, P = 0.71). At week 144, the FIB-4 index levels were similar between the two groups (1.24 ± 0.87 vs 1.02 ± 0.73, P = 0.06). After multivariate logistic regression analysis, a lower baseline serum HBV DNA level was associated with improvement of liver fibrosis. In HBeAg-negative patients, the FIB-4 index did not differ between the two groups. CONCLUSION: The cumulative probabilities of HBV DNA responses showed significant differences between CVR and PVR HBeAg-positive CHB patients undergoing entecavir treatment for 144 wk. However, long-term low-level HBV DNA did not deteriorate the FIB-4 index, which was used to evaluate liver fibrosis, at the end of three years. PMID:26604649
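For reference, the FIB-4 index used here as the fibrosis measure is computed from routine laboratory values (standard published formula; the example values are arbitrary):

```python
import math

def fib4(age_years, ast_u_per_l, alt_u_per_l, platelets_10e9_per_l):
    """FIB-4 = (age * AST) / (platelet count [10^9/L] * sqrt(ALT))."""
    return (age_years * ast_u_per_l) / (platelets_10e9_per_l * math.sqrt(alt_u_per_l))

print(round(fib4(age_years=45, ast_u_per_l=60, alt_u_per_l=80, platelets_10e9_per_l=150), 2))
```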
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
Making Semantic Waves: A Key to Cumulative Knowledge-Building
ERIC Educational Resources Information Center
Maton, Karl
2013-01-01
The paper begins by arguing that knowledge-blindness in educational research represents a serious obstacle to understanding knowledge-building. It then offers sociological concepts from Legitimation Code Theory--"semantic gravity" and "semantic density"--that systematically conceptualize one set of organizing principles underlying knowledge…
NASA Astrophysics Data System (ADS)
Wright, Robyn; Thornberg, Steven M.
SEDIDAT is a series of compiled IBM-BASIC (version 2.0) programs that direct the collection, statistical calculation, and graphic presentation of particle settling velocity and equivalent spherical diameter for samples analyzed using the settling tube technique. The programs follow a menu-driven format that is understood easily by students and scientists with little previous computer experience. Settling velocity is measured directly (cm/sec) and also converted into Chi units. Equivalent spherical diameter (reported in Phi units) is calculated using a modified Gibbs equation for different particle densities. Input parameters, such as water temperature, settling distance, particle density, run time, and Phi/Chi interval are changed easily at operator discretion. Optional output to a dot-matrix printer includes a summary of moment and graphic statistical parameters, a tabulation of individual and cumulative weight percents, a listing of major distribution modes, and cumulative and histogram plots of raw time, settling velocity, Chi, and Phi data.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
Non-Invasive Investigation of Bone Adaptation in Humans to Mechanical Loading
NASA Technical Reports Server (NTRS)
Whalen, R.
1999-01-01
Experimental studies have identified peak cyclic forces, number of loading cycles, and loading rate as contributors to the regulation of bone metabolism. We have proposed a theoretical model that relates bone density to a mechanical stimulus derived from average daily cumulative peak cyclic 'effective' tissue stresses. In order to develop a non-invasive experimental model to test the theoretical model we need to: (1) monitor daily cumulative loading on a bone, (2) compute the internal stress state(s) resulting from the imposed loading, and (3) image volumetric bone density accurately, precisely, and reproducibly within small contiguous volumes throughout the bone. We have chosen the calcaneus (heel) as an experimental model bone site because it is loaded by ligament, tendon and joint contact forces in equilibrium with daily ground reaction forces that we can measure; it is a peripheral bone site and therefore more easily and accurately imaged with computed tomography; it is composed primarily of cancellous bone; and it is a relevant site for monitoring bone loss and adaptation in astronauts and the general population. This paper presents an overview of our recent advances in the areas of monitoring daily ground reaction forces, biomechanical modeling of the forces on the calcaneus during gait, mathematical modeling of calcaneal bone adaptation in response to cumulative daily activity, accurate and precise imaging of the calcaneus with quantitative computed tomography (QCT), and application to long duration space flight.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
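The two-stage recipe summarised in this abstract (spectrally colour a white Gaussian field, then map it through an inverse transform to the target amplitude distribution) can be sketched in a few lines. The Gaussian-shaped power spectrum and the exponential target distribution below are arbitrary illustrative choices, not those of the cited paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 256                                   # grid size of the simulated 2-D signal

# Step 1: colour a white Gaussian field with a chosen power spectral density.
white = rng.standard_normal((n, n))
fx = np.fft.fftfreq(n)
fy = np.fft.fftfreq(n)
f2 = fx[None, :] ** 2 + fy[:, None] ** 2
psd = np.exp(-f2 / 0.01)                  # illustrative Gaussian-shaped PSD
coloured = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)))
coloured /= coloured.std()                # unit-variance coloured Gaussian field

# Step 2: map the coloured Gaussian field to the desired amplitude PDF
# via the probability integral transform (Gaussian CDF -> target inverse CDF).
u = stats.norm.cdf(coloured)              # uniform marginals, correlations retained
target = stats.expon(scale=2.0)           # illustrative target amplitude distribution
signal = target.ppf(u)

print("sample mean %.3f (target mean %.3f)" % (signal.mean(), target.mean()))
```

As the abstract notes, this is an engineering approximation: the nonlinear amplitude mapping distorts the spatial correlation somewhat, which is why the result is "satisfactory in most cases" rather than exact.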
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
…approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we apply to compute the pdf of our data. The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a …
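As a concrete illustration of the kernel density approach mentioned in this report snippet, the following sketch estimates a pdf from a sample with a Gaussian kernel. The sample and the use of the default bandwidth rule are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical irradiance-like sample (a lognormal mixture chosen arbitrarily).
sample = np.concatenate([rng.lognormal(0.0, 0.3, 800),
                         rng.lognormal(0.8, 0.2, 200)])

# Gaussian kernel density estimate; the default bandwidth follows Scott's rule.
kde = stats.gaussian_kde(sample)

x = np.linspace(sample.min(), sample.max(), 400)
pdf_hat = kde(x)                                  # estimated pdf on a grid
area = np.sum(pdf_hat) * (x[1] - x[0])            # Riemann check: should be close to 1
print("pdf peak at x = %.2f, integral = %.3f" % (x[pdf_hat.argmax()], area))
```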
Origins and implications of the relationship between warming and cumulative carbon emissions
NASA Astrophysics Data System (ADS)
Raupach, M. R.; Davis, S. J.; Peters, G. P.; Andrew, R. M.; Canadell, J.; Le Quere, C.
2014-12-01
A near-linear relationship between warming (T) and cumulative carbon emissions (Q) is a robust finding from numerous studies. This finding opens biophysical questions concerning (1) its theoretical basis, (2) the treatment of non-CO2 forcings, and (3) uncertainty specifications. Beyond these biophysical issues, a profound global policy question is raised: (4) how can a quota on cumulative emissions be shared? Here, an integrated survey of all four issues is attempted. (1) Proportionality between T and Q is an emergent property of a linear carbon-climate system forced by exponentially increasing CO2 emissions. This idealisation broadly explains past but not future near-proportionality between T and Q: in future, the roles of non-CO2 forcings and carbon-climate nonlinearities become important, and trajectory dependence becomes stronger. (2) The warming effects of short-lived non-CO2 forcers depend on instantaneous rather than cumulative fluxes. However, inertia in emissions trajectories reinstates some of the benefits of a cumulative emissions approach, with residual trajectory dependence comparable to that for CO2. (3) Uncertainties arise from several sources: climate projections, carbon-climate feedbacks, and residual trajectory dependencies in CO2 and other emissions. All of these can in principle be combined into a probability distribution P(T|Q) for the warming T from given cumulative CO2 emissions Q. Present knowledge of P(T|Q) allows quantification of the tradeoff between mitigation ambition and climate risk. (4) Cumulative emissions consistent with a given warming target and climate risk are a finite common resource that will inevitably be shared, creating a tragedy-of-the-commons dilemma. Sharing options range from "inertia" (present distribution of emissions is maintained) to "equity" (cumulative emissions are distributed equally per-capita). Both extreme options lead to emissions distributions that are unrealisable in practice, but a blend of the two extremes may be realisable. This perspective provides a means for nations to compare the global consequences of their own proposed emissions quotas if others were to act in a consistent way, a critical step towards achieving consensus.
Role of olivine cumulates in destabilizing the flanks of Hawaiian volcanoes
Clague, D.A.; Denlinger, R.P.
1994-01-01
The south flank of Kilauea Volcano is unstable and has the structure of a huge landslide; it is one of at least 17 enormous catastrophic landslides shed from the Hawaiian Islands. Mechanisms previously proposed for movement of the south flank invoke slip of the volcanic pile over seafloor sediments. Slip on a low friction décollement alone cannot explain why the thickest and widest sector of the flank moves more rapidly than the rest, or why this section contains a 300 km³ aseismic volume above the seismically defined décollement. It is proposed that this aseismic volume, adjacent to the caldera in the direction of flank slip, consists of olivine cumulates that creep outward, pushing the south flank seawards. Average primary Kilauea tholeiitic magma contains about 16.5 wt.% MgO compared with an average 10 wt.% MgO for erupted subaerial and submarine basalts. This difference requires fractionation of 17 wt.% (14 vol.%) olivine phenocrysts that accumulate near the base of the magma reservoir where they form cumulates. Submarine-erupted Kilauea lavas contain abundant deformed olivine xenocrysts derived from these cumulates. Deformed dunite formed during the tholeiitic shield stage is also erupted as xenoliths in subsequent alkalic lavas. The deformation structures in olivine xenocrysts suggest that the cumulus olivine was densely packed, probably with as little as 5-10 vol.% intercumulus liquid, before entrainment of the xenocrysts. The olivine cumulates were at magmatic temperatures (>1100°C) when the xenocrysts were entrained. Olivine at 1100°C has a rheology similar to ice, and the olivine cumulates should flow down and away from the summit of the volcano. Flow of the olivine cumulates places constant pressure on the unbuttressed seaward flank, leading to an extensional region that localizes deep intrusions behind the flank; these intrusions add to the seaward push. This mechanism ties the source of gravitational instability to the caldera complex and deep rift systems and, therefore, limits catastrophic sector failure of Hawaiian volcanoes to their active growth phase, when the core of olivine cumulates is still hot enough to flow. © 1994 Springer-Verlag.
Effects of ultraviolet radiation and contaminant-related stressors on arctic freshwater ecosystems.
Wrona, Frederick J; Prowse, Terry D; Reist, James D; Hobbie, John E; Lévesque, Lucie M J; Macdonald, Robie W; Vincent, Warwick F
2006-11-01
Climate change is likely to act as a multiple stressor, leading to cumulative and/or synergistic impacts on aquatic systems. Projected increases in temperature and corresponding alterations in precipitation regimes will enhance contaminant influxes to aquatic systems, and independently increase the susceptibility of aquatic organisms to contaminant exposure and effects. The consequences for the biota will in most cases be additive (cumulative) and multiplicative (synergistic). The overall result will be higher contaminant loads and biomagnification in aquatic ecosystems. Changes in stratospheric ozone and corresponding ultraviolet radiation regimes are also expected to produce cumulative and/or synergistic effects on aquatic ecosystem structure and function. Reduced ice cover is likely to have a much greater effect on underwater UV radiation exposure than the projected levels of stratospheric ozone depletion. A major increase in UV radiation levels will cause enhanced damage to organisms (biomolecular, cellular, and physiological damage, and alterations in species composition). Allocations of energy and resources by aquatic biota to UV radiation protection will increase, probably decreasing trophic-level productivity. Elemental fluxes will increase via photochemical pathways.
The 1/N Expansion of Tensor Models Beyond Perturbation Theory
NASA Astrophysics Data System (ADS)
Gurau, Razvan
2014-09-01
We analyze in full mathematical rigor the most general quartically perturbed invariant probability measure for a random tensor. Using a version of the Loop Vertex Expansion (which we call the mixed expansion) we show that the cumulants can be written as explicit series in 1/N plus bounded remainder terms. The mixed expansion recasts the problem of determining the subleading corrections in 1/N into a simple combinatorial problem of counting trees decorated by a finite number of loop edges. As an aside, we use the mixed expansion to show that the (divergent) perturbative expansion of the tensor models is Borel summable and to prove that the cumulants respect a uniform scaling bound. In particular, the quartically perturbed measures fall, in the N → ∞ limit, into the universality class of Gaussian tensor models.
Ng, Siew C; Zeng, Zhirong; Niewiadomski, Ola; Tang, Whitney; Bell, Sally; Kamm, Michael A; Hu, Pinjin; de Silva, H Janaka; Niriella, Madunil A; Udara, W S A A Yasith; Ong, David; Ling, Khoon Lin; Ooi, Choon Jin; Hilmi, Ida; Lee Goh, Khean; Ouyang, Qin; Wang, Yu Fang; Wu, Kaichun; Wang, Xin; Pisespongsa, Pises; Manatsathit, Sathaporn; Aniwan, Satimai; Limsrivilai, Julajak; Gunawan, Jeffri; Simadibrata, Marcellus; Abdullah, Murdani; Tsang, Steve W C; Lo, Fu Hang; Hui, Aric J; Chow, Chung Mo; Yu, Hon Ho; Li, Mo Fong; Ng, Ka Kei; Ching, Jessica Y L; Chan, Victor; Wu, Justin C Y; Chan, Francis K L; Chen, Minhu; Sung, Joseph J Y
2016-01-01
The incidence of inflammatory bowel disease (IBD) is increasing in Asia, but little is known about disease progression in this region. The Asia-Pacific Crohn's and Colitis Epidemiology Study was initiated in 2011, enrolling subjects from 8 countries in Asia (China, Hong Kong, Indonesia, Sri Lanka, Macau, Malaysia, Singapore, and Thailand) and Australia. We present data from this ongoing study. We collected data on 413 patients diagnosed with IBD (222 with ulcerative colitis [UC], 181 with Crohn's disease [CD], 10 with IBD unclassified; median age, 37 y) from 2011 through 2013. We analyzed the disease course and severity and mortality. Risks for medical and surgical therapies were assessed using Kaplan-Meier analysis. The cumulative probability that CD would change from inflammatory to stricturing or penetrating disease was 19.6%. The cumulative probabilities for use of immunosuppressants or anti-tumor necrosis factor agents were 58.9% and 12.0% for patients with CD, and 12.7% and 0.9% for patients with UC, respectively. Perianal CD was associated with an increased risk of anti-tumor necrosis factor therapy within 1 year of its diagnosis (hazard ratio, 2.97; 95% confidence interval, 1.09-8.09). The cumulative probabilities for surgery 1 year after diagnosis were 9.1% for patients with CD and 0.9% for patients with UC. Patients with CD and penetrating disease had a 7-fold increase for risk of surgery, compared with patients with inflammatory disease (hazard ratio, 7.67; 95% confidence interval, 3.93-14.96). The overall mortality for patients with IBD was 0.7%. In a prospective population-based study, we found that the early course of disease in patients with IBD in Asia was comparable with that of the West. Patients with CD frequently progress to complicated disease and have accelerated use of immunosuppressants. Few patients with early stage UC undergo surgery in Asia. Increasing our understanding of IBD progression in different populations can help optimize therapy and improve outcomes. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.
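Cumulative probabilities of the kind reported above (e.g. time to first use of immunosuppressants) are typically obtained with the Kaplan-Meier estimator. The sketch below computes it by hand on fabricated follow-up data purely to show the mechanics; it is not the study's dataset or analysis code.

```python
import numpy as np

# Hypothetical follow-up times (months) and event indicators (1 = event, 0 = censored).
time  = np.array([3, 5, 6, 6, 8, 12, 12, 14, 20, 24])
event = np.array([1, 0, 1, 1, 0,  1,  0,  1,  0,  0])

order = np.argsort(time)
time, event = time[order], event[order]

surv = 1.0
print("  t  at-risk  events   S(t)  cumulative probability")
for t in np.unique(time[event == 1]):
    at_risk = np.sum(time >= t)             # subjects still under observation at t
    d = np.sum((time == t) & (event == 1))  # events occurring at t
    surv *= (1.0 - d / at_risk)             # Kaplan-Meier product-limit update
    print(f"{t:3d}  {at_risk:7d}  {d:6d}  {surv:6.3f}  {1.0 - surv:6.3f}")
```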
flexsurv: A Platform for Parametric Survival Modeling in R
Jackson, Christopher H.
2018-01-01
flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450
Diversity Performance Analysis on Multiple HAP Networks.
Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue
2015-06-30
One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques.
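To make the PDF/CDF and symbol-error-rate quantities above concrete, the following Monte Carlo sketch estimates the CDF of the combined SNR and the average BPSK symbol error rate for a simple receive-diversity combiner over Rician fading. The shadowing component of the paper's channel model is omitted, and the K-factor, branch count, and SNR are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_trials, n_branches = 200_000, 4            # diversity branches (illustrative)
K = 5.0                                      # Rician K-factor
snr_avg = 10.0 ** (10.0 / 10.0)              # 10 dB average SNR per branch

# Rician fading gains: deterministic LOS part plus scattered complex Gaussian part.
los = np.sqrt(K / (K + 1.0))
nlos = np.sqrt(1.0 / (K + 1.0))
h = los + nlos * (rng.standard_normal((n_trials, n_branches))
                  + 1j * rng.standard_normal((n_trials, n_branches))) / np.sqrt(2.0)

snr = snr_avg * np.sum(np.abs(h) ** 2, axis=1)   # post-combining instantaneous SNR

# Empirical CDF of the combined SNR at a chosen threshold.
gamma0 = 20.0
print("P(SNR <= %.0f) = %.4f" % (gamma0, np.mean(snr <= gamma0)))

# Average BPSK symbol error rate: Q(sqrt(2*SNR)) averaged over the fading.
ser = np.mean(stats.norm.sf(np.sqrt(2.0 * snr)))
print("average BPSK SER = %.2e" % ser)
```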
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
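The quasi-extinction probabilities contrasted above can be illustrated with a small stochastic Ricker simulation. The sketch below is a generic example of estimating quasi-extinction probability by Monte Carlo; the parameter values, noise model, and threshold are arbitrary assumptions, not those of the cited analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def quasi_extinction_prob(r=0.8, K=100.0, sigma=0.4, n0=50.0,
                          threshold=10.0, years=50, n_reps=5000):
    """Fraction of stochastic Ricker trajectories that fall below the threshold."""
    hits = 0
    for _ in range(n_reps):
        n = n0
        for _ in range(years):
            # Ricker map with lognormal environmental noise.
            n = n * np.exp(r * (1.0 - n / K) + sigma * rng.standard_normal())
            if n < threshold:
                hits += 1
                break
    return hits / n_reps

print("P(quasi-extinction within 50 yr) =", quasi_extinction_prob())
```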
Kuselman, Ilya; Pennecchi, Francesca R; da Silva, Ricardo J N B; Hibbert, D Brynn
2017-11-01
The probability of a false decision on conformity of a multicomponent material due to measurement uncertainty is discussed when test results are correlated. Specification limits of the components' content of such a material generate a multivariate specification interval/domain. When true values of components' content and corresponding test results are modelled by multivariate distributions (e.g. by multivariate normal distributions), a total global risk of a false decision on the material conformity can be evaluated based on calculation of integrals of their joint probability density function. No transformation of the raw data is required for that. A total specific risk can be evaluated as the joint posterior cumulative function of true values of a specific batch or lot lying outside the multivariate specification domain, when the vector of test results, obtained for the lot, is inside this domain. It was shown, using a case study of four components under control in a drug, that the correlation influence on the risk value is not easily predictable. To assess this influence, the evaluated total risk values were compared with those calculated for independent test results and also with those assuming much stronger correlation than that observed. While the observed statistically significant correlation did not lead to a visible difference in the total risk values in comparison to the independent test results, the stronger correlation among the variables caused either the total risk decreasing or its increasing, depending on the actual values of the test results. Copyright © 2017 Elsevier B.V. All rights reserved.
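The total global risk described above, i.e. the probability that a conforming-looking test result hides a true value outside the multivariate specification domain, can be approximated by Monte Carlo integration of the joint density. In the sketch below the two-component specification limits, the covariance matrices, and the additive measurement-error model are invented for illustration and are not the case-study values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two correlated components with a rectangular specification domain (illustrative).
low, high = np.array([90.0, 4.5]), np.array([110.0, 5.5])

mu_true  = np.array([100.0, 5.0])
cov_true = np.array([[9.0, 1.2], [1.2, 0.04]])      # batch-to-batch variation
cov_meas = np.array([[1.0, 0.1], [0.1, 0.01]])      # correlated measurement errors

n = 500_000
true_vals = rng.multivariate_normal(mu_true, cov_true, size=n)
results = true_vals + rng.multivariate_normal(np.zeros(2), cov_meas, size=n)

def inside(z):
    """True where a point lies inside the multivariate specification domain."""
    return np.all((z >= low) & (z <= high), axis=1)

# Total global risk: the batch is truly non-conforming but the test result
# falls inside the specification domain (false acceptance).
risk = np.mean(~inside(true_vals) & inside(results))
print("total global consumer's risk  %.4f" % risk)
```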
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
Continuous-time random-walk model for financial distributions
NASA Astrophysics Data System (ADS)
Masoliver, Jaume; Montero, Miquel; Weiss, George H.
2003-02-01
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche Mark futures exchange, finding good agreement between theory and the observed data.
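A minimal continuous-time random walk of the kind applied above can be simulated once the two auxiliary densities are specified. Below, exponential pausing times and Gaussian jump magnitudes are assumed purely for illustration; they are not the densities fitted in the cited study.

```python
import numpy as np

rng = np.random.default_rng(5)

def ctrw_price_changes(horizon=1.0, n_paths=5_000,
                       mean_wait=0.02, jump_sigma=0.05):
    """Log-price change over a fixed horizon for a simple CTRW."""
    changes = np.empty(n_paths)
    for i in range(n_paths):
        t, x = 0.0, 0.0
        while True:
            t += rng.exponential(mean_wait)     # pausing time between successive jumps
            if t > horizon:
                break
            x += rng.normal(0.0, jump_sigma)    # magnitude of the price jump
        changes[i] = x
    return changes

dx = ctrw_price_changes()
print("std of price change over the horizon: %.4f" % dx.std())
```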
ERIC Educational Resources Information Center
Storkel, Holly L.; Lee, Su-Yeon
2011-01-01
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…
Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara
2013-01-01
Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…
Nuclear risk analysis of the Ulysses mission
NASA Astrophysics Data System (ADS)
Bartram, Bart W.; Vaughan, Frank R.; Englehart, Richard W., Dr.
1991-01-01
The use of a radioisotope thermoelectric generator fueled with plutonium-238 dioxide on the Space Shuttle-launched Ulysses mission implies some level of risk due to potential accidents. This paper describes the method used to quantify risks in the Ulysses mission Final Safety Analysis Report prepared for the U.S. Department of Energy. The starting point for the analysis described herein is the input of source term probability distributions provided by the General Electric Company. A Monte Carlo technique is used to develop probability distributions of radiological consequences for a range of accident scenarios throughout the mission. Factors affecting radiological consequences are identified, the probability distribution of the effect of each factor determined, and the functional relationship among all the factors established. The probability distributions of all the factor effects are then combined using a Monte Carlo technique. The results of the analysis are presented in terms of complementary cumulative distribution functions (CCDF) by mission sub-phase, phase, and the overall mission. The CCDFs show the total probability that consequences (calculated health effects) would be equal to or greater than a given value.
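The Monte Carlo construction of a complementary cumulative distribution function from several factor distributions can be sketched generically as follows. The three multiplicative factors and their distributions are placeholders for illustration, not the FSAR source terms or consequence model.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Placeholder factor distributions combined multiplicatively into a consequence.
source_term = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # released activity
dispersion  = rng.lognormal(mean=-1.0, sigma=0.5, size=n)  # atmospheric dilution
dose_factor = rng.uniform(0.5, 1.5, size=n)                # dose-conversion factor
consequence = source_term * dispersion * dose_factor

# Complementary cumulative distribution function (CCDF):
# probability that the consequence equals or exceeds a given value.
levels = np.logspace(-3, 2, 6)
for c in levels:
    print("P(consequence >= %8.3f) = %.4f" % (c, np.mean(consequence >= c)))
```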
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
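The multiplicative-update idea above (each visit to an energy bin multiplies that bin's density-of-states estimate by a factor that shrinks toward one) is the same device used in Wang-Landau sampling. The toy sketch below applies that device to a lattice of independent two-level units, for which the exact density of states is binomial; the model, step counts, and convergence schedule are simplifications for illustration, not the paper's self-consistent algorithm.

```python
import numpy as np
from math import comb, log

rng = np.random.default_rng(7)

N = 12                                    # toy system: N two-level units
state = np.zeros(N, dtype=int)            # 0 = ground, 1 = excited
energy = state.sum()                      # energy = number of excited units

log_g = np.zeros(N + 1)                   # running estimate of log density of states
hist = np.zeros(N + 1)
log_f = 1.0                               # multiplicative update factor, ln f

while log_f > 1e-3:
    for _ in range(10_000):
        i = rng.integers(N)
        new_energy = energy + (1 - 2 * state[i])       # effect of flipping unit i
        # Accept with probability min(1, g(E_old)/g(E_new)) to flatten the histogram.
        if np.log(rng.random()) < log_g[energy] - log_g[new_energy]:
            state[i] ^= 1
            energy = new_energy
        log_g[energy] += log_f
        hist[energy] += 1
    if hist.min() > 0.8 * hist.mean():     # crude flatness check
        log_f /= 2.0                       # shrink the update factor and continue
        hist[:] = 0.0

# Compare with the exact result g(E) = C(N, E), up to an additive constant in log.
exact = np.array([log(comb(N, e)) for e in range(N + 1)])
print("max |error| in log g:",
      np.abs((log_g - log_g[0]) - (exact - exact[0])).max())
```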
Effects of livestock species and stocking density on accretion rates in grazed salt marshes
NASA Astrophysics Data System (ADS)
Nolte, Stefanie; Esselink, Peter; Bakker, Jan P.; Smit, Christian
2015-01-01
Coastal ecosystems, such as salt marshes, are threatened by accelerated sea-level rise (SLR). Salt marshes deliver valuable ecosystem services such as coastal protection and the provision of habitat for a unique flora and fauna. Whether salt marshes in the Wadden Sea area are able to survive accelerated SLR depends on sufficient deposition of sediments which add to vertical marsh accretion. Accretion rate is influenced by a number of factors, and livestock grazing was recently included. Livestock grazing is assumed to reduce accretion rates in two ways: (a) directly by increasing soil compaction through trampling, and (b) indirectly by affecting the vegetation structure, which may lower the sediment deposition. For four years, we studied the impact of two livestock species (horse and cattle) at two stocking densities (0.5 and 1.0 animal ha-1) on accretion in a large-scale grazing experiment using sedimentation plates. We found lower cumulative accretion rates in high stocking densities, probably because more animals cause more compaction and create a lower canopy. Furthermore, a trend towards lower accretion rates in horse-compared to cattle-grazed treatments was found, most likely because (1) horses are more active and thus cause more compaction, and (2) herbage intake by horses is higher than by cattle, which causes a higher biomass removal and shorter canopy. During summer periods, negative accretion rates were found. When the grazing and non-grazing seasons were separated, the impact of grazing differed among years. In summer, we only found an effect of different treatments if soil moisture (precipitation) was relatively low. In winter, a sufficiently high inundation frequency was necessary to create differences between grazing treatments. We conclude that stocking densities, and to a certain extent also livestock species, affect accretion rates in salt marshes. Both stocking densities and livestock species should thus be taken into account in management decisions of salt marshes. In our study accretion rates were higher than the current SLR. Further research is needed to include grazing effects into sedimentation models, given the importance of grazing management in the Wadden Sea area.
Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that using the protocols in this study eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
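The false-negative comparison described above ultimately reduces to cumulative detection probabilities across repeated samples. A minimal sketch follows; the eDNA per-sample value echoes the 0.18 quoted in the abstract for one fish per stream kilometer, while the electrofishing value is a made-up placeholder.

```python
import numpy as np

# Per-sample detection probabilities at a given (low) fish density.
p_edna, p_electrofish = 0.18, 0.05        # electrofishing value is hypothetical
n_samples = np.arange(1, 11)

# Probability of at least one detection in n independent samples
# (one minus the false-negative rate).
detect_edna = 1.0 - (1.0 - p_edna) ** n_samples
detect_ef = 1.0 - (1.0 - p_electrofish) ** n_samples

for n, de, df in zip(n_samples, detect_edna, detect_ef):
    print(f"{n:2d} samples  eDNA {de:.2f}  electrofishing {df:.2f}")
```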
NASA Astrophysics Data System (ADS)
Piotrowska, M. J.; Bodnar, M.
2018-01-01
We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system reaction to the presence of tumour cells but only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the forms of the considered probability densities, that is: Erlang, triangular, and uniform probability densities, separated or not from zero. Particular instability results are obtained for a general type of probability density. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mouse B-cell lymphoma, showing mean square errors at comparable levels. For the estimated sets of parameters we discuss the possibility of stabilisation of the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using a standard algorithm for ordinary differential equations or for differential equations with discrete delays.
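The linear chain trick mentioned at the end replaces an Erlang-distributed delay with a chain of intermediate ODE compartments, so a standard ODE solver can be used. The sketch below applies it to a generic delayed-logistic equation, not to the specific tumour-immune model of the paper; the parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Delayed logistic growth where the delay is Erlang(k, a)-distributed.
r, K, k, a = 1.0, 1.0, 4, 2.0             # growth rate, capacity, Erlang shape/rate

def rhs(t, z):
    x, y = z[0], z[1:]
    dx = r * x * (1.0 - y[-1] / K)         # feedback uses the delayed state y_k
    dy = np.empty(k)
    dy[0] = a * (x - y[0])                 # linear chain: k extra compartments
    dy[1:] = a * (y[:-1] - y[1:])          # together they realise the Erlang kernel
    return np.concatenate(([dx], dy))

z0 = np.concatenate(([0.1], np.full(k, 0.1)))
sol = solve_ivp(rhs, (0.0, 60.0), z0, max_step=0.05)
print("x(T) =", sol.y[0, -1])              # oscillates or settles depending on k/a and r
```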
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes
NASA Astrophysics Data System (ADS)
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-01
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Competition between harvester ants and rodents in the cold desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.
1979-09-30
Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August, whereas the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr
In this study we examined and compared three different probabilistic distribution methods to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 for magnitudes M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of earthquake occurrence for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distribution methods in this region.
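The recurrence-model workflow in this abstract (fit a candidate distribution to inter-event times, check the fit with a K-S test, then compute the conditional probability of an event within a future window given the elapsed time) can be illustrated as below. The inter-event times are synthetic and only the two-parameter Weibull case is shown; this is not the authors' catalogue or software.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Synthetic inter-event times (years) between M >= 6.0 earthquakes.
intervals = rng.weibull(1.6, size=40) * 25.0

# Fit a two-parameter Weibull (location fixed at zero) and test the fit.
shape, loc, scale = stats.weibull_min.fit(intervals, floc=0.0)
ks = stats.kstest(intervals, "weibull_min", args=(shape, loc, scale))
print(f"shape {shape:.2f}  scale {scale:.1f}  K-S p-value {ks.pvalue:.2f}")

dist = stats.weibull_min(shape, loc, scale)

def conditional_prob(t, dt):
    """Probability of an event within dt years, given t years already elapsed."""
    return (dist.cdf(t + dt) - dist.cdf(t)) / dist.sf(t)

for t in (10.0, 25.0, 50.0):
    print(f"elapsed {t:4.0f} yr -> P(event within 10 yr) = {conditional_prob(t, 10.0):.2f}")
```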
Krimmel, R.M.
1999-01-01
Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' e.g. area averages of snow gain and firn and ice loss at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method because the geodetic method is based on a non-changing reference, the bedrock control, whereas the direct method is measured with reference to only the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include under-estimation of the density of lost material, sinking stakes, or poorly represented areas.
NASA Astrophysics Data System (ADS)
Justman, D.; Rose, K.; Bauer, J. R.; Miller, R., III; Vasylkivska, V.; Romeo, L.
2016-12-01
ArcGIS Online story maps allows users to communicate complex topics with geospatially enabled stories. This story map web application entitled "Evaluating the Mysteries of Seismicity in Oklahoma" has been employed as part of a broader research effort investigating the relationships between spatiotemporal systems and seismicity to understand the recent increase in seismicity by reviewing literature, exploring, and performing analyses on key datasets. It offers information about the unprecedented increase in seismic events since 2008, earthquake history, the risk to the population, physical mechanisms behind earthquakes, natural and anthropogenic earthquake factors, and individual & cumulative spatial extents of these factors. The cumulative spatial extents for natural, anthropogenic, and all combined earthquake factors were determined using the Cumulative Spatial Impact Layers (CSILs) tool developed at the National Energy Technology Laboratory (NETL). Results show positive correlations between the average number of influences (datasets related to individual factors) and the number of earthquakes for every 100 square mile grid cell in Oklahoma, along with interesting spatial correlations for the individual & cumulative spatial extents of these factors when overlaid with earthquake density and a hotspot analysis for earthquake magnitude from 2010 to 2015.
Bakas, Panagiotis; Boutas, Ioannis; Creatsa, Maria; Vlahos, Nicos; Gregoriou, Odysseas; Creatsas, George; Hassiakos, Dimitrios
2015-10-01
To assess whether the levels of anti-Mullerian hormone (AMH) are related to the outcome of intrauterine insemination (IUI) in patients treated with gonadotropins. A total of 195 patients underwent controlled ovarian stimulation (COS) with recombinant follicle stimulating hormone (rFSH) (50-150 IU/d). All patients underwent up to three cycles of IUI. The primary outcome was the ability of AMH levels to predict clinical pregnancy at the first attempt and the cumulative clinical pregnancy probability over up to three IUI cycles. Secondary outcomes were the relation of AMH, LH, FSH, BMI, age, parity, and basal estradiol levels with each other and with the outcome of IUI. The area under the receiver operating characteristic (ROC) curve for AMH in predicting clinical pregnancy at the first attempt was 0.53 and for cumulative clinical pregnancy was 0.76. AMH levels were positively correlated with the clinical pregnancy rate at the first attempt and with the cumulative clinical pregnancy rate, but negatively correlated with patient age and FSH levels. Patients' FSH and LH levels were negatively correlated with the cumulative clinical pregnancy rate. AMH levels had a positive correlation, and patient age and LH levels a negative correlation, with the outcome of IUI and COS with gonadotropins. AMH concentration was significantly higher and LH significantly lower in patients with a clinical pregnancy after three cycles of IUI treatment compared with those who did not achieve pregnancy.
Yuan, Juxiang; Han, Bing; Cui, Kai; Ding, Yu; Fan, Xueyun; Cao, Hong; Yao, Sanqiao; Suo, Xia; Sun, Zhiqian; Yun, Xiang; Hua, Zhengbing; Chen, Jie
2015-01-01
We aimed to estimate the economic losses currently caused by coal workers' pneumoconiosis (CWP) and, on the basis of these measurements, confirm the economic benefit of preventive measures. Our cohort study included 1,847 patients with CWP and 43,742 coal workers without CWP who were registered in the employment records of the Datong Coal Mine Group. We calculated the cumulative incidence rate of pneumoconiosis using the life-table method. We used the dose-response relationship between cumulative incidence density and cumulative dust exposure to predict the future trend in the incidence of CWP. We calculated the economic loss caused by CWP and the economic effectiveness of CWP prevention using a step-wise model. The cumulative incidence rates of CWP in the tunneling, mining, combining, and helping cohorts were 58.7%, 28.1%, 21.7%, and 4.0%, respectively. The cumulative incidence rates increased gradually with increasing cumulative dust exposure (CDE). We predicted 4,300 new CWP cases, assuming the dust concentrations remained at the levels of 2011. If advanced dustproof equipment were adopted, 537 fewer people would be diagnosed with CWP. In all, losses of 1.207 billion Renminbi (RMB, the official currency of China) would be prevented and 4,698.8 healthy life years would be gained. Investments in advanced dustproof equipment would total 843 million RMB, according to our study; the ratio of investment to restored economic losses was 1:1.43. Controlling workplace dust concentrations is critical to reduce the onset of pneumoconiosis and to achieve economic benefits. PMID:26098706
Evidence-based evaluation of the cumulative effects of ecosystem restoration
Diefenderfer, Heida L.; Johnson, Gary E.; Thom, Ronald M.; ...
2016-03-18
Evaluating the cumulative effects of large-scale ecological restoration programs is necessary to inform adaptive ecosystem management and provide society with resilient and sustainable services. However, complex linkages between restorative actions and ecosystem responses make evaluations problematic. Despite long-term federal investments in restoring aquatic ecosystems, no standard evaluation method has been adopted and most programs focus on monitoring and analysis, not synthesis and evaluation. In this paper, we demonstrate a new transdisciplinary approach integrating techniques from evidence-based medicine, critical thinking, and cumulative effects assessment. Tiered hypotheses are identified using an ecosystem conceptual model. The systematic literature review at the core of evidence-based assessment becomes one of many lines of evidence assessed collectively, using critical thinking strategies and causal criteria from a cumulative effects perspective. As a demonstration, we analyzed data from 166 locations on the Columbia River and estuary representing 12 indicators of habitat and fish response to floodplain restoration actions intended to benefit threatened and endangered salmon. Synthesis of seven lines of evidence showed that hydrologic reconnection promoted macrodetritus export, prey availability, and fish access and feeding. The evidence was sufficient to infer cross-boundary, indirect, compounding and delayed cumulative effects, and suggestive of nonlinear, landscape-scale, and spatial density effects. On the basis of causal inferences regarding food web functions, we concluded that the restoration program has a cumulative beneficial effect on juvenile salmon. As a result, this evidence-based approach will enable the evaluation of restoration in complex coastal and riverine ecosystems where data have accumulated without sufficient synthesis.
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
40 CFR 60.433 - Performance test and compliance provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... indicating the cumulative liquid volumes used at each affected facility; or (ii) Segregated storage tanks for... related coatings measured as used by volume with different amounts of VOC content or different densities. n is the total number of raw inks and related coatings measured as used by volume with different...
Redundancy and reduction: Speakers manage syntactic information density
Florian Jaeger, T.
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
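The ensemble described above is easy to sample numerically: draw two independent bipartite pure states, partially trace each, and diagonalise the difference. The sketch below does this for small dimensions to estimate the spectral statistics and the typical trace distance; the dimensions and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def random_reduced_state(d_sys, d_env):
    """Reduced density matrix of a Haar-random bipartite pure state."""
    psi = rng.standard_normal((d_sys, d_env)) + 1j * rng.standard_normal((d_sys, d_env))
    psi /= np.linalg.norm(psi)             # normalise the full pure state
    return psi @ psi.conj().T              # partial trace over the environment

d_sys, d_env, n_samples = 8, 16, 2000
eigs, trace_dists = [], []
for _ in range(n_samples):
    delta = random_reduced_state(d_sys, d_env) - random_reduced_state(d_sys, d_env)
    w = np.linalg.eigvalsh(delta)          # spectrum of the difference matrix
    eigs.append(w)
    trace_dists.append(0.5 * np.sum(np.abs(w)))

eigs = np.concatenate(eigs)
print("mean eigenvalue        %.4f" % eigs.mean())        # ~0 by symmetry
print("typical trace distance %.4f" % np.mean(trace_dists))
```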
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
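The workflow described above, estimating parameters of several candidate probability density functions from raw habitat-use data and selecting among them, can be sketched with scipy and AIC. The depth data below are synthetic and the candidate set is an illustrative assumption, not the authors' data or R code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Synthetic water depths (m) at fish observation points.
depth = rng.gamma(shape=3.0, scale=0.25, size=300)

candidates = {
    "gamma":   stats.gamma,
    "lognorm": stats.lognorm,
    "weibull": stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(depth, floc=0.0)           # fit with location fixed at zero
    loglik = np.sum(dist.logpdf(depth, *params))
    aic = 2 * (len(params) - 1) - 2 * loglik     # minus 1 for the fixed location
    results[name] = aic

for name, aic in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:8s} AIC {aic:8.1f}")
print("selected HSC curve family:", min(results, key=results.get))
```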
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Age-dependent associations between androgenetic alopecia and prostate cancer risk.
Muller, David C; Giles, Graham G; Sinclair, Rod; Hopper, John L; English, Dallas R; Severi, Gianluca
2013-02-01
Both prostate cancer and androgenetic alopecia are strongly age-related conditions that are considered to be androgen dependent, but studies of the relationship between them have yielded inconsistent results. We aimed to assess whether androgenetic alopecia at ages 20 and 40 years is associated with risk of prostate cancer. At a follow-up of the Melbourne Collaborative Cohort Study, men were asked to assess their hair pattern at ages 20 and 40 years relative to eight categories on showcards. Cases were men notified to the Victorian Cancer Registry with prostate cancer diagnosed between cohort enrollment (1990-1994) and follow-up attendance (2003-2009). Flexible parametric survival models were used to estimate age-varying HRs and predicted cumulative probabilities of prostate cancer by androgenetic alopecia category. Of the 9,448 men who attended follow-up and provided data on androgenetic alopecia, we identified 476 prostate cancer cases during a median follow-up of 11 years and 4 months. The cumulative probability of prostate cancer was greater at all ages up to 76 years for men with vertex androgenetic alopecia at age 40 years than for men with no androgenetic alopecia. At age 76 years, the estimated probabilities converged to 0.15. Vertex androgenetic alopecia at age 40 years was also associated with a younger age at diagnosis among prostate cancer cases. Vertex androgenetic alopecia at age 40 years might be a marker of increased risk of early-onset prostate cancer. If confirmed, these results suggest that the apparently conflicting findings of previous studies might be explained by failure to adequately model the age-varying nature of the association between androgenetic alopecia and prostate cancer.
Ensrud, Kristine E; Taylor, Brent C; Peters, Katherine W; Gourlay, Margaret L; Donaldson, Meghan G; Leslie, William D; Blackwell, Terri L; Fink, Howard A; Orwoll, Eric S; Schousboe, John
2014-07-03
To quantify incremental effects of applying different criteria to identify men who are candidates for drug treatment to prevent fracture and to examine the extent to which fracture probabilities vary across distinct categories of men defined by these criteria. Cross sectional and longitudinal analysis of a prospective cohort study. Multicenter Osteoporotic Fractures in Men (MrOS) study in the United States. 5880 untreated community dwelling men aged 65 years or over classified into four distinct groups: osteoporosis by World Health Organization criteria alone; osteoporosis by National Osteoporosis Foundation (NOF) but not WHO criteria; no osteoporosis but at high fracture risk (at or above NOF derived FRAX intervention thresholds recommended for US); and no osteoporosis and at low fracture risk (below NOF derived FRAX intervention thresholds recommended for US). Proportion of men identified for drug treatment; predicted 10 year probabilities of hip and major osteoporotic fracture calculated using FRAX algorithm with femoral neck bone mineral density; observed 10 year probabilities for confirmed incident hip and major osteoporotic (hip, clinical vertebral, wrist, or humerus) fracture events calculated using cumulative incidence estimation, accounting for competing risk of mortality. 130 (2.2%) men were identified as having osteoporosis by using the WHO definition, and an additional 422 were identified by applying the NOF definition (total osteoporosis prevalence 9.4%). Application of NOF derived FRAX intervention thresholds led to 936 (15.9%) additional men without osteoporosis being identified as at high fracture risk, raising the total prevalence of men potentially eligible for drug treatment to 25.3%. Observed 10 year hip fracture probabilities were 20.6% for men with osteoporosis by WHO criteria alone, 6.8% for men with osteoporosis by NOF (but not WHO) criteria, 6.4% for men without osteoporosis but classified as at high fracture risk, and 1.5% for men without osteoporosis and classified as at low fracture risk. A similar pattern was noted in observed fracture probabilities for major osteoporotic fracture. Among men with osteoporosis by WHO criteria, observed fracture probabilities were greater than FRAX predicted probabilities (20.6% v 9.5% for hip fracture and 30.0% v 17.4% for major osteoporotic fracture). Choice of definition of osteoporosis and use of NOF derived FRAX intervention thresholds have major effects on the proportion of older men identified as warranting drug treatment to prevent fracture. Among men identified with osteoporosis by WHO criteria, who comprised 2% of the study population, actual observed fracture probabilities during 10 years of follow-up were highest and exceeded FRAX predicted fracture probabilities. On the basis of findings from randomized trials in women, these men are most likely to benefit from treatment. Expanding indications for treatment beyond this small group has uncertain value owing to lower observed fracture probabilities and uncertain benefits of treatment among men not selected on the basis of WHO criteria. © Ensrud et al 2014.
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrödinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented by lowering and raising operators. The lowering and raising operators can be obtained from the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave functions and the energy levels of the modified Pöschl-Teller potential. The wave functions and probability densities are plotted using the Delphi 7.0 programming language. Finally, the expectation values of quantum-mechanical operators can be calculated analytically, either in integral form or from the probability density graphs produced by the program.
Karin L. Riley; Rachel A. Loehman
2016-01-01
Climate changes are expected to increase fire frequency, fire season length, and cumulative area burned in the western United States. We focus on the potential impact of mid-21st- century climate changes on annual burn probability, fire season length, and large fire characteristics including number and size for a study area in the Northern Rocky Mountains....
Geronazzo-Alman, Lupo; Eisenberg, Ruth; Shen, Sa; Duarte, Cristiane S; Musa, George J; Wicks, Judith; Fan, Bin; Doan, Thao; Guffanti, Guia; Bresnahan, Michaeline; Hoven, Christina W
2017-04-01
Cumulative exposure to work-related traumatic events (CE) is a foreseeable risk for psychiatric disorders in first responders (FRs). Our objective was to examine indexes of work-related CE that could serve as predictors of posttraumatic stress disorder (PTSD) and/or depression in FRs. Cross-sectional examination of previous CE and past-month PTSD and depression outcomes in 209 FRs. Logistic (probable PTSD; probable depression) and Poisson regressions (PTSD score) of the outcomes on work-related CE indexes, adjusting for demographic variables. Differences across occupational groups were also examined. Receiver operating characteristic analysis determined the sensitivity and specificity of the CE indexes. All indexes were significantly, and differently, associated with PTSD; associations with depression were non-significant. The index capturing the sheer number of different incidents experienced, regardless of frequency ('Variety'), showed conceptual, practical and statistical advantages over the other indexes. In general, the indexes showed poor to fair discrimination accuracy. Work-related CE is specifically associated with PTSD. Focusing on the variety of exposures may be a simple and effective strategy to predict PTSD in FRs. Further research on the sensitivity and specificity of exposure indexes, preferably examined prospectively, is needed and could lead to early identification of individuals at risk. Copyright © 2016 Elsevier Inc. All rights reserved.
Prognostic Factors in Severe Chagasic Heart Failure
Costa, Sandra de Araújo; Rassi, Salvador; Freitas, Elis Marra da Madeira; Gutierrez, Natália da Silva; Boaventura, Fabiana Miranda; Sampaio, Larissa Pereira da Costa; Silva, João Bastista Masson
2017-01-01
Background: Prognostic factors have been extensively studied in heart failure; however, their role in severe Chagasic heart failure has not been established. Objectives: To identify the association of clinical and laboratory factors with the prognosis of severe Chagasic heart failure, as well as the association of these factors with mortality and survival over a 7.5-year follow-up. Methods: 60 patients with severe Chagasic heart failure were evaluated with regard to the following variables: age, blood pressure, ejection fraction, serum sodium, creatinine, 6-minute walk test, non-sustained ventricular tachycardia, QRS width, indexed left atrial volume, and functional class. Results: 53 (88.3%) patients died during follow-up, and 7 (11.7%) remained alive. The cumulative overall survival probability was approximately 11%. Non-sustained ventricular tachycardia (HR = 2.11; 95% CI: 1.04-4.31; p < 0.05) and indexed left atrial volume ≥ 72 mL/m2 (HR = 3.51; 95% CI: 1.63-7.52; p < 0.05) were the only variables that remained as independent predictors of mortality. Conclusions: The presence of non-sustained ventricular tachycardia on Holter monitoring and indexed left atrial volume > 72 mL/m2 are independent predictors of mortality in severe Chagasic heart failure, with a cumulative survival probability of only 11% over 7.5 years. PMID:28443956
Liu, Yanfang; Liao, Huidan; Liu, Ying; Guo, Juanjuan; Sun, Yi; Fu, Xiaoliang; Xiao, Ding; Cai, Jifeng; Lan, Lingmei; Xie, Pingli; Zha, Lagabaiyila
2017-04-01
Nonbinary single-nucleotide polymorphisms (SNPs) are potential forensic genetic markers because their discrimination power is greater than that of normal binary SNPs and because they can be typed in highly degraded samples. We previously developed a nonbinary SNP multiplex typing assay. In this study, we selected an additional 20 nonbinary SNPs from the NCBI SNP database and verified them through pyrosequencing. These 20 nonbinary SNPs were analyzed using the fluorescent-labeled SNaPshot multiplex SNP typing method. The allele frequencies and genetic parameters of these 20 nonbinary SNPs were determined among 314 unrelated individuals from Han populations in China. The total power of discrimination was 0.9999999999994, and the cumulative probability of exclusion was 0.9986. Moreover, combining this 20-nonbinary-SNP assay with the 20-nonbinary-SNP assay we previously developed gave a cumulative probability of exclusion of 0.999991 for the 40 nonbinary SNPs, and no significant linkage disequilibrium was observed among the 40 nonbinary SNPs. Thus, we conclude that this new system of 20 new nonbinary SNPs provides highly informative polymorphic data for forensic applications and could serve as a valuable supplement to forensic DNA analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
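The way per-marker values are usually combined into cumulative figures is shown below; the formulas are the standard ones for independent markers, and the per-SNP numbers are placeholders rather than the study's values.

```python
import numpy as np

pd_per_snp = np.full(20, 0.80)   # illustrative per-SNP power of discrimination
pe_per_snp = np.full(20, 0.28)   # illustrative per-SNP probability of exclusion

# Cumulative values over independent markers: 1 minus the product of the per-marker complements.
cumulative_pd = 1.0 - np.prod(1.0 - pd_per_snp)
cumulative_pe = 1.0 - np.prod(1.0 - pe_per_snp)
print(f"cumulative power of discrimination = {cumulative_pd:.10f}")
print(f"cumulative probability of exclusion = {cumulative_pe:.6f}")
```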
Nanosecond multiple pulse measurements and the different types of defects
NASA Astrophysics Data System (ADS)
Wagner, Frank R.; Natoli, Jean-Yves; Beaudier, Alexandre; Commandré, Mireille
2017-11-01
Laser damage measurements with multiple pulses at constant fluence (S-on-1 measurements) are of high practical importance for the design and validation of high-power photonic instruments. For nanosecond lasers, it has long been recognized that single-pulse laser damage is linked to fabrication-related defects. Models describing the laser damage probability as the probability of encounter between the high-fluence region of the laser beam and the fabrication-related defects are thus widely used to analyze the measurements. Nanosecond S-on-1 tests often reveal the "fatigue effect", i.e. a decrease of the laser damage threshold with increasing pulse number. Most authors attribute this effect to cumulative material modifications caused by the first pulses. In this paper we discuss the different situations observed in nanosecond S-on-1 measurements of several materials at different wavelengths, focusing on the defects involved in the laser damage mechanism. These defects may be fabrication-related or laser-induced, stable or evolving, cumulative or short-lived. We show that the type of defect that dominates an S-on-1 experiment depends on the wavelength and the material under test, and we give examples from measurements of nonlinear optical crystals, fused silica and oxide-mixture coatings.
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to generate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two strategies are proposed to adapt the method to seismic inversion. The first is that we select and update local areas where the fit between synthetic and real seismic data is poor. The second is that we divide each seismic trace into several parts and obtain the optimal parameters for each part individually. Applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimates.
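A compact sketch of the FFT moving-average step used to generate the probability field is given below (a one-dimensional toy with an assumed Gaussian covariance model; the real workflow is multi-dimensional and tied to the geostatistical model): white noise is filtered with the square root of the covariance spectrum, and the resulting correlated Gaussian field is mapped to probabilities through the normal CDF.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, corr_len = 256, 15.0                               # grid nodes, correlation length in samples

# Target covariance: Gaussian model C(h) = exp(-(h / corr_len)^2), wrapped on the periodic grid.
lags = np.minimum(np.arange(n), n - np.arange(n))
cov = np.exp(-(lags / corr_len) ** 2)

spectrum = np.clip(np.fft.fft(cov).real, 0.0, None)   # eigenvalues of the circulant covariance
noise = rng.normal(size=n)

# FFT-MA: filter white noise with sqrt(spectrum); the result has (approximately) the target covariance.
field = np.fft.ifft(np.sqrt(spectrum) * np.fft.fft(noise)).real

# Map the correlated Gaussian field (unit variance, since C(0) = 1) to a probability field on [0, 1].
prob_field = norm.cdf(field)
print(prob_field[:8].round(3))
```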
NASA Astrophysics Data System (ADS)
Li, Chen; Niu, Cuijuan
2015-03-01
Sexual reproduction adversely affects the population growth of cyclically parthenogenetic animals. Density-dependent sexual reproduction by a superior competitor can mediate coexistence; however, the cost of sex may make an inferior competitor more vulnerable. To investigate the effect of sexual reproduction on the inferior competitor, we experimentally paired one Brachionus angularis clone in competition against each of three Brachionus calyciflorus clones. One of the B. calyciflorus clones showed a low propensity for sexual reproduction, while the other two showed high propensities. The results show that all B. calyciflorus clones were competitively excluded at the low food level. Increased food levels prolonged the persistence of the competition, but the clones did not show a clear pattern. Both the cumulative population density and resting egg production increased with food level. The cumulative population density decreased with mixis investment, while resting egg production increased with mixis investment. A trade-off between population growth and sexual reproduction was thus observed. The results indicate that although higher mixis investment resulted in a lower population density, it did not necessarily accelerate the exclusion of the inferior competitor. On the contrary, higher mixis investment promoted resting egg production before exclusion and thus promised a long-term benefit. In conclusion, our results suggest that mixis investment, to some extent, favored the excluded inferior competitor under fierce competition or other adverse conditions.
Symmetries, invariants and generating functions: higher-order statistics of biased tracers
NASA Astrophysics Data System (ADS)
Munshi, Dipak
2018-01-01
Gravitationally collapsed objects are known to be biased tracers of an underlying density contrast. Using symmetry arguments, generalised biasing schemes have recently been developed to relate the halo density contrast δh to the underlying density contrast δ, the velocity divergence θ and their higher-order derivatives. This is done by constructing invariants such as s, t, ψ and η. We show how the generating function formalism in Eulerian standard perturbation theory (SPT) can be used to show that many of the additional terms based on extended Galilean and Lifshitz symmetry actually do not contribute to the higher-order statistics of biased tracers. Other terms can also be drastically simplified, allowing us to write the vertices associated with δh in terms of the vertices of δ and θ, the higher-order derivatives and the bias coefficients. We also compute the cumulant correlators (CCs) for two different tracer populations. These perturbative results are valid for tree-level contributions but at arbitrary order. We also take into account the stochastic nature of the bias in our analysis. Extending previous results for a local polynomial model of bias, we express the one-point cumulants S_N and their two-point counterparts, the CCs C_pq, of biased tracers in terms of those of their underlying density contrast counterparts. As a by-product of our calculation we also discuss results obtained using approximations based on Lagrangian perturbation theory (LPT).
Schulz, Amy J.; Mentz, Graciela; Lachance, Laurie; Zenk, Shannon N.; Johnson, Jonetta; Stokes, Carmen; Mandell, Rebecca
2013-01-01
Objective To examine contributions of observed and perceived neighborhood characteristics in explaining associations between neighborhood poverty and cumulative biological risk (CBR) in an urban community. Methods Multilevel regression analyses were conducted using cross-sectional data from a probability sample survey (n=919), and observational and census data. Dependent variable: CBR. Independent variables: Neighborhood disorder, deterioration and characteristics; perceived neighborhood social environment, physical environment, and neighborhood environment. Covariates: Neighborhood and individual demographics, health-related behaviors. Results Observed and perceived indicators of neighborhood conditions were significantly associated with CBR, after accounting for both neighborhood and individual level socioeconomic indicators. Observed and perceived neighborhood environmental conditions mediated associations between neighborhood poverty and CBR. Conclusions Findings were consistent with the hypothesis that neighborhood conditions associated with economic divestment mediate associations between neighborhood poverty and CBR. PMID:24100238
Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.
Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M
2002-10-01
The Monte Carlo transport code MCNP has been applied to simulate the dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose ratios, and efficiencies have been simulated as functions of product density. The simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The resulting system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose mapping exercises for gamma irradiators.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to handle heterogeneous thematic classes that cannot be adequately fitted by a unimodal Wishart distribution. To make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
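A hedged sketch of the EM iteration for such a Wishart mixture is given below (the GΓD-based initial partition described above is replaced by a random initialization, and the data are toy covariance matrices; only the Σ-dependent part of the complex Wishart log-density is needed because class-independent terms cancel in the posterior class probabilities):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n_looks, n_classes, n_pix = 3, 8, 4, 500

# Toy multilook sample covariance matrices (Hermitian positive definite), one per pixel.
def toy_cov(rng):
    g = rng.normal(size=(p, n_looks)) + 1j * rng.normal(size=(p, n_looks))
    return (g @ g.conj().T) / n_looks
covs = np.array([toy_cov(rng) for _ in range(n_pix)])

# Random soft initialization of the posterior class probabilities (responsibilities).
resp = rng.dirichlet(np.ones(n_classes), size=n_pix)

for _ in range(10):
    # M-step: class priors and class covariance matrices from responsibility-weighted averages.
    weights = resp.sum(axis=0)
    priors = weights / n_pix
    sigmas = np.einsum("ik,ipq->kpq", resp, covs) / weights[:, None, None]

    # E-step: log posterior up to an additive constant, then normalize per pixel.
    log_r = np.empty((n_pix, n_classes))
    for k in range(n_classes):
        inv_s = np.linalg.inv(sigmas[k])
        _, logdet = np.linalg.slogdet(sigmas[k])
        tr = np.einsum("pq,iqp->i", inv_s, covs).real        # tr(Sigma_k^-1 C_i) for all pixels
        log_r[:, k] = np.log(priors[k]) - n_looks * (logdet + tr)
    log_r -= log_r.max(axis=1, keepdims=True)
    resp = np.exp(log_r)
    resp /= resp.sum(axis=1, keepdims=True)

labels = resp.argmax(axis=1)                                 # unsupervised class map
print(np.bincount(labels, minlength=n_classes))
```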
Generalized Arcsine Laws for Fractional Brownian Motion
NASA Astrophysics Data System (ADS)
Sadhu, Tridib; Delorme, Mathieu; Wiese, Kay Jörg
2018-01-01
The three arcsine laws for Brownian motion are a cornerstone of extreme-value statistics. For a Brownian motion B_t starting from the origin and evolving during time T, one considers the following three observables: (i) the duration t_+ for which the process is positive, (ii) the time t_last at which the process last visits the origin, and (iii) the time t_max at which it achieves its maximum (or minimum). All three observables have the same cumulative probability distribution, expressed as an arcsine function, hence the name arcsine laws. We show how these laws change for fractional Brownian motion X_t, a non-Markovian Gaussian process indexed by the Hurst exponent H, which generalizes standard Brownian motion (H = 1/2). We obtain the three probabilities using a perturbative expansion in ε = H - 1/2. While all three probabilities are different, this distinction can only be made at second order in ε. Our results are confirmed to high precision by extensive numerical simulations.
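For reference, the classical H = 1/2 laws that the ε-expansion perturbs around take the well-known form below (the fractional-H corrections are the subject of the paper itself):

```latex
% Levy's arcsine law, common to t_+, t_last and t_max for standard Brownian motion:
\[
  \Pr\!\left(\tfrac{t}{T} \le x\right) \;=\; \frac{2}{\pi}\,\arcsin\sqrt{x},
  \qquad
  p(x) \;=\; \frac{1}{\pi\sqrt{x(1-x)}}, \qquad 0 < x < 1 .
\]
```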
Early lunar petrogenesis, oceanic and extraoceanic
NASA Technical Reports Server (NTRS)
Warren, P. H.; Wasson, J. T.
1980-01-01
An attempt is made to ascertain which (if any) pristine nonmare rocks, other than KREEPy ones, are not cumulates from the magma ocean. It is noted that the only pristine rocks having bulk densities low enough to have formed by floating above the magma ocean are the ferroan anorthosites, which are easily recognizable as a discrete subset of pristine rocks in general, on the basis of mineral composition relationships. The other class of pristine nonmare rocks, the Mg-rich rocks, did not form from the same magma that produced the ferroan anorthosites. It is suggested that they were formed in layered noritic-troctolitic plutons. These plutons, it is noted, were apparently intruded at, or slightly above, the boundary between the floated ferroan anorthosite crust and the underlying complementary mafic cumulates. It is thought that the parental magmas of the plutons may have arisen by partial melting of either deep mafic cumulates from the magma ocean or a still deeper, undifferentiated primordial layer that was not molten during the magma ocean period.
Cameron, R.D.; Smith, W.T.; White, R.G.; Griffith, B.
2005-01-01
We synthesize findings from cooperative research on effects of petroleum development on caribou (Rangifer tarandus granti) of the Central Arctic Herd (CAH). The CAH increased from about 6000 animals in 1978 to 23 000 in 1992, declined to 18 000 by 1995, and again increased to 27 000 by 2000. Net calf production was consistent with changes in herd size. In the Kuparuk Development Area (KDA), west of Prudhoe Bay, abundance of calving caribou was less than expected within 4 km of roads and declined exponentially with road density. With increasing infrastructure, high-density calving shifted from the KDA to inland areas with lower forage biomass. During July and early August, caribou were relatively unsuccessful in crossing road/pipeline corridors in the KDA, particularly when in large, insect-harassed aggregations; and both abundance and movements of females were lower in the oil field complex at Prudhoe Bay than in other areas along the Arctic coast. Female caribou exposed to petroleum development west of the Sagavanirktok River may have consumed less forage during the calving period and experienced lower energy balance during the midsummer insect season than those under disturbance-free conditions east of the river. The probable consequences were poorer body condition at breeding and lower parturition rates for western females than for eastern females (e.g., 1988-94: 64% vs. 83% parturient, respectively; p = 0.003), which depressed the productivity of the herd. Assessments of cumulative effects of petroleum development on caribou must incorporate the complex interactions with a variable natural environment. © The Arctic Institute of North America.
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed, along with some of its problems and the conditions under which it fails. Then, in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems of the maximum entropy method of moments. One obtains posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
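A minimal sketch of the maximum entropy method of moments reviewed here (the sampled data, support and number of moments are illustrative assumptions): the Lagrange multipliers of p(x) ∝ exp(-Σ_k λ_k x^k) are found by minimizing the convex dual ln Z(λ) + λ·μ, whose gradient vanishes when the fitted moments match the targets.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
data = np.clip(rng.normal(0.4, 0.15, size=2000), 0.0, 1.0)        # toy intensities on [0, 1]
n_mom = 4
mu = np.array([np.mean(data ** k) for k in range(1, n_mom + 1)])  # target sample moments

x = np.linspace(0.0, 1.0, 1001)                                   # quadrature grid on the support
dx = x[1] - x[0]
powers = np.vstack([x ** k for k in range(1, n_mom + 1)])

def dual(lam):
    # ln Z(lam) + lam . mu, with Z = integral of exp(-sum_k lam_k x^k) over the support
    z = np.sum(np.exp(-lam @ powers)) * dx
    return np.log(z) + lam @ mu

lam = minimize(dual, x0=np.zeros(n_mom), method="BFGS").x
p = np.exp(-lam @ powers)
p /= np.sum(p) * dx                                               # normalized maxent density estimate

fitted = np.array([np.sum(p * x ** k) * dx for k in range(1, n_mom + 1)])
print("target moments:", mu.round(4))
print("fitted moments:", fitted.round(4))
```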
Car accidents induced by a bottleneck
NASA Astrophysics Data System (ADS)
Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid
2017-12-01
Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents occurring (Pac) at the entrance of the merging section of two roads (i.e. the junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, whereas it increases road safety at high densities. The phase diagram of the system is also constructed.
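The flavour of such cellular-automaton accident studies can be conveyed with a much simpler sketch than the paper's merging geometry: a single-lane Nagel-Schreckenberg ring in which "dangerous situations" are counted with one commonly used criterion (a close follower behind a leader that suddenly stops); the criterion and all parameters below are assumptions for illustration only, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(5)
L, density, v_max, p_slow, steps = 500, 0.25, 5, 0.3, 2000
n_cars = int(L * density)

pos = np.sort(rng.choice(L, size=n_cars, replace=False))
vel = rng.integers(0, v_max + 1, size=n_cars)
dangerous = 0

for _ in range(steps):
    gaps = (np.roll(pos, -1) - pos - 1) % L          # empty cells to the car ahead
    v_old = vel.copy()
    vel = np.minimum(vel + 1, v_max)                 # NaSch step 1: accelerate
    vel = np.minimum(vel, gaps)                      # NaSch step 2: avoid the car ahead
    slow = rng.random(n_cars) < p_slow
    vel[slow] = np.maximum(vel[slow] - 1, 0)         # NaSch step 3: random slowdown
    # Assumed danger criterion: follower close enough to reach the leader, and the
    # leader, which was moving, comes to a sudden stop.
    danger = (gaps <= v_max) & (np.roll(v_old, -1) > 0) & (np.roll(vel, -1) == 0)
    dangerous += danger.sum()
    pos = (pos + vel) % L                            # NaSch step 4: move

print("estimated per-car, per-step probability of a dangerous situation:",
      dangerous / (n_cars * steps))
```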
Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination
Sinkkonen, Aki
2005-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163
Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination
Sinkkonen, Aki
2006-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Myers, Samuel M.; Modine, Normand A.
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A
2013-02-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
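The core modelling idea can be sketched compactly (this is not the authors' full SCR likelihood): compute a least-cost distance on a resistance surface and use it in place of the Euclidean distance inside a half-normal encounter probability. The resistance raster, trap and activity-centre locations, and the parameters p0 and sigma below are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(6)
nrow, ncol = 20, 20
resistance = 1.0 + 4.0 * rng.random((nrow, ncol))     # cost of moving through each cell

def node(r, c):
    return r * ncol + c

# Build a 4-connected grid graph; edge cost = mean resistance of the two cells.
graph = lil_matrix((nrow * ncol, nrow * ncol))
for r in range(nrow):
    for c in range(ncol):
        for dr, dc in ((1, 0), (0, 1)):
            rr, cc = r + dr, c + dc
            if rr < nrow and cc < ncol:
                w = 0.5 * (resistance[r, c] + resistance[rr, cc])
                graph[node(r, c), node(rr, cc)] = w
                graph[node(rr, cc), node(r, c)] = w

trap = (2, 3)
activity_centre = (15, 12)
dists = dijkstra(graph.tocsr(), indices=node(*trap))
d_lcp = dists[node(*activity_centre)]                  # least-cost distance (resistance-weighted units)

# Half-normal encounter probability, with the least-cost distance replacing the Euclidean one.
p0, sigma = 0.2, 25.0
d_euclid = np.hypot(trap[0] - activity_centre[0], trap[1] - activity_centre[1])
p_euclid = p0 * np.exp(-d_euclid ** 2 / (2 * sigma ** 2))
p_ecol = p0 * np.exp(-d_lcp ** 2 / (2 * sigma ** 2))
print(f"least-cost distance = {d_lcp:.1f}, p(Euclidean) = {p_euclid:.4f}, p(ecological) = {p_ecol:.4f}")
```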
Testing hypotheses of earthquake occurrence
NASA Astrophysics Data System (ADS)
Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.
2003-12-01
We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we will estimate by simulations. Each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results would be archived and posted on the RELM web site. The same methods can be applied to any region with adequate monitoring and sufficient earthquakes. If fewer than ten events are forecasted, the likelihood tests may not give definitive results. The tests do force certain requirements on the forecast models. Because the tests are based on absolute rates, stress models must be explicit about how stress increments affect past seismicity rates. Aftershocks of triggered events must be accounted for. Furthermore, the tests are sensitive to magnitude, so forecast models must specify the magnitude distribution of triggered events. Models should account for probable errors in magnitude and location by appropriate smoothing of the probabilities, as the tests will be "cold-hearted": near misses won't count.
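A toy version of the scoring machinery is sketched below (synthetic forecasts and counts, not RELM data): each gridded forecast is scored with a Poisson log-likelihood over bins, and the distribution of the score difference under each hypothesis is simulated so that the observed difference can be located within it, which is the ingredient from which the alpha and beta described above are computed.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
n_bins = 400
rate_a = rng.gamma(1.0, 0.05, size=n_bins)         # forecast A: expected events per bin
rate_b = rate_a * rng.lognormal(0.0, 0.5, n_bins)  # forecast B: a perturbed alternative

observed = rng.poisson(rate_a)                     # pretend nature follows forecast A

def log_likelihood(rate, counts):
    return poisson.logpmf(counts, rate).sum()

score_obs = log_likelihood(rate_a, observed) - log_likelihood(rate_b, observed)

# Simulate catalogues under each hypothesis and build the score-difference distributions.
n_sim = 2000
sims_a = rng.poisson(rate_a, size=(n_sim, n_bins))
sims_b = rng.poisson(rate_b, size=(n_sim, n_bins))
diff_a = poisson.logpmf(sims_a, rate_a).sum(1) - poisson.logpmf(sims_a, rate_b).sum(1)
diff_b = poisson.logpmf(sims_b, rate_a).sum(1) - poisson.logpmf(sims_b, rate_b).sum(1)

# Where the observed score falls within each simulated distribution is what the
# rejection probabilities are derived from.
print(f"observed score difference (A minus B): {score_obs:.1f}")
print(f"quantile under A: {np.mean(diff_a <= score_obs):.3f}, "
      f"quantile under B: {np.mean(diff_b <= score_obs):.3f}")
```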
Vasilakis, Dimitris P; Whitfield, D Philip; Kati, Vassiliki
2017-01-01
Wind farm development can combat climate change but may also threaten bird populations' persistence through collision with wind turbine blades if such development is improperly planned strategically and cumulatively. Such improper planning may often occur. Numerous wind farms are planned in a region hosting the only cinereous vulture population in south-eastern Europe. We combined range use modelling and a Collision Risk Model (CRM) to predict the cumulative collision mortality for cinereous vulture under all operating and proposed wind farms. Four different vulture avoidance rates were considered in the CRM. Cumulative collision mortality was expected to be eight to ten times greater in the future (proposed and operating wind farms) than currently (operating wind farms), equivalent to 44% of the current population (103 individuals) if all proposals are authorized (2744 MW). Even under the most optimistic scenario whereby authorized proposals will not collectively exceed the national target for wind harnessing in the study area (960 MW), cumulative collision mortality would still be high (17% of current population) and likely lead to population extinction. Under any wind farm proposal scenario, over 92% of expected deaths would occur in the core area of the population, further implying inadequate spatial planning and implementation of relevant European legislation with scant regard for governmental obligations to protect key species. On the basis of a sensitivity map we derive a spatially explicit solution that could meet the national target of wind harnessing with a minimum conservation cost of less than 1% population loss providing that the population mortality (5.2%) caused by the operating wind farms in the core area would be totally mitigated. Under other scenarios, the vulture population would probably be at serious risk of extinction. Our 'win-win' approach is appropriate to other potential conflicts where wind farms may cumulatively threaten wildlife populations.
Whitfield, D. Philip; Kati, Vassiliki
2017-01-01
Wind farm development can combat climate change but may also threaten bird populations’ persistence through collision with wind turbine blades if such development is improperly planned strategically and cumulatively. Such improper planning may often occur. Numerous wind farms are planned in a region hosting the only cinereous vulture population in south-eastern Europe. We combined range use modelling and a Collision Risk Model (CRM) to predict the cumulative collision mortality for cinereous vulture under all operating and proposed wind farms. Four different vulture avoidance rates were considered in the CRM. Cumulative collision mortality was expected to be eight to ten times greater in the future (proposed and operating wind farms) than currently (operating wind farms), equivalent to 44% of the current population (103 individuals) if all proposals are authorized (2744 MW). Even under the most optimistic scenario whereby authorized proposals will not collectively exceed the national target for wind harnessing in the study area (960 MW), cumulative collision mortality would still be high (17% of current population) and likely lead to population extinction. Under any wind farm proposal scenario, over 92% of expected deaths would occur in the core area of the population, further implying inadequate spatial planning and implementation of relevant European legislation with scant regard for governmental obligations to protect key species. On the basis of a sensitivity map we derive a spatially explicit solution that could meet the national target of wind harnessing with a minimum conservation cost of less than 1% population loss providing that the population mortality (5.2%) caused by the operating wind farms in the core area would be totally mitigated. Under other scenarios, the vulture population would probably be at serious risk of extinction. Our ‘win-win’ approach is appropriate to other potential conflicts where wind farms may cumulatively threaten wildlife populations. PMID:28231316
ERIC Educational Resources Information Center
Rispens, Judith; Baker, Anne; Duinmeijer, Iris
2015-01-01
Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…
Long-Term Trends in Glaucoma-Related Blindness in Olmsted County, Minnesota
Malihi, Mehrdad; Moura Filho, Edney R.; Hodge, David O.; Sit, Arthur J.
2013-01-01
Objective To determine the longitudinal trends in the probability of blindness due to open-angle glaucoma (OAG) in Olmsted County, Minnesota from 1965 to 2009. Design Retrospective, population-based cohort study. Participants All residents of Olmsted County, Minnesota (40 years of age and over) who were diagnosed with OAG between January 1, 1965 and December 31, 2000. Methods All available medical records of every incident case of OAG were reviewed until December 31, 2009 to identify progression to blindness, defined as visual acuity of 20/200 or worse and/or visual field constriction to 20° or less. Kaplan–Meier analysis was used to estimate the cumulative probability of glaucoma-related blindness. Population incidence of blindness within 10 years of diagnosis was calculated using United States Census data. Rates for subjects diagnosed in the period 1965–1980 were compared with rates for subjects diagnosed in the period 1981–2000 using log-rank tests and Poisson regression models. Main Outcome Measures Cumulative probability of OAG-related blindness, and population incidence of blindness within 10 years of diagnosis. Results Probability of glaucoma-related blindness in at least one eye at 20 years decreased from 25.8% (95% confidence interval [CI]: 18.5–32.5) for subjects diagnosed in 1965–1980 to 13.5% (95% CI: 8.8–17.9) for subjects diagnosed in 1981–2000 (P=0.01). The population incidence of blindness within 10 years of diagnosis decreased from 8.7 per 100,000 (95% CI: 5.9–11.5) for subjects diagnosed in 1965–1980 to 5.5 per 100,000 (95% CI: 3.9–7.2) for subjects diagnosed in 1981–2000 (P=0.02). Higher age at diagnosis was associated with increased risk of progression to blindness (P<0.001). Conclusions The 20-year probability and the population incidence of blindness due to OAG in at least one eye have decreased over the 45-year period from 1965 to 2009. However, a significant proportion of patients still progress to blindness despite recent diagnostic and therapeutic advancements. PMID:24823760
Blonde, Lawrence; Meneghini, Luigi; Peng, Xuejun Victor; Boss, Anders; Rhee, Kyu; Shaunik, Alka; Kumar, Supriya; Balodi, Sidhartha; Brulle-Wohlhueter, Claire; McCrimmon, Rory J
2018-06-01
Basal insulin (BI) plays an important role in treating type 2 diabetes (T2D), especially when oral antidiabetic (OAD) medications are insufficient for glycemic control. We conducted a retrospective, observational study using electronic medical records (EMR) data from the IBM ® Explorys database to evaluate the probability of achieving glycemic control over 24 months after BI initiation in patients with T2D in the USA. A cohort of 6597 patients with T2D who started BI following OAD(s) and had at least one valid glycated hemoglobin (HbA1c) result recorded both within 90 days before and 720 days after BI initiation were selected. We estimated the changes from baseline in HbA1c every 6 months, the quarterly conditional probabilities of reaching HbA1c < 7% if a patient had not achieved glycemic control prior to each quarter (Q), and the cumulative probability of reaching glycemic control over 24 months. Our cohort was representative of patients with T2D who initiated BI from OADs in the USA. The average HbA1c was 9.1% at BI initiation, and decreased robustly (1.5%) in the first 6 months after initiation with no further reductions thereafter. The conditional probability of reaching glycemic control decreased rapidly in the first year (26.6% in Q2; 17.6% in Q3; 8.6% in Q4), and then remained low (≤ 6.1%) for each quarter in the second year. Cumulatively, about 38% of patients reached HbA1c < 7% in the first year; only approximately 8% more did so in the second year. Our study of real-world data from a large US EMR database suggested that among patients with T2D who initiated BI after OADs, the likelihood of reaching glycemic control diminished over time, and remained low from 12 months onwards. Additional treatment options should be considered if patients do not reach glycemic control within 12 months of BI initiation. Sanofi Corporation.
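In the idealized case with no dropout or censoring, quarterly conditional probabilities combine into a cumulative probability as in the small sketch below. The abstract does not report the Q1 value, so the first entry is a placeholder assumption, and because the real cohort involves censoring the naive product does not exactly reproduce the reported 38% figure.

```python
# Quarterly conditional probabilities of reaching HbA1c < 7%, given control was not
# reached earlier: Q1 is an assumed placeholder; Q2-Q4 are the values quoted above.
q_conditional = [0.15, 0.266, 0.176, 0.086]

not_controlled = 1.0
for p in q_conditional:
    not_controlled *= (1.0 - p)        # probability of still not being at goal
print(f"idealized cumulative probability of control over year 1 ~ {1.0 - not_controlled:.2f}")
```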
Codes, Liana; de Souza, Ygor Gomes; D'Oliveira, Ricardo Azevedo Cruz; Bastos, Jorge Luiz Andrade; Bittencourt, Paulo Lisboa
2018-04-24
To analyze whether fluid overload is an independent risk factor for adverse outcomes after liver transplantation (LT). One hundred and twenty-one patients who underwent LT were retrospectively evaluated. Data regarding perioperative and postoperative variables previously associated with adverse outcomes after LT were reviewed. Cumulative fluid balance (FB) in the first 12 h and 4 d after surgery was compared with major adverse outcomes after LT. Most of the patients were managed with a liberal approach to fluid administration, with a mean cumulative FB over 5 L and 10 L, respectively, in the first 12 h and 4 d after LT. Cumulative FB over 4 d was independently associated with the occurrence of AKI and with the requirement for renal replacement therapy (RRT) (OR = 2.3; 95%CI: 1.37-3.86, P = 0.02 and OR = 2.89; 95%CI: 1.52-5.49, P = 0.001, respectively). Other variables associated with AKI and RRT on multivariate analysis were, respectively, male sex and Acute Physiology and Chronic Health Evaluation (APACHE II) score, and sepsis or septic shock. Mortality was independently related to AST and APACHE II levels (OR = 2.35; 95%CI: 1.1-5.05, P = 0.02 and 2.63; 95%CI: 1.0-6.87, P = 0.04, respectively), probably reflecting the degree of graft dysfunction and the severity of the early postoperative course of LT. No effect of FB on mortality after LT was observed. A cumulative positive FB over 4 d after LT is independently associated with the development of AKI and the requirement for RRT. Survival was not independently related to FB, but to surrogate markers of graft dysfunction and severity of the postoperative course of LT.
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e. the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
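The binomial arithmetic behind the 29-flaw point estimate can be made explicit (a worked sketch of the standard 29-of-29 demonstration, not of the paper's optimization procedure): detecting all 29 flaws is required, so the probability of passing is p**29 for a technique with true POD p, and a flaw size with p = 0.90 passes only about 4.7% of the time, which is what supports the 90/95 claim.

```python
n_flaws = 29

# Probability of passing the 29-of-29 demonstration at 90% true POD: ~0.047 < 0.05.
print(f"P(pass 29/29 | POD = 0.90) = {0.90 ** n_flaws:.3f}")

# PPD for a few candidate true POD values (illustrative):
for p in (0.90, 0.95, 0.98, 0.99):
    print(f"true POD {p:.2f} -> probability of passing the demonstration {p ** n_flaws:.3f}")
```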
Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words
Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.
2012-01-01
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
Fractional Brownian motion with a reflecting wall
NASA Astrophysics Data System (ADS)
Wada, Alexander H. O.; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior
NASA Astrophysics Data System (ADS)
Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah
2018-01-01
The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.
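To see how γ feeds into a global model, the sketch below evaluates a commonly used wall-loss rate expression that puts a diffusion time and a surface-collision time in series; the use of this expression and every number in it (geometry, temperature, diffusion coefficient) are assumptions for illustration, not the paper's inputs.

```python
import numpy as np

kB = 1.380649e-23          # J/K
amu = 1.660539e-27         # kg

def wall_loss_rate(gamma, D, Lambda0, V, A, T, mass_amu):
    """First-order wall-loss rate coefficient (1/s) for a neutral species,
    combining a diffusion time and a surface-loss time in series."""
    v_mean = np.sqrt(8 * kB * T / (np.pi * mass_amu * amu))   # mean thermal speed
    tau_diff = Lambda0 ** 2 / D                               # diffusion time to the wall
    tau_surf = 2 * V * (2 - gamma) / (A * v_mean * gamma)     # surface-loss time
    return 1.0 / (tau_diff + tau_surf)

# Illustrative plasma channel: 1 mm x 1 mm cross-section, 30 mm long.
V = 1e-3 * 1e-3 * 30e-3                # m^3
A = 4 * 1e-3 * 30e-3                   # m^2
Lambda0 = 1e-3 / np.pi                 # rough effective diffusion length for a 1 mm gap
D = 2.0e-4                             # m^2/s, assumed H-atom diffusion coefficient at 1 atm
for gamma in (1.0, 0.1, 0.01, 0.001):
    k = wall_loss_rate(gamma, D, Lambda0, V, A, 345.0, 1.0)   # H atoms, assumed 345 K
    print(f"gamma = {gamma:5.3f} -> k_wall ~ {k:.1f} 1/s")
```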
Effects of heterogeneous traffic with speed limit zone on the car accidents
NASA Astrophysics Data System (ADS)
Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.
2016-06-01
Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of traffic heterogeneity with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ in heterogeneous traffic has an important effect, particularly in the mixed-velocity case. In the deterministic case, the SLZ leads to the appearance of car accidents even at low densities; in this region Pac increases with increasing fraction of fast vehicles (Ff). In the nondeterministic case, the SLZ weakens the effect of the braking probability Pb at low densities. Furthermore, the impact of multiple SLZs on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without an SLZ (n = 0) is lower than Pac with multiple SLZs (n > 0). However, the existence of multiple SLZs in the road decreases the risk of collision in the congested phase.
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function of the form [p(ρ(x)|PROT) p_PROT(x) + p(ρ(x)|SOLV) p_SOLV(x) + p(ρ(x)|H) p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
Browder, Joan A.; Restrepo, V.R.; Rice, J.K.; Robblee, M.B.; Zein-Eldin, Z.
1999-01-01
Two modeling approaches were used to explore the basis for variation in recruitment of pink shrimp, Farfantepenaeus duorarum, to the Tortugas fishing grounds. Emphasis was on development and juvenile densities on the nursery grounds. An exploratory simulation modeling exercise demonstrated that large year-to-year variations in recruitment contributions to the Tortugas pink shrimp fishery may occur on some nursery grounds, and production may differ considerably among nursery grounds within the same year, simply on the basis of differences in temperature and salinity. We used a growth and survival model to simulate cumulative harvests from a July-centered cohort of early-settlement-stage postlarvae from two parts of Florida Bay (western Florida Bay and northcentral Florida Bay), using historic temperature and salinity data from these areas. Very large year-to-year differences in simulated cumulative harvests were found for recruits from Whipray Basin. Year-to-year differences in simulated harvests of recruits from Johnson Key Basin were much smaller. In a complementary activity, generalized linear and additive models and intermittent, historic density records were used to develop an uninterrupted multi-year time series of monthly density estimates for juvenile pink shrimp in the Johnson Key Basin. The developed data series was based on relationships of density with environmental variables. The strongest relationship was with sea-surface temperature. Three other environmental variables (rainfall, water level at Everglades National Park Well P35, and mean wind speed) also contributed significantly to explaining variation in juvenile densities. Results of the simulation model and two of the three statistical models yielded similar interannual patterns for Johnson Key Basin. While it is not possible to say that one result validates the other, the concordance of the annual patterns from the two models is supportive of both approaches.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
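As a sketch of how such detection curves can be built, the following fits a logistic regression of detection success on sampling effort and target density using simulated survey outcomes. All numbers and the two-predictor form are assumptions for illustration, not the study's fitted model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Simulated survey outcomes (assumed data): detection probability rises
    # with search effort and with target density.
    n = 400
    effort = rng.uniform(1, 30, n)          # e.g. minutes searched or quadrats sampled
    density = rng.uniform(0.1, 5.0, n)      # targets per square metre
    logit = -3.0 + 0.12 * effort + 0.9 * density
    detected = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = np.column_stack([effort, density])
    model = LogisticRegression().fit(X, detected)

    # Detection curve: probability of detection vs effort at a fixed low density
    grid = np.column_stack([np.linspace(1, 30, 7), np.full(7, 0.5)])
    for e, p in zip(grid[:, 0], model.predict_proba(grid)[:, 1]):
        print(f"effort = {e:5.1f}  ->  P(detect) = {p:.2f}")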
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higginson, Drew P.
Here, we describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10^-3 to 0.3–0.7; the upper limit corresponds to Coulomb logarithm of 20–2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.
Cumulative effects of mothers' risk and promotive factors on daughters' disruptive behavior.
van der Molen, Elsa; Hipwell, Alison E; Vermeiren, Robert; Loeber, Rolf
2012-07-01
Little is known about the ways in which the accumulation of maternal factors increases or reduces risk for girls' disruptive behavior during preadolescence. In the current study, maternal risk and promotive factors and the severity of girls' disruptive behavior were assessed annually among girls ages 7-12 in an urban community sample (N = 2043). Maternal risk and promotive factors were operative at different time points in girls' development. Maternal warmth explained variance in girls' disruptive behavior, even after controlling for maternal risk factors and relevant child and neighborhood factors. In addition, findings supported the cumulative hypothesis that the number of risk factors increased the probability of girls' disruptive behavior disorder (DBD), while the number of promotive factors decreased this probability. Daughters of mothers with a history of Conduct Disorder (CD) were exposed to more risk factors and fewer promotive factors compared to daughters of mothers without prior CD. The identification of malleable maternal factors that can serve as targets for intervention has important implications for intergenerational intervention. Cumulative effects show that the focus of prevention efforts should not be on single factors, but on multiple factors associated with girls' disruptive behavior.
Hershberger, P.K.; Gregg, J.; Pacheco, C.; Winton, J.; Richard, J.; Traxler, G.
2007-01-01
Pacific herring were susceptible to waterborne challenge with viral haemorrhagic septicaemia virus (VHSV) throughout their early life history stages, with significantly greater cumulative mortalities occurring among VHSV-exposed groups of 9-, 44-, 54- and 76-day-old larvae than among respective control groups. Similarly, among 89-day-1-year-old and 1+year old post-metamorphosed juveniles, cumulative mortality was significantly greater in VHSV-challenged groups than in respective control groups. Larval exposure to VHSV conferred partial protection to the survivors after their metamorphosis to juveniles as shown by significantly less cumulative mortalities among juvenile groups that survived a VHS epidemic as larvae than among groups that were previously naïve to VHSV. Magnitude of the protection, measured as relative per cent survival, was a direct function of larval age at first exposure and was probably a reflection of gradual developmental onset of immunocompetence. These results indicate the potential for easily overlooked VHS epizootics among wild larvae in regions where the virus is endemic and emphasize the importance of early life history stages of marine fish in influencing the ecological disease processes. © 2007 The Authors.
Cumulative Effects of Mothers’ Risk and Promotive Factors on Daughters’ Disruptive Behavior
Hipwell, Alison E.; Vermeiren, Robert; Loeber, Rolf
2012-01-01
Little is known about the ways in which the accumulation of maternal factors increases or reduces risk for girls’ disruptive behavior during preadolescence. In the current study, maternal risk and promotive factors and the severity of girls’ disruptive behavior were assessed annually among girls ages 7–12 in an urban community sample (N=2043). Maternal risk and promotive factors were operative at different time points in girls’ development. Maternal warmth explained variance in girls’ disruptive behavior, even after controlling for maternal risk factors and relevant child and neighborhood factors. In addition, findings supported the cumulative hypothesis that the number of risk factors increased the probability of girls’ disruptive behavior disorder (DBD), while the number of promotive factors decreased this probability. Daughters of mothers with a history of Conduct Disorder (CD) were exposed to more risk factors and fewer promotive factors compared to daughters of mothers without prior CD. The identification of malleable maternal factors that can serve as targets for intervention has important implications for intergenerational intervention. Cumulative effects show that the focus of prevention efforts should not be on single factors, but on multiple factors associated with girls’ disruptive behavior. PMID:22127641
Hunter-Gatherer Inter-Band Interaction Rates: Implications for Cumulative Culture
Hill, Kim R.; Wood, Brian M.; Baggio, Jacopo; Hurtado, A. Magdalena; Boyd, Robert T.
2014-01-01
Our species exhibits spectacular success due to cumulative culture. While cognitive evolution of social learning mechanisms may be partially responsible for adaptive human culture, features of early human social structure may also play a role by increasing the number of potential models from which to learn innovations. We present interview data on interactions between same-sex adult dyads of Ache and Hadza hunter-gatherers living in multiple distinct residential bands (20 Ache bands; 42 Hadza bands; 1201 dyads) throughout a tribal home range. Results show high probabilities (5%–29% per year) of cultural and cooperative interactions between randomly chosen adults. Multiple regression suggests that ritual relationships increase interaction rates more than kinship, and that affinal kin interact more often than dyads with no relationship. These may be important features of human sociality. Finally, yearly interaction rates along with survival data allow us to estimate expected lifetime partners for a variety of social activities, and compare those to chimpanzees. Hadza and Ache men are estimated to observe over 300 men making tools in a lifetime, whereas male chimpanzees interact with only about 20 other males in a lifetime. High intergroup interaction rates in ancestral humans may have promoted the evolution of cumulative culture. PMID:25047714
Higginson, Drew P.
2017-08-12
Here, we describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10^-3 to 0.3–0.7; the upper limit corresponds to Coulomb logarithm of 20–2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.
Hunter-gatherer inter-band interaction rates: implications for cumulative culture.
Hill, Kim R; Wood, Brian M; Baggio, Jacopo; Hurtado, A Magdalena; Boyd, Robert T
2014-01-01
Our species exhibits spectacular success due to cumulative culture. While cognitive evolution of social learning mechanisms may be partially responsible for adaptive human culture, features of early human social structure may also play a role by increasing the number of potential models from which to learn innovations. We present interview data on interactions between same-sex adult dyads of Ache and Hadza hunter-gatherers living in multiple distinct residential bands (20 Ache bands; 42 Hadza bands; 1201 dyads) throughout a tribal home range. Results show high probabilities (5%-29% per year) of cultural and cooperative interactions between randomly chosen adults. Multiple regression suggests that ritual relationships increase interaction rates more than kinship, and that affinal kin interact more often than dyads with no relationship. These may be important features of human sociality. Finally, yearly interaction rates along with survival data allow us to estimate expected lifetime partners for a variety of social activities, and compare those to chimpanzees. Hadza and Ache men are estimated to observe over 300 men making tools in a lifetime, whereas male chimpanzees interact with only about 20 other males in a lifetime. High intergroup interaction rates in ancestral humans may have promoted the evolution of cumulative culture.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, J.; Gardner, B.; Lucherini, M.
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, Juan; Gardner, Beth; Lucherini, Mauro
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
Limited salvage logging effects on forest regeneration after moderate-severity windthrow.
Peterson, Chris J; Leach, Andrea D
2008-03-01
Recent conceptual advances address forest response to multiple disturbances within a brief time period, providing an ideal framework for examining the consequences of natural disturbances followed by anthropogenic management activities. The combination of two or more disturbances in a short period may produce "ecological surprises," and models predict a threshold of cumulative disturbance severity above which forest composition will be drastically altered and regeneration may be impaired. Salvage logging (the harvesting of timber after natural disturbances; also called "salvaging" or "sanitary logging") is common, but there have been no tests of the manner in which salvaging after natural wind disturbance affects woody plant regeneration. Here we present findings from three years after a moderate-severity wind disturbance in west-central Tennessee, USA. We compare two unsalvaged sites and two sites that had intermediate-intensity salvaging. Our approach demonstrates the calculation of cumulative severity measures, which combine natural windthrow severity and anthropogenic tree cutting and removal, on a plot-by-plot basis. Seedling/sapling density and species richness were not influenced by cumulative disturbance severity, but species diversity showed a marginal increase with increasing cumulative severity. The amount of compositional change (from predisturbance trees to post-disturbance seedlings/saplings) increased significantly with cumulative severity of disturbance but showed no evidence of thresholds within the severity range examined. Overall, few deleterious changes were evident in these sites. Moderate-severity natural disturbances followed by moderate-intensity salvaging may have little detrimental effect on forest regeneration and diversity in these systems; the ecological surprises and threshold compositional change are more likely after combinations of natural and anthropogenic disturbances that have a much greater cumulative severity.
Endo, S; Kimura, S; Takatsuji, T; Nanasawa, K; Imanaka, T; Shizuma, K
2012-09-01
Soil sampling was carried out at an early stage of the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident. Samples were taken from areas around FDNPP, at four locations northwest of FDNPP, at four schools and in four cities, including Fukushima City. Radioactive contaminants in soil samples were identified and measured by using a Ge detector and included 129mTe, 129Te, 131I, 132Te, 132I, 134Cs, 136Cs, 137Cs, 140Ba and 140La. The highest soil depositions were measured to the northwest of FDNPP. From this soil deposition data, variations in dose rates over time and the cumulative external doses at the locations for 3 months and 1 y after deposition were estimated. At locations northwest of FDNPP, the external dose rate at 3 months after deposition was 4.8-98 μSv/h and the cumulative dose for 1 y was 51 to 1.0 × 10^3 mSv; the highest values were at Futaba Yamada. At the four schools, which were used as evacuation shelters, and in the four urban cities, the external dose rate at 3 months after deposition ranged from 0.03 to 3.8 μSv/h and the cumulative doses for 1 y ranged from 3 to 40 mSv. The cumulative dose at Fukushima Niihama Park was estimated as the highest in the four cities. The estimated external dose rates and cumulative doses show that careful countermeasures and remediation will be needed as a result of the accident, and detailed measurements of radionuclide deposition densities in soil will be important input data to conduct these activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
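A rough sketch of how a cumulative external dose can be obtained from a time-decaying dose rate is shown below. The per-nuclide dose-rate contributions are invented for illustration (only the half-lives are physical constants), so the outputs are not the paper's estimates.

    import numpy as np

    # Illustrative per-nuclide contributions to the external dose rate at t = 0
    # (assumed numbers, not measured values), with half-lives in days.
    nuclides = {        # name: (initial dose-rate contribution in uSv/h, half-life in days)
        "I-131":  (20.0,     8.02),
        "Te-132": (10.0,     3.20),
        "Cs-134": ( 5.0,   753.0),
        "Cs-137": ( 4.0, 11000.0),
    }

    def dose_rate(t_days):
        """Total external dose rate (uSv/h) at time t after deposition."""
        return sum(r0 * 0.5 ** (t_days / t_half) for r0, t_half in nuclides.values())

    def cumulative_dose(t_days, steps=20000):
        """Time-integrated external dose (mSv) from 0 to t_days, by trapezoidal quadrature."""
        t = np.linspace(0.0, t_days, steps)
        rate = np.array([dose_rate(ti) for ti in t])        # uSv/h
        return np.trapz(rate, t) * 24.0 / 1000.0            # hours per day, uSv -> mSv

    print(f"dose rate at 3 months : {dose_rate(90):6.2f} uSv/h")
    print(f"cumulative dose, 1 y  : {cumulative_dose(365):6.1f} mSv")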
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2007-02-02
Excerpted symbol definitions: Pgha, probability of a person being in the glass hazard area; Phit, probability of hit; Phit(f), probability of hit for fatality; Phit(maji), probability of hit for major injury; Phit(mini), probability of hit for minor injury; Pi, debris probability densities at the ES. Using combined high-angle and combined low-angle tables, a unique probability of hit is calculated for the three consequences of fatality, major injury, and minor injury.
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
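A compact illustration of the two classes of estimators mentioned above, non-parametric KDE and parametric MLE with an Inverse Gamma model, applied to synthetic landslide areas; the sample and fitted parameters are assumptions, and the tool's actual R/WPS implementation is not reproduced here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Synthetic landslide areas in m^2 (assumed, heavy-tailed like real inventories)
    areas = stats.invgamma.rvs(a=1.4, scale=800.0, size=500, random_state=rng)

    # Non-parametric probability density: KDE on log-area, a common choice
    # because areas span several orders of magnitude
    log_kde = stats.gaussian_kde(np.log(areas))

    # Parametric estimate by maximum likelihood with an Inverse Gamma model
    a_hat, loc_hat, scale_hat = stats.invgamma.fit(areas, floc=0.0)

    x = np.logspace(np.log10(areas.min()), np.log10(areas.max()), 5)
    pdf_kde = log_kde(np.log(x)) / x          # transform log-space KDE back to area space
    pdf_mle = stats.invgamma.pdf(x, a_hat, loc=loc_hat, scale=scale_hat)
    for xi, pk, pm in zip(x, pdf_kde, pdf_mle):
        print(f"area = {xi:10.1f} m^2   KDE pdf = {pk:10.3e}   InvGamma pdf = {pm:10.3e}")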
2014-09-01
Statistical multivariate analysis is used to define the current and projected future range probability for species of interest to Army land managers. [Figure 4: RCW omission rate and predicted area as a function of the cumulative threshold.]
Shukla, J B; Goyal, Ashish; Singh, Shikha; Chandra, Peeyush
2014-06-01
In this paper, a non-linear model is proposed and analyzed to study the effects of habitat characteristics favoring logistically growing carrier population leading to increased spread of typhoid fever. It is assumed that the cumulative density of habitat characteristics and the density of carrier population are governed by logistic models; the growth rate of the former increases as the density of human population increases. The model is analyzed by stability theory of differential equations and computer simulation. The analysis shows that as the density of the infective carrier population increases due to habitat characteristics, the spread of typhoid fever increases in comparison with the case without such factors. Copyright © 2013 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.
Hydromorphone efficacy and treatment protocol impact on tolerance and mu-opioid receptor regulation.
Kumar, Priyank; Sunkaraneni, Soujanya; Sirohi, Sunil; Dighe, Shveta V; Walker, Ellen A; Yoburn, Byron C
2008-11-12
This study examined the antinociceptive (analgesic) efficacy of hydromorphone and hydromorphone-induced tolerance and regulation of mu-opioid receptor density. Initially s.c. hydromorphone's time of peak analgesic (tail-flick) effect (45 min) and ED50 using standard and cumulative dosing protocols (0.22 mg/kg, 0.37 mg/kg, respectively) were determined. The apparent analgesic efficacy (tau) of hydromorphone was then estimated using the operational model of agonism and the irreversible mu-opioid receptor antagonist clocinnamox. Mice were injected with clocinnamox (0.32-25.6 mg/kg, i.p.) and 24 h later, the analgesic potency of hydromorphone was determined. The tau value for hydromorphone was 35, which suggested that hydromorphone is a lower analgesic efficacy opioid agonist. To examine hydromorphone-induced tolerance, mice were continuously infused s.c. with hydromorphone (2.1-31.5 mg/kg/day) for 7 days and then morphine cumulative dose response studies were performed. Other groups of mice were injected with hydromorphone (2.2-22 mg/kg/day) once, or intermittently every 24 h for 7 days. Twenty-four hours after the last injection, mice were tested using morphine cumulative dosing studies. There was more tolerance with infusion treatments compared to intermittent treatment. When compared to higher analgesic efficacy opioids, hydromorphone infusions induced substantially more tolerance. Finally, the effect of chronic infusion (31.5 mg/kg/day) and 7 day intermittent (22 mg/kg/day) hydromorphone treatment on spinal cord mu-opioid receptor density was determined. Hydromorphone did not produce any change in mu-opioid receptor density following either treatment. These results support suggestions that analgesic efficacy is correlated with tolerance magnitude and regulation of mu-opioid receptors when opioid agonists are continuously administered. Taken together, these studies indicate that analgesic efficacy and treatment protocol are important in determining tolerance and regulation of mu-opioid receptors.
Ou, Huang-Tz; Lee, Tsung-Ying; Li, Chung-Yi; Wu, Jin-Shang; Sun, Zih-Jie
2017-06-21
To estimate the incidence densities and cumulative incidence of diabetes-related complications in patients with type 1 diabetes for a maximum of 15-year follow-up. The estimations were further stratified by gender and age at diagnosis (ie, early onset: 0-12 years; late onset: ≥13 years). A population-based retrospective longitudinal cohort study. Taiwan's National Health Insurance medical claims. 4007 patients newly diagnosed with type 1 diabetes were identified during 1999-2012. Acute complications included diabetic ketoacidosis (DKA) and hypoglycaemia. Chronic complications were cardiovascular diseases (CVD), retinopathy, neuropathy and nephropathy. The incidence density of retinopathy was greatest (97.74 per 1000 person-years), followed by those of nephropathy (31.36), neuropathy (23.93) and CVD (4.39). Among acute complications, the incidence density of DKA was greatest (121.11 per 1000 person-years). The cumulative incidences of acute complications after 12 years following diagnosis were estimated to be 52.1%, 36.1% and 4.1% for DKA, outpatient hypoglycaemia and hospitalised hypoglycaemia, respectively. For chronic complications, the cumulative incidence of retinopathy after 12 years following diagnosis was greatest (65.2%), followed by those of nephropathy (30.2%), neuropathy (23.7%) and CVD (4.1%). Females with late-onset diabetes were greatly affected by advanced retinopathy (ie, sight-threatening diabetic retinopathy) and hospitalised hypoglycaemia, whereas those with early-onset diabetes were more vulnerable to DKA. Chronic complications were more commonly seen in late-onset diabetes, whereas patients with early-onset diabetes were most affected by acute complications. Ethnic Chinese patients with type 1 diabetes were greatly affected by DKA and retinopathy. The incidence of diabetes-related complications differed by age at diagnosis and sex. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Integrating resource selection information with spatial capture--recapture
Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.
2013-01-01
4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.
ERIC Educational Resources Information Center
Gray, Shelley; Pittman, Andrea; Weinhold, Juliet
2014-01-01
Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…
ERIC Educational Resources Information Center
van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.
2016-01-01
The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…
Assessing the relationship between ad volume and awareness of a tobacco education media campaign
Modayil, Mary V; Stevens, Colleen
2010-01-01
Background The relation between aided ad recall and level of television ad placement in a public health setting is not well established. We examine this association by looking back at 8 years of the California Tobacco Control Program's (CTCP) media campaign. Methods Starting in July 2001, California's campaign was continuously monitored using five series of telephone surveys and six series of web-based surveys immediately following a media flight. We used population-based statewide surveys to measure aided recall for advertisements that were placed in each of these media flights. Targeted rating points (TRPs) were used to measure ad placement intensity throughout the state. Results Cumulative TRPs exhibited a stronger relation with aided ad recall than flight TRPs or TRP density. This association increased after log-transforming cumulative TRP values. We found that a one-unit increase in log-cumulative TRPs led to a 13.6% increase in aided ad recall using web-based survey data, compared to a 5.3% increase in aided ad recall using telephone survey data. Conclusions In California, the relation between aided ad recall and cumulative TRPs showed a diminishing return after a large volume of ad placements. These findings may be useful in planning future ad placement for CTCP's media campaign. PMID:20382649
Vacuum quantum stress tensor fluctuations: A diagonalization approach
NASA Astrophysics Data System (ADS)
Schiappacasse, Enrico D.; Fewster, Christopher J.; Ford, L. H.
2018-01-01
Large vacuum fluctuations of a quantum stress tensor can be described by the asymptotic behavior of its probability distribution. Here we focus on stress tensor operators which have been averaged with a sampling function in time. The Minkowski vacuum state is not an eigenstate of the time-averaged operator, but can be expanded in terms of its eigenstates. We calculate the probability distribution and the cumulative probability distribution for obtaining a given value in a measurement of the time-averaged operator taken in the vacuum state. In these calculations, we study a specific operator that contributes to the stress-energy tensor of a massless scalar field in Minkowski spacetime, namely, the normal ordered square of the time derivative of the field. We analyze the rate of decrease of the tail of the probability distribution for different temporal sampling functions, such as compactly supported functions and the Lorentzian function. We find that the tails decrease relatively slowly, as exponentials of fractional powers, in agreement with previous work using the moments of the distribution. Our results lend additional support to the conclusion that large vacuum stress tensor fluctuations are more probable than large thermal fluctuations, and may have observable effects.
Evaluation of an Ensemble Dispersion Calculation.
NASA Astrophysics Data System (ADS)
Draxler, Roland R.
2003-02-01
A Lagrangian transport and dispersion model was modified to generate multiple simulations from a single meteorological dataset. Each member of the simulation was computed by assuming a ±1-gridpoint shift in the horizontal direction and a ±250-m shift in the vertical direction of the particle position, with respect to the meteorological data. The configuration resulted in 27 ensemble members. Each member was assumed to have an equal probability. The model was tested by creating an ensemble of daily average air concentrations for 3 months at 75 measurement locations over the eastern half of the United States during the Across North America Tracer Experiment (ANATEX). Two generic graphical displays were developed to summarize the ensemble prediction and the resulting concentration probabilities for a specific event: a probability-exceed plot and a concentration-probability plot. Although a cumulative distribution of the ensemble probabilities compared favorably with the measurement data, the resulting distribution was not uniform. This result was attributed to release height sensitivity. The trajectory ensemble approach accounts for about 41%-47% of the variance in the measurement data. This residual uncertainty is caused by other model and data errors that are not included in the ensemble design.
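The ensemble construction lends itself to a very short sketch: enumerate the 27 offset combinations, run one simulation per member, and read exceedance probabilities as the fraction of equally weighted members above a threshold. The run_member function below is a toy stand-in for the Lagrangian model, not the model itself.

    import itertools
    import numpy as np

    # The 27 members: every combination of a -1/0/+1 grid-point shift in x and y
    # and a -250/0/+250 m shift in the vertical.
    offsets = list(itertools.product((-1, 0, 1), (-1, 0, 1), (-250.0, 0.0, 250.0)))
    assert len(offsets) == 27

    rng = np.random.default_rng(3)

    def run_member(dx, dy, dz):
        """Stand-in for one dispersion run; returns a daily-average concentration
        (assumed toy response to the shifts, purely for illustration)."""
        return np.exp(rng.normal(loc=-0.05 * (dx**2 + dy**2) - abs(dz) / 1000.0, scale=0.3))

    concentrations = np.array([run_member(*o) for o in offsets])

    # Each member is equally probable, so the probability of exceeding a threshold
    # is simply the fraction of members above it.
    for threshold in (0.5, 0.8, 1.0):
        p = np.mean(concentrations > threshold)
        print(f"P(concentration > {threshold:.1f}) = {p:.2f}")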
Pfleger, C C H; Flachs, E M; Koch-Henriksen, Nils
2010-07-01
There is a need for follow-up studies of the familial situation of multiple sclerosis (MS) patients. To evaluate the probability that MS patients remain in marriage or in a relationship with the same partner after onset of MS, in comparison with the general population. All 2538 Danes with onset of MS 1980-1989, retrieved from the Danish MS-Registry, and 50,760 matched and randomly drawn control persons were included. Information on family status was retrieved from Statistics Denmark. Cox analyses were used with onset as the starting point. Five years after onset, the cumulative probability of remaining in the same relationship was 86% in patients vs. 89% in controls. The probabilities continued to deviate, and at 24 years, the probability was 33% in patients vs. 53% in the control persons (p < 0.001). Among patients with young onset (< 36 years of age), those with no children had a higher risk of divorce than those with children younger than 7 years (Hazard Ratio 1.51; p < 0.0001), and men had a higher risk of divorce than women (Hazard Ratio 1.33; p < 0.01). MS significantly affects the probability of remaining in the same relationship compared with the background population.
Statistics of primordial density perturbations from discrete seed masses
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.; Bertschinger, Edmund
1991-01-01
The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.
The Effects of Framing, Reflection, Probability, and Payoff on Risk Preference in Choice Tasks.
Kühberger; Schulte-Mecklenbeck; Perner
1999-06-01
A meta-analysis of Asian-disease-like studies is presented to identify the factors which determine risk preference. First the confoundings between probability levels, payoffs, and framing conditions are clarified in a task analysis. Then the role of framing, reflection, probability, type, and size of payoff is evaluated in a meta-analysis. It is shown that bidirectional framing effects exist for gains and for losses. Presenting outcomes as gains tends to induce risk aversion, while presenting outcomes as losses tends to induce risk seeking. Risk preference is also shown to depend on the size of the payoffs, on the probability levels, and on the type of good at stake (money/property vs human lives). In general, higher payoffs lead to increasing risk aversion. Higher probabilities lead to increasing risk aversion for gains and to increasing risk seeking for losses. These findings are confirmed by a subsequent empirical test. Shortcomings of existing formal theories, such as prospect theory, cumulative prospect theory, venture theory, and Markowitz's utility theory, are identified. It is shown that it is not probabilities or payoffs, but the framing condition, which explains most variance. These findings are interpreted as showing that no linear combination of formally relevant predictors is sufficient to capture the essence of the framing phenomenon. Copyright 1999 Academic Press.
Estimating soil moisture exceedance probability from antecedent rainfall
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.
2016-12-01
The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
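A simplified sketch of the two ingredients described above: nearest-neighbour conditional resampling to fill rainfall gaps, followed by an empirical exceedance probability of a VWC threshold conditioned on antecedent rainfall. The gauge records, soil-moisture response and threshold are all synthetic assumptions, not the monitored data.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy hourly rainfall (mm) at a target gauge with gaps, plus a nearby gauge
    n = 2000
    neighbor = rng.gamma(shape=0.2, scale=2.0, size=n)
    target = np.clip(neighbor + rng.normal(0.0, 0.3, n), 0.0, None)
    missing = rng.random(n) < 0.1
    target_obs = np.where(missing, np.nan, target)

    def impute(target_obs, neighbor, k=20):
        """Nearest-neighbour conditional resampling: fill each gap by drawing from the
        k observed hours whose neighbour-gauge rainfall is most similar."""
        filled = target_obs.copy()
        obs_idx = np.where(~np.isnan(target_obs))[0]
        for i in np.where(np.isnan(target_obs))[0]:
            nearest = obs_idx[np.argsort(np.abs(neighbor[obs_idx] - neighbor[i]))[:k]]
            filled[i] = target_obs[rng.choice(nearest)]
        return filled

    rain = impute(target_obs, neighbor)
    antecedent = np.cumsum(rain)                 # cumulative rainfall since the start

    # Toy soil-moisture response and exceedance probability of a VWC threshold
    vwc = 0.15 + 0.25 * (1 - np.exp(-antecedent / 300.0)) + rng.normal(0, 0.02, n)
    threshold = 0.35
    for a in (100, 300, 600):
        sel = np.abs(antecedent - a) < 50
        print(f"antecedent ~ {a} mm: P(VWC > {threshold}) = {(vwc[sel] > threshold).mean():.2f}")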
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ2 distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
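The paper's finite-sum representation is not reproduced here; as a numerical companion, the sketch below checks the standard Poisson-mixture series for the non-central chi-squared pdf against scipy and empirically verifies log-concavity for two degrees of freedom.

    import numpy as np
    from scipy import stats
    from scipy.special import gammaln

    def ncx2_pdf_series(x, df, nc, terms=200):
        """Poisson-weighted mixture of central chi-squared densities (the standard
        series representation of the non-central chi-squared pdf), truncated at `terms`."""
        j = np.arange(terms)
        log_w = -nc / 2 + j * np.log(nc / 2) - gammaln(j + 1)     # log Poisson weights
        return np.sum(np.exp(log_w)[:, None] * stats.chi2.pdf(x, df + 2 * j[:, None]), axis=0)

    x = np.linspace(0.1, 30, 6)
    df, nc = 4, 3.0
    print(np.allclose(ncx2_pdf_series(x, df, nc), stats.ncx2.pdf(x, df, nc)))   # True

    # Numerical check of log-concavity for df = 2: second differences of log pdf <= 0
    xs = np.linspace(0.05, 40, 2000)
    log_pdf = stats.ncx2.logpdf(xs, 2, nc)
    print(np.all(np.diff(log_pdf, 2) <= 1e-9))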
Assessing hypotheses about nesting site occupancy dynamics
Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle
2011-01-01
Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful'' sites have a higher persistence probability than "unsuccessful'' ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.
NASA Technical Reports Server (NTRS)
Orren, L. H.; Ziman, G. M.; Jones, S. C.
1981-01-01
A financial accounting model that incorporates physical and institutional uncertainties was developed for geothermal projects. Among the uncertainties it can handle are well depth, flow rate, fluid temperature, and permit and construction times. The outputs of the model are cumulative probability distributions of financial measures such as capital cost, levelized cost, and profit. These outputs are well suited for use in an investment decision incorporating risk. The model has the powerful feature that conditional probability distributions can be used to account for correlations among any of the input variables. The model has been applied to a geothermal reservoir at Heber, California, for a 45-MW binary electric plant. Under the assumptions made, the reservoir appears to be economically viable.
Mercader, R J; Siegert, N W; McCullough, D G
2012-02-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
On Schrödinger's bridge problem
NASA Astrophysics Data System (ADS)
Friedland, S.
2017-11-01
In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
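One way to see the matrix-scaling statement concretely is an alternating diagonal-scaling iteration in the spirit of Sinkhorn's algorithm (a sketch, not the construction used in the paper): row-scale so the matrix maps p to q, then column-normalize, and repeat. Writing C = B diag(p) shows this is the classical Sinkhorn problem with prescribed row sums q and column sums p, which converges for strictly positive matrices.

    import numpy as np

    def scale_to_channel(A, p, q, iters=500):
        """Alternating diagonal scaling (Sinkhorn-style sketch): row-scale so that
        B p = q, then column-scale so that B is column stochastic."""
        B = A.astype(float).copy()
        for _ in range(iters):
            B *= (q / (B @ p))[:, None]           # enforce B p = q
            B /= B.sum(axis=0, keepdims=True)     # enforce column sums = 1
        return B

    rng = np.random.default_rng(5)
    A = rng.uniform(0.1, 1.0, (4, 4))             # positive square matrix
    p = rng.dirichlet(np.ones(4))                 # positive probability vectors
    q = rng.dirichlet(np.ones(4))

    B = scale_to_channel(A, p, q)
    print("column sums:", B.sum(axis=0))          # ~ all ones
    print("B p - q    :", B @ p - q)              # ~ all zeros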
H-index of Collective Health professors in Brazil.
Pereira, Julio Cesar Rodrigues; Bronhara, Bruna
2011-06-01
To estimate reference values and the hierarchy function of professors engaged in Collective Health in Brazil by analyzing the distribution of the h-index. From the Portal da Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Portal of Coordination for the Improvement of Higher Education Personnel ), 934 authors were identified in 2008, of whom 819 were analyzed. The h-index of each professor was obtained through the Web of Science using search algorithms controlling for namesakes and alternative spellings of their names. For each Brazilian region and for the country as a whole, we adjusted an exponential probability density function to provide the population parameters and rate of decline by region. Ranking measures were identified using the complement of the cumulative probability function and the hierarchy function among authors according to the h-index by region. Among the professors analyzed, 29.8% had no citation record in Web of Science (h=0). The mean h for the country was 3.1, and the region with greatest mean was the southern region (h=4.7). The median h for the country was 3.1, and the greatest median was for the southern region (3.2). Standardizing populations to one hundred, the first rank in the country was h=16, but stratification by region shows that, within the northeastern, southeastern and southern regions, a greater value is necessary for achieving the first rank. In the southern region, the index needed to achieve the first rank was h=24. Most of the Brazilian Collective Health authors, if assessed on the basis of the Web of Science h-index, did not exceed h=5. Regional differences exist, with the southeastern and northeastern regions being similar and the southern region being outstanding.
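As an illustration of the ranking idea above (fit an exponential density to the h-index sample and rank authors by the complement of the cumulative probability), the sketch below uses a synthetic sample; the scale parameter and sample size are assumptions loosely inspired by the figures quoted in the abstract.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)

    # Assumed h-index sample for one population (mean near 3, 819 authors)
    h = rng.exponential(scale=3.1, size=819).round()

    # Fit an exponential probability density and use the survival function
    # (complement of the CDF) to place an author within the population.
    loc, scale = stats.expon.fit(h, floc=0)
    for h_author in (0, 3, 5, 16, 24):
        top_share = stats.expon.sf(h_author, loc=loc, scale=scale)
        print(f"h = {h_author:2d}  ->  top {100 * top_share:5.1f}% of the population")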
Density probability distribution functions of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2008-10-01
In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of an important geotechnical parameter, the compression modulus Es, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method integrates rigorously and efficiently the differing precisions of the geotechnical investigations and their sources of uncertainty. Single CPT soundings were modeled as rational probability density curves using maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from the borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between the Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences between single CPT samplings modeled with a normal distribution and the simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the estimation of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.
Shen, Fuhai; Yuan, Juxiang; Sun, Zhiqian; Hua, Zhengbing; Qin, Tianbang; Yao, Sanqiao; Fan, Xueyun; Chen, Weihong; Liu, Hongbo; Chen, Jie
2013-01-01
Prior to 1970, coal mining technology and prevention measures in China were poor. Mechanized coal mining equipment and advanced protection measures were continuously installed in the mines after 1970. All these improvements may have resulted in a change in the incidence of coal workers' pneumoconiosis (CWP). Therefore, it is important to identify the characteristics of CWP today and trends for the incidence of CWP in the future. A total of 17,023 coal workers from the Kailuan Colliery Group were studied. A life-table method was used to calculate the cumulative incidence rate of CWP and predict the number of new CWP patients in the future. The probability of developing CWP was estimated by a multilayer perceptron artificial neural network for each coal worker without CWP. The results showed that the cumulative incidence rates of CWP for tunneling, mining, combining, and helping workers were 31.8%, 27.5%, 24.2%, and 2.6%, respectively, during the same observation period of 40 years. It was estimated that there would be 844 new CWP cases among 16,185 coal workers without CWP within their life expectancy. There would be 273.1, 273.1, 227.6, and 69.9 new CWP patients in the next <10, 10-, 20-, and 30-year periods, respectively, in the study cohort within their life expectancy. It was identified that coal workers whose risk probabilities were over 0.2 were at high risk for CWP, and those whose risk probabilities were under 0.1 were at low risk. The present and future incidence trends of CWP remain high among coal workers. We suggest that coal workers at high risk of CWP undergo a physical examination for pneumoconiosis every year, and the coal workers at low risk of CWP be examined every 5 years.
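The life-table calculation behind a cumulative incidence estimate can be sketched in a few lines; the interval widths, numbers at risk and case counts below are invented for illustration and are not the study data.

    import numpy as np

    # Assumed life-table input for one job category: number at risk and new CWP cases
    # in successive 5-year exposure intervals (illustrative numbers only).
    at_risk   = np.array([4000, 3600, 3100, 2500, 1800, 1000])
    new_cases = np.array([  40,   90,  140,  160,  120,   60])

    hazard = new_cases / at_risk                  # interval-specific incidence rate
    surv = np.cumprod(1.0 - hazard)               # probability of staying free of CWP
    cum_incidence = 1.0 - surv                    # life-table cumulative incidence

    for i, (h, ci) in enumerate(zip(hazard, cum_incidence), start=1):
        print(f"interval {i}: hazard = {h:.3f}, cumulative incidence = {100 * ci:5.1f}%")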
Lodi, Sara; Phillips, Andrew; Fidler, Sarah; Hawkins, David; Gilson, Richard; McLean, Ken; Fisher, Martin; Post, Frank; Johnson, Anne M.; Walker-Nthenda, Louise; Dunn, David; Porter, Kholoud
2013-01-01
Background The development of HIV drug resistance and subsequent virological failure are often cited as potential disadvantages of early cART initiation. However, their long-term probability is not known, and neither is the role of duration of infection at the time of initiation. Methods Patients enrolled in the UK Register of HIV seroconverters were followed up from cART initiation to last HIV-RNA measurement. Through survival analysis we examined predictors of virologic failure (two HIV-RNA measurements ≥400 copies/ml while on cART) including CD4 count and HIV duration at initiation. We also estimated the cumulative probabilities of failure and drug resistance (from the available HIV nucleotide sequences) for early initiators (cART within 12 months of seroconversion). Results Of 1075 patients starting cART at a median (IQR) CD4 count of 272 (190, 370) cells/mm3 and HIV duration of 3 (1, 6) years, virological failure occurred in 163 (15%). Higher CD4 count at initiation, but not HIV infection duration at cART initiation, was independently associated with lower risk of failure (p=0.033 and 0.592 respectively). Among 230 patients initiating cART early, 97 (42%) discontinued it after a median of 7 months; cumulative probabilities of resistance and failure by 8 years were 7% (95% CI 4,11) and 19% (13,25), respectively. Conclusion Although the rate of discontinuation of early cART in our cohort was high, the long-term rate of virological failure was low. Our data do not support early cART initiation being associated with increased risk of failure and drug resistance. PMID:24086588
A zonation technique for landslide susceptibility in southern Taiwan
NASA Astrophysics Data System (ADS)
Chiang, Jie-Lun; Tian, Yu-Qing; Chen, Yie-Ruey; Tsai, Kuang-Jung
2016-04-01
In recent years, the global climate has changed violently; extreme rainfall events occur frequently and cause massive sediment-related disasters in Taiwan. These disasters seriously hit regional economic development and national infrastructure. For example, in August 2009, typhoon Morakot brought massive rainfall, especially in the mountains of Chiayi County and Kaohsiung County, where the cumulative maximum rainfall reached 2900 mm; meanwhile, the cumulative maximum rainfall was over 1500 mm in Nantou County, Tainan County and Pingtung County. The typhoon caused severe damage in southern Taiwan. This study examines the influence of extreme rainfall and hydrological environmental changes on sediment hazards, focusing on southern Taiwan (including Chiayi, Tainan, Kaohsiung and Pingtung). The instability index and kriging theories are applied to analyze landslide factors and determine susceptibility in southern Taiwan. We collected landslide records for the period 2007-2013 and analyzed instability factors including elevation, slope, aspect, soil, and geology. Among these factors, slope received the highest weight: the steeper the slope, the more landslides occur. As for aspect, the highest probability of collapse falls on the southwest, although this factor has the lowest weight among all the factors. Likewise, darkish colluvial soil holds the highest probability of collapse among all the soils, and the Miocene middle Ruifang group and its equivalents have the highest probability of collapse among all the geologies. In this study, kriging was used to establish the susceptibility map for southern Taiwan. Instability index values above 4.21 correspond well with the landslide records. The potential landslide areas in southern Taiwan, where collapses are most likely to occur, fall in the high and medium-high classes, covering 5.12% and 17.81% of the area, respectively.
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
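The two-stage construction, Gaussian-kernel density maps per data set followed by a weighted linear combination, can be sketched as below; the vent coordinates and weights are invented placeholders, not the reconstructed data sets or the elicited expert weights:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(1)
    vents_a = rng.normal([0.0, 0.0], 0.8, size=(40, 2))   # hypothetical past vent locations (km)
    vents_b = rng.normal([1.5, 0.5], 1.2, size=(25, 2))   # hypothetical eruptive-fissure locations (km)
    weights = [0.6, 0.4]                                   # hypothetical weights for the two data sets

    kdes = [gaussian_kde(v.T) for v in (vents_a, vents_b)]

    def vent_opening_density(xy):
        """Weighted linear combination of Gaussian-kernel spatial density maps."""
        xy = np.atleast_2d(xy).T
        return sum(w * k(xy) for w, k in zip(weights, kdes))

    # relative probability density of a new vent at two locations
    print(vent_opening_density([0.0, 0.0]), vent_opening_density([-3.0, 0.0]))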
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sirunyan, Albert M; et al.
Event-by-event fluctuations in the elliptic-flow coefficient $v_2$ are studied in PbPb collisions at $\sqrt{s_{NN}} = 5.02$ TeV using the CMS detector at the CERN LHC. Elliptic-flow probability distributions $p(v_2)$ for charged particles with transverse momentum $0.3 < p_T < 3.0$ GeV and pseudorapidity $|\eta| < 1.0$ are determined for different collision centrality classes. The moments of the $p(v_2)$ distributions are used to calculate the $v_2$ coefficients based on cumulant orders 2, 4, 6, and 8. A rank ordering of the higher-order cumulant results and nonzero standardized skewness values obtained for the $p(v_2)$ distributions indicate non-Gaussian initial-state fluctuation behavior. Bessel-Gaussian and elliptic power fits to the flow distributions are studied to characterize the initial-state spatial anisotropy.
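For reference, a sketch of the standard moment-to-cumulant relations for $v_2\{2\}$, $v_2\{4\}$ and $v_2\{6\}$ computed directly from samples of $p(v_2)$; the Bessel-Gaussian-like toy sample below is purely illustrative, not CMS data:

    import numpy as np

    def flow_cumulants(v2_samples):
        """v2{2}, v2{4}, v2{6} from moments of event-by-event v2 values
        (standard cumulant relations; a sketch, not the CMS analysis code)."""
        m2 = np.mean(v2_samples**2)
        m4 = np.mean(v2_samples**4)
        m6 = np.mean(v2_samples**6)
        v2_2 = np.sqrt(m2)
        v2_4 = (2.0 * m2**2 - m4) ** 0.25
        c6 = m6 - 9.0 * m4 * m2 + 12.0 * m2**3
        v2_6 = (c6 / 4.0) ** (1.0 / 6.0)
        return v2_2, v2_4, v2_6

    # toy sample drawn from a Bessel-Gaussian-like p(v2)
    rng = np.random.default_rng(2)
    vx = rng.normal(0.05, 0.02, 100000)
    vy = rng.normal(0.00, 0.02, 100000)
    print(flow_cumulants(np.hypot(vx, vy)))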
Interconnect fatigue design for terrestrial photovoltaic modules
NASA Technical Reports Server (NTRS)
Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.
1982-01-01
The results of a comprehensive investigation of interconnect fatigue, which has led to the definition of useful reliability-design and life-prediction algorithms, are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To remedy this shortcoming, the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous, i.e., quantitative, interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
Interconnect fatigue design for terrestrial photovoltaic modules
NASA Astrophysics Data System (ADS)
Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.
1982-03-01
The results of a comprehensive investigation of interconnect fatigue, which has led to the definition of useful reliability-design and life-prediction algorithms, are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To remedy this shortcoming, the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous, i.e., quantitative, interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
Turtle Bayou - 1936 to 1983: case history of a major gas field in south Louisiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cronquist, C.
1983-01-01
Turtle Bayou field, located in the middle Miocene trend in S. Louisiana, is nearing the end of a productive life which spans over 30 yr. Discovered by Shell Oil Co. in 1949 after unsuccessful attempts by 2 other majors, the field is a typical, low relief, moderately faulted Gulf Coast structure, probably associated with deep salt movement. The productive interval includes 22 separate gas-bearing sands in a regressive sequence of sands and shales from approx. 6500 to 12,000 ft. Now estimated to have contained ca 1.2 trillion scf of gas in place, cumulative production through 1982 was 702 billion scf. Cumulative condensate-gas ratio has been 20 bbl/million. Recovery mechanisms in individual reservoirs include strong bottom water drive, partial edgewater drive, and pressure depletion. Recovery efficiencies in major reservoirs range from 40 to 75% of original gas in place.
Quantum Dynamics Study of the Isotopic Effect on Capture Reactions: HD, D2 + CH3
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
Time-dependent wave-packet-propagation calculations are reported for the isotopic reactions HD + CH3 and D2 + CH3 in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different initial rotational-vibrational states are presented. The study shows that excitation of HD (D2) enhances the reactivity, whereas excitation of the CH3 umbrella mode has the opposite effect; this is consistent with the H2 + CH3 reaction. Comparison of these three isotopic reactions also shows isotopic effects in the initial-state-selected reaction probabilities. The cumulative reaction probabilities (CRPs) are obtained by summing over the initial-state-selected reaction probabilities. The energy-shift approximation, which accounts for the contribution of the degrees of freedom missing in the six-dimensional calculation, is employed to obtain approximate full-dimensional CRPs. The rate-constant comparison shows that the H2 + CH3 reaction has the largest reactivity, followed by HD + CH3, with D2 + CH3 the smallest.
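As a hedged illustration of the last step, a thermal rate constant can be obtained from a cumulative reaction probability N(E) by Boltzmann averaging, k(T) = (1 / h Q_r) ∫ N(E) exp(-E/k_B T) dE; the N(E), partition function and threshold below are placeholders, not the paper's results:

    import numpy as np

    kB = 3.1668e-6      # Boltzmann constant in hartree/K
    h = 2.0 * np.pi     # Planck constant in atomic units (hbar = 1)

    def rate_from_crp(E, N, T, Qr):
        """k(T) = (1 / (h * Qr)) * Integral N(E) exp(-E / (kB*T)) dE,
        trapezoidal rule; E in hartree, Qr the reactant partition function."""
        f = N * np.exp(-E / (kB * T))
        integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))
        return integral / (h * Qr)

    E = np.linspace(0.0, 0.05, 500)
    N = np.clip((E - 0.01) / 0.02, 0.0, None) ** 1.5   # hypothetical CRP rising above a 0.01 hartree threshold
    print(rate_from_crp(E, N, T=300.0, Qr=1.0e3))      # rate in atomic units of inverse time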
Cumulative effects of electrode and dielectric surface modifications on pentacene-based transistors
NASA Astrophysics Data System (ADS)
Devynck, Mélanie; Tardy, Pascal; Wantz, Guillaume; Nicolas, Yohann; Vellutini, Luc; Labrugère, Christine; Hirsch, Lionel
2012-01-01
Surface modifications of the dielectric and the metal of pentacene-based field-effect transistors using self-assembled monolayers (SAMs) were studied. First, a low interfacial trap density and two-dimensional pentacene growth were favored by the nonpolar, low-surface-energy octadecyltrichlorosilane-based SAM. This treatment led to a mobility increased up to 0.4 cm2 V-1 s-1 and no observable hysteresis in the transfer curves. Second, a reduced hole-injection barrier and contact resistance were achieved by fluorinated thiols deposited on the gold contacts, resulting in a mobility increased up to 0.6 cm2 V-1 s-1. Finally, a high mobility of 2.6 cm2 V-1 s-1 was achieved through the cumulative effects of both treatments.
Blume-Peytavi, Ulrike; Kunte, Christian; Krisp, Andreas; Garcia Bartels, Natalie; Ellwanger, Ulf; Hoffmann, Rolf
2007-05-01
Two drugs which are approved for the treatment of androgenetic alopecia in women in Germany were compared with regard to their influence on hair growth. Patients were randomized to group I (n = 52), who used 2% minoxidil solution twice daily for a period of 12 months, or to group II (n = 51), who used 0.025% alfatradiol solution once daily for 6 months and were then switched to 2% minoxidil solution for months 7-12. Changes in hair growth parameters were determined using the TrichoScan. Topical treatment with 2% minoxidil solution for 6 months resulted in a significant increase of cumulative hair thickness (p < 0.0001) and absolute hair density (p ≤ 0.0025), whereas these parameters of hair growth remained nearly unchanged after 6 months of treatment with alfatradiol solution. Evaluation of the same parameters from month 7 to month 12 demonstrated that 12 months of minoxidil treatment resulted in increasing stabilization (group I). After the switch from alfatradiol to minoxidil in group II, a significant increase in cumulative hair thickness (p < 0.0001) and absolute hair density (p < 0.0001) was achieved. Both study medications were well tolerated. Treatment with minoxidil can induce an increase in hair density and hair thickness, whereas treatment with alfatradiol results in deceleration or stabilization of hair loss.
Fractional Brownian motion with a reflecting wall.
Wada, Alexander H O; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
NASA Astrophysics Data System (ADS)
Pendergrass, W.; Vogel, C. A.
2013-12-01
As an outcome of discussions between Duke Energy Generation and NOAA/ARL following the 2009 AMS Summer Community Meeting in Norman, Oklahoma, ARL and Duke Energy Generation (Duke) signed a Cooperative Research and Development Agreement (CRADA) which allows NOAA to conduct atmospheric boundary layer (ABL) research using Duke renewable energy sites as research testbeds. One aspect of this research has been the evaluation of forecast hub-height winds from three NOAA atmospheric models. Forecasts of 10 m (surface) and 80 m (hub-height) wind speeds from (1) NOAA/GSD's High Resolution Rapid Refresh (HRRR) model, (2) NOAA/NCEP's 12 km North America Model (NAM12) and (3) NOAA/NCEP's 4 km high-resolution North America Model (NAM4) were evaluated against 18 months of surface-layer wind observations collected at the joint NOAA/Duke Energy research station located at Duke Energy's West Texas Ocotillo wind farm over the period April 2011 through October 2012. HRRR, NAM12 and NAM4 10 m wind speed forecasts were compared with 10 m wind speed observations measured on the NOAA/ATDD flux tower. Hub-height (80 m) HRRR, NAM12 and NAM4 forecast wind speeds were evaluated against the 80 m operational PMM27-28 meteorological tower supporting the Ocotillo wind farm. For each HRRR update, eight forecast hours (hours 01, 02, 03, 05, 07, 10, 12 and 15) plus the initialization hour (hour 00) were evaluated. For the NAM12 and NAM4 models, forecast hours 00-24 from the 06Z initialization were evaluated. Skill scores, defined as the absolute error at the 50% cumulative probability level, were calculated for each forecast hour. HRRR forecast hour 01 provided the best skill score, with an absolute wind speed error within 0.8 m/s of the observed 10 m wind speed and 1.25 m/s for hub-height wind speed at the designated 50% cumulative probability. For both the NAM4 and NAM12 models, skill scores were diurnal, with comparable best daytime scores of 0.7 m/s for the observed 10 m wind speed and 1.1 m/s for hub-height wind speed at the designated 50% cumulative probability level.
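The absolute error at the 50% cumulative probability level is simply the median absolute error of a forecast-observation pair set; a toy calculation (synthetic wind speeds, not the Ocotillo data):

    import numpy as np

    def abs_error_at_50pct(forecast, observed):
        """Absolute error at the 50% cumulative probability level,
        i.e. the median absolute forecast error."""
        return np.median(np.abs(np.asarray(forecast) - np.asarray(observed)))

    rng = np.random.default_rng(3)
    obs = rng.weibull(2.0, 1000) * 8.0                  # hypothetical 80 m wind speeds (m/s)
    fc = obs + rng.normal(0.0, 1.2, obs.size)           # hypothetical forecasts for one forecast hour
    print(round(abs_error_at_50pct(fc, obs), 2))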
The long-term changes in total ozone, as derived from Dobson measurements at Arosa (1948-2001)
NASA Astrophysics Data System (ADS)
Krzyscin, J. W.
2003-04-01
The longest available total ozone time series (Arosa, Switzerland) is examined for the detection of trends. A two-step procedure is proposed to estimate the long-term (decadal) variations in the ozone time series. The first step consists of a standard least-squares multiple regression applied to the total ozone monthly means to parameterize "natural" (related to the oscillations in the atmospheric dynamics) variations in the analyzed time series. The standard proxies for the dynamical ozone variations are used, including the 11-year solar activity cycle and indices of the QBO, ENSO and NAO. We use the detrended time series of temperature at 100 hPa and 500 hPa over Arosa to parameterize short-term variations (with time periods < 1 year) in total ozone related to local changes in the meteorological conditions over the station. The second step consists of fitting a smooth curve to the total ozone residuals (original minus modeled "natural" time series), differentiating this curve in time to obtain local trends, and bootstrapping the residual time series to estimate the standard error of the local trends. Locally weighted regression and wavelet analysis are used to extract the smooth component of the residual time series. The time integral over the local trend values provides the cumulative long-term change since the beginning of the data. Examining the pattern of the cumulative change, we see periods of total ozone loss (the late 1950s to the early 1960s, probably the effect of the nuclear bomb tests), recovery (the mid 1960s to the beginning of the 1970s), apparent decrease (the beginning of the 1970s lasting to the mid 1990s, probably the effect of contamination of the atmosphere by anthropogenic chlorine-containing substances), and a kind of stabilization or recovery (starting in the mid 1990s, probably the effect of the Montreal Protocol to eliminate substances depleting the ozone layer). We can also estimate that a full ozone recovery (return to the undisturbed total ozone level of the beginning of the 1970s) is expected around 2050. We propose to calculate both the time series of local trends and the cumulative long-term change instead of a single trend value derived as the slope of a straight-line fit to the data.
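A minimal sketch of the two-step procedure on synthetic data, using a moving-average smoother as a simple stand-in for the locally weighted regression and wavelet smoothing described above; proxies, coefficients and noise levels are all invented:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 600                                      # hypothetical monthly total-ozone series (Dobson units)
    t = np.arange(n)
    proxies = np.column_stack([np.ones(n),
                               np.sin(2*np.pi*t/132.0), np.cos(2*np.pi*t/132.0),   # ~11-yr solar cycle
                               np.sin(2*np.pi*t/28.0),  np.cos(2*np.pi*t/28.0)])   # ~QBO period
    ozone = proxies @ np.array([330.0, 4.0, 2.0, 3.0, 1.0]) - 0.02*t + rng.normal(0.0, 5.0, n)

    # step 1: least-squares regression on the dynamical proxies
    beta, *_ = np.linalg.lstsq(proxies, ozone, rcond=None)
    resid = ozone - proxies @ beta

    # step 2: smooth the residuals, differentiate in time for local trends,
    # and bootstrap the residuals around the smooth curve for the trend standard error
    def smooth(y, win=61):
        return np.convolve(y, np.ones(win) / win, mode="same")

    curve = smooth(resid)
    local_trend = np.gradient(curve)                          # DU per month
    noise = resid - curve
    boot = [np.gradient(smooth(curve + rng.choice(noise, n, replace=True))) for _ in range(200)]
    trend_se = np.std(boot, axis=0)
    cumulative_change = np.cumsum(local_trend)                # long-term change since the data beginning
    print(local_trend[n // 2], trend_se[n // 2], cumulative_change[-1])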
Bressler, Susan B; Qin, Haijing; Melia, Michele; Bressler, Neil M; Beck, Roy W; Chan, Clement K; Grover, Sandeep; Miller, David G
2013-08-01
The standard care for proliferative diabetic retinopathy (PDR) usually is panretinal photocoagulation, an inherently destructive treatment that can cause iatrogenic vision loss. Therefore, evaluating the effects of therapies for diabetic macular edema on development or worsening of PDR might lead to new therapies for PDR. To evaluate the effects of intravitreal ranibizumab or triamcinolone acetonide, administered to treat diabetic macular edema, on worsening of diabetic retinopathy. Exploratory analysis was performed on worsening of retinopathy, defined as 1 or more of the following: (1) worsening from no PDR to PDR, (2) worsening of 2 or more severity levels on reading center assessment of fundus photographs in eyes without PDR at baseline, (3) having panretinal photocoagulation, (4) experiencing vitreous hemorrhage, or (5) undergoing vitrectomy for the treatment of PDR. Community- and university-based ophthalmology practices. Individuals with central-involved diabetic macular edema causing visual acuity impairment. Eyes were assigned randomly to sham with prompt focal/grid laser, 0.5 mg of intravitreal ranibizumab with prompt or deferred (≥24 weeks) laser, or 4 mg of intravitreal triamcinolone acetonide with prompt laser. Three-year cumulative probabilities for retinopathy worsening. For eyes without PDR at baseline, the 3-year cumulative probabilities for retinopathy worsening (P value comparison with sham with prompt laser) were 23% using sham with prompt laser, 18% with ranibizumab with prompt laser (P = .25), 7% with ranibizumab with deferred laser (P = .001), and 37% with triamcinolone with prompt laser (P = .10). For eyes with PDR at baseline, the 3-year cumulative probabilities for retinopathy worsening were 40%, 21% (P = .05), 18% (P = .02), and 12% (P < .001), respectively. CONCLUSIONS AND RELEVANCE Intravitreal ranibizumab appears to be associated with a reduced risk of diabetic retinopathy worsening in eyes with or without PDR. Intravitreal triamcinolone also appears to be associated with a reduced risk of PDR worsening. These findings suggest that use of these drugs to prevent worsening of diabetic retinopathy may be feasible. Given the exploratory nature of these analyses, the risk of endophthalmitis following intravitreal injections, and the fact that intravitreal triamcinolone can cause cataract or glaucoma, use of these treatments to reduce the rates of worsening of retinopathy, with or without PDR, does not seem warranted at this time.
USDA-ARS?s Scientific Manuscript database
High density orchard systems have become the standard for new plantings in many apple production regions due to their earlier yield and higher cumulative yields which results in greater return on investments. Growers in the Mid-Atlantic region have unique challenges compared to northern production r...
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.
2011-01-01
Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne
2011-01-01
Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.
Randomized path optimization for the mitigated counter detection of UAVs
2017-06-01
A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location.
Korman, Josh; Yard, Mike
2017-01-01
Article for outlet: Fisheries Research. Abstract: Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance, led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.
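As a hedged illustration of the basic adjustment discussed above, abundance can be estimated by dividing catch by a modeled capture probability; the logistic coefficients and covariates below are hypothetical placeholders, not the fitted Colorado River model:

    import numpy as np

    def abundance_from_catch(catch, covariates, beta):
        """N_hat = C / p_hat, with p_hat from a logistic model of covariates
        (e.g. log fish density and turbidity; values below are hypothetical)."""
        X = np.column_stack([np.ones(len(catch)), covariates])
        p_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))
        return catch / p_hat, p_hat

    beta = np.array([-1.0, -0.30, -0.02])       # hypothetical intercept, log-density and turbidity effects
    catch = np.array([120, 85, 240])            # hypothetical electrofishing catches for three trips
    X = np.array([[np.log(50.0), 5.0],
                  [np.log(20.0), 30.0],
                  [np.log(150.0), 2.0]])        # hypothetical (log density, turbidity) per trip
    N_hat, p_hat = abundance_from_catch(catch, X, beta)
    print(np.round(p_hat, 2), np.round(N_hat, 0))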
Wavefronts, actions and caustics determined by the probability density of an Airy beam
NASA Astrophysics Data System (ADS)
Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón
2018-07-01
The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of: caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maxima of the probability density of an Airy beam determines a Hamiltonian system.
Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.
Guo, Lian; Radisic, Aleksandar; Searson, Peter C
2005-12-22
Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
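For illustration, a toy one-dimensional Monte Carlo sketch of the central idea, with separate metal-on-substrate and metal-on-metal sticking probabilities controlling nucleation versus growth; the lattice, probabilities and bookkeeping are placeholders, far simpler than the 3-D KMC with Brownian-dynamics ion transport used in the study:

    import numpy as np

    rng = np.random.default_rng(5)
    L = 200                         # toy 1-D substrate of L sites
    occupied = np.zeros(L, bool)
    p_sub, p_met = 1e-3, 0.5        # hypothetical metal-on-substrate / metal-on-metal sticking probabilities

    island_density, deposition_events, events = [], [], 0
    for step in range(200000):
        s = rng.integers(L)                                   # an ion arrives at a random site
        near_metal = occupied[s] or occupied[(s - 1) % L] or occupied[(s + 1) % L]
        p = p_met if near_metal else p_sub
        if rng.random() < p:
            occupied[s] = True                                # deposition (growth if near metal, nucleation otherwise)
            events += 1                                       # each event contributes to the "current"
        if step % 20000 == 19999:
            n_islands = int(np.sum(occupied & ~np.roll(occupied, 1)))   # islands = runs of occupied sites
            island_density.append(n_islands / L)
            deposition_events.append(events)
            events = 0
    print(island_density)
    print(deposition_events)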
NASA Astrophysics Data System (ADS)
Giblin, Jay P.; Dixon, John; Dupuis, Julia R.; Cosofret, Bogdan R.; Marinelli, William J.
2017-05-01
Sensor technologies capable of detecting low vapor pressure liquid surface contaminants, as well as solids, in a noncontact fashion while on-the-move continues to be an important need for the U.S. Army. In this paper, we discuss the development of a long-wave infrared (LWIR, 8-10.5 μm) spatial heterodyne spectrometer coupled with an LWIR illuminator and an automated detection algorithm for detection of surface contaminants from a moving vehicle. The system is designed to detect surface contaminants by repetitively collecting LWIR reflectance spectra of the ground. Detection and identification of surface contaminants is based on spectral correlation of the measured LWIR ground reflectance spectra with high fidelity library spectra and the system's cumulative binary detection response from the sampled ground. We present the concepts of the detection algorithm through a discussion of the system signal model. In addition, we present reflectance spectra of surfaces contaminated with a liquid CWA simulant, triethyl phosphate (TEP), and a solid simulant, acetaminophen acquired while the sensor was stationary and on-the-move. Surfaces included CARC painted steel, asphalt, concrete, and sand. The data collected was analyzed to determine the probability of detecting 800 μm diameter contaminant particles at a 0.5 g/m2 areal density with the SHSCAD traversing a surface.
Statistics of the geomagnetic secular variation for the past 5Ma
NASA Technical Reports Server (NTRS)
Constable, C. G.; Parker, R. L.
1986-01-01
A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
Isostatic Gravity Map with Geology of the Santa Ana 30' x 60' Quadrangle, Southern California
Langenheim, V.E.; Lee, Tien-Chang; Biehler, Shawn; Jachens, R.C.; Morton, D.M.
2006-01-01
This report presents an updated isostatic gravity map, with an accompanying discussion of the geologic significance of gravity anomalies in the Santa Ana 30 by 60 minute quadrangle, southern California. Comparison and analysis of the gravity field with mapped geology indicates the configuration of structures bounding the Los Angeles Basin, geometry of basins developed within the Elsinore and San Jacinto Fault zones, and a probable Pliocene drainage network carved into the bedrock of the Perris block. Total cumulative horizontal displacement on the Elsinore Fault derived from analysis of the length of strike-slip basins within the fault zone is about 5-12 km and is consistent with previously published estimates derived from other sources of information. This report also presents a map of density variations within pre-Cenozoic metamorphic and igneous basement rocks. Analysis of basement gravity patterns across the Elsinore Fault zone suggests 6-10 km of right-lateral displacement. A high-amplitude basement gravity high is present over the San Joaquin Hills and is most likely caused by Peninsular Ranges gabbro and/or Tertiary mafic intrusion. A major basement gravity gradient coincides with the San Jacinto Fault zone and marked magnetic, seismic-velocity, and isotopic gradients that reflect a discontinuity within the Peninsular Ranges batholith in the northeast corner of the quadrangle.
Diversity Performance Analysis on Multiple HAP Networks
Dong, Feihong; Li, Min; Gong, Xiangwu; Li, Hongjun; Gao, Fengyue
2015-01-01
One of the main design challenges in wireless sensor networks (WSNs) is achieving a high-data-rate transmission for individual sensor devices. The high altitude platform (HAP) is an important communication relay platform for WSNs and next-generation wireless networks. Multiple-input multiple-output (MIMO) techniques provide the diversity and multiplexing gain, which can improve the network performance effectively. In this paper, a virtual MIMO (V-MIMO) model is proposed by networking multiple HAPs with the concept of multiple assets in view (MAV). In a shadowed Rician fading channel, the diversity performance is investigated. The probability density function (PDF) and cumulative distribution function (CDF) of the received signal-to-noise ratio (SNR) are derived. In addition, the average symbol error rate (ASER) with BPSK and QPSK is given for the V-MIMO model. The system capacity is studied for both perfect channel state information (CSI) and unknown CSI individually. The ergodic capacity with various SNR and Rician factors for different network configurations is also analyzed. The simulation results validate the effectiveness of the performance analysis. It is shown that the performance of the HAPs network in WSNs can be significantly improved by utilizing the MAV to achieve overlapping coverage, with the help of the V-MIMO techniques. PMID:26134102
Design of ceramic components with the NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.
1990-01-01
The ceramics analysis and reliability evaluation of structures (CARES) computer program is described. The primary function of the code is to calculate the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. CARES uses results from MSC/NASTRAN or ANSYS finite-element analysis programs to evaluate how inherent surface and/or volume type flaws affect component reliability. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using a least-squares analysis or a maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, 90 percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan 90 percent confidence band values are also provided. Examples are provided to illustrate the various features of CARES.
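A minimal sketch of the kind of two-parameter Weibull / PIA failure-probability calculation performed on finite-element output; the element stresses, volumes and Weibull parameters below are hypothetical, and this is not the CARES code itself:

    import numpy as np

    def pia_failure_probability(principal_stresses, volumes, m, sigma0):
        """Two-parameter Weibull with the principle of independent action (PIA):
           P_f = 1 - exp( - sum_over_elements V_e * sum_i (sigma_i / sigma0)^m )
        principal_stresses : (n_elements, 3) principal stresses from an FE model
        volumes            : element volumes
        m, sigma0          : Weibull modulus and scale parameter."""
        s = np.clip(np.asarray(principal_stresses), 0.0, None)   # only tensile stresses contribute
        risk = np.sum(volumes[:, None] * (s / sigma0) ** m)
        return 1.0 - np.exp(-risk)

    # hypothetical 3-element model (stresses in MPa, volumes in mm^3)
    stresses = np.array([[250.0, 40.0, 0.0], [180.0, 10.0, 0.0], [90.0, 0.0, 0.0]])
    vols = np.array([2.0, 3.0, 5.0])
    print(pia_failure_probability(stresses, vols, m=10.0, sigma0=400.0))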
Treatment of osteoporosis and hip fractures in a Spanish health area.
Briongos, L; Sañudo, S; García-Alonso, M; Ruiz-Mambrilla, M; Dueñas-Laita, A; Pérez-Castrillón, J L
2013-01-01
The aim of this longitudinal retrospective ecological study was to evaluate the consumption of anti-osteoporotic medications and the evolution of pertrochanteric and femoral neck (FN), subtrochanteric and diaphyseal hip fractures between 2005 and 2010. Data were obtained from our Hospital Admissions Service (absolute number of fractures) and the Technical Directorate of Pharmacy (defined daily dose and absolute number of containers consumed of bisphosphonates (BP), raloxifene and strontium ranelate). The overall incidence density of FN in 2005-2010 was 124.8 new cases per 100,000 persons per year. BP consumption increased between 2005 and 2010 to a peak of 70,452 containers consumed in 2010, while consumption of raloxifene declined. The number of subtrochanteric and diaphyseal fractures remained stable, but FN reached a peak in 2008 (N = 350) and fell thereafter (N = 284 in 2010). The percentage reduction in the number of FN in the period studied (2009: -14% and 2010: -11% compared to 2005) corresponds temporally with the increased consumption of BP (2009: +76% and 2010: +84% compared to 2005). We found an inverse temporal association between the annual consumption of BP and the annual number of FN during 2005-2010. This is probably related to the cumulative effect of BP, although, given the limitations of the study design, other studies are needed to confirm our data.
Statistics of the geomagnetic secular variation for the past 5 m.y
NASA Technical Reports Server (NTRS)
Constable, C. G.; Parker, R. L.
1988-01-01
A new statistical model is proposed for the geomagnetic secular variation over the past 5Ma. Unlike previous models, the model makes use of statistical characteristics of the present day geomagnetic field. The spatial power spectrum of the non-dipole field is consistent with a white source near the core-mantle boundary with Gaussian distribution. After a suitable scaling, the spherical harmonic coefficients may be regarded as statistical samples from a single giant Gaussian process; this is the model of the non-dipole field. The model can be combined with an arbitrary statistical description of the dipole and probability density functions and cumulative distribution functions can be computed for declination and inclination that would be observed at any site on Earth's surface. Global paleomagnetic data spanning the past 5Ma are used to constrain the statistics of the dipole part of the field. A simple model is found to be consistent with the available data. An advantage of specifying the model in terms of the spherical harmonic coefficients is that it is a complete statistical description of the geomagnetic field, enabling us to test specific properties for a general description. Both intensity and directional data distributions may be tested to see if they satisfy the expected model distributions.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
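A highly simplified Monte Carlo sketch of the single-sensor idea: simulate received SNR with a passive sonar equation, average a detector response to get the detection probability, then convert detected clicks to animal density; every distribution and number below is a placeholder, not the published inputs:

    import numpy as np

    rng = np.random.default_rng(6)

    def detection_probability(n=100000, max_range_m=6500.0):
        """Mean probability of detecting a click at a single hydrophone,
        estimated by Monte Carlo over placeholder input distributions."""
        r = max_range_m * np.sqrt(rng.random(n))            # ranges uniform over the monitored disc
        sl = rng.normal(200.0, 8.0, n)                      # source level, dB re 1 uPa (hypothetical)
        off_axis_loss = rng.uniform(0.0, 30.0, n)           # crude stand-in for the beam pattern
        tl = 20.0 * np.log10(np.maximum(r, 1.0))            # spherical-spreading transmission loss
        nl = 50.0                                           # noise level, dB (hypothetical)
        snr = sl - off_axis_loss - tl - nl
        p_det = 1.0 / (1.0 + np.exp(-(snr - 10.0)))         # hypothetical detector characterization
        return p_det.mean()

    P = detection_probability()
    n_clicks, T, w = 2400, 6 * 24 * 3600.0, 6500.0          # detected clicks, monitoring time (s), truncation range (m)
    call_rate, false_pos = 0.5, 0.2                          # clicks per whale per second; false-positive fraction
    density = n_clicks * (1.0 - false_pos) / (np.pi * w**2 * P * T * call_rate)
    print(P, density * 1e6, "whales per km^2")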
NASA Astrophysics Data System (ADS)
Jambrina, P. G.; Lara, Manuel; Menéndez, M.; Launay, J.-M.; Aoiz, F. J.
2012-10-01
Cumulative reaction probabilities (CRPs) at various total angular momenta have been calculated for the barrierless reaction S(1D) + H2 → SH + H at total energies up to 1.2 eV using three different theoretical approaches: time-independent quantum mechanics (QM), quasiclassical trajectories (QCT), and statistical quasiclassical trajectories (SQCT). The calculations have been carried out on the widely used potential energy surface (PES) by Ho et al. [J. Chem. Phys. 116, 4124 (2002), 10.1063/1.1431280] as well as on the recent PES developed by Song et al. [J. Phys. Chem. A 113, 9213 (2009), 10.1021/jp903790h]. The results show that the differences between these two PES are relatively minor and mostly related to the different topologies of the well. In addition, the agreement between the three theoretical methodologies is good, even for the highest total angular momenta and energies. In particular, the good accordance between the CRPs obtained with dynamical methods (QM and QCT) and the statistical model (SQCT) indicates that the reaction can be considered statistical in the whole range of energies in contrast with the findings for other prototypical barrierless reactions. In addition, total CRPs and rate coefficients in the range of 20-1000 K have been calculated using the QCT and SQCT methods and have been found somewhat smaller than the experimental total removal rates of S(1D).
Zecca, Giovanni; Minuto, Luigi
2016-01-01
Quaternary glaciations, and mostly the last glacial maximum, have shaped the contemporary distribution of many species in the Alps. However, in the Maritime and Ligurian Alps a more complex picture is suggested by the presence of many Tertiary paleoendemisms and by the divergence time between lineages in one endemic species predating the Late Pleistocene glaciation. The low number of endemic species studied limits the understanding of the processes that took place within this region. We used species distribution models and phylogeographical methods to infer glacial refugia and to reconstruct the phylogeographical pattern of Silene cordifolia All. and Viola argenteria Moraldo & Forneris. The predicted suitable area for the last glacial maximum roughly fitted the current known distribution. Our results suggest that the separation of the major clades predates the last glacial maximum and that the following repeated glacial and interglacial periods probably drove differentiation. The complex phylogeographical pattern observed in the study species suggests that extinction of both populations and genotypes was minimal during the last glacial maximum, probably due to the low impact of glaciations and to topographic complexity in this area. This study underlines the importance of the cumulative effect of previous glacial cycles in shaping the genetic structure of plant species in the Maritime and Ligurian Alps, as expected for a Mediterranean mountain region more than for an Alpine region. PMID:27870888
Reoccupation of floodplains by rivers and its relation to the age structure of floodplain vegetation
Konrad, Christopher P.
2012-01-01
River channel dynamics over many decades provide a physical control on the age structure of floodplain vegetation as a river occupies and abandons locations. Floodplain reoccupation by a river, in particular, determines the interval of time during which vegetation can establish and mature. A general framework for analyzing floodplain reoccupation and a time series model are developed and applied to five alluvial rivers in the United States. Channel dynamics in these rivers demonstrate time-scale dependence with short-term oscillation in active channel area in response to floods and subsequent vegetation growth and progressive lateral movement that accounts for much of the cumulative area occupied by the rivers over decades. Rivers preferentially reoccupy locations recently abandoned causing a decreasing probability of reoccupation with time since abandonment. For a typical case, a river is 10 times more likely to reoccupy an area it abandoned in the past decade than it is to reoccupy an area it abandoned 30 yrs ago. The decreasing probability of reoccupation over time is consistent with observations of persistent stands of late seral stage floodplain forest. A power function provides a robust approach for estimating the cumulative area occupied by a river and the age structure of riparian forests resulting from a specific historical sequence of streamflow in comparison to either linear or exponential alternatives.
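A sketch of a power-function model for the declining reoccupation probability with time since abandonment; the parameters below are hypothetical and chosen only so the 10-year versus 30-year ratio roughly matches the 10-fold difference quoted in the abstract:

    import numpy as np

    def reoccupation_probability(t_years, p1=0.10, b=2.1):
        """Power-function model P(t) = p1 * t**(-b) for the probability that a
        river reoccupies an area it abandoned t years ago.  p1 and b are
        hypothetical; with b ~ 2.1, P(10)/P(30) is roughly 10."""
        t = np.asarray(t_years, float)
        return np.minimum(1.0, p1 * t ** (-b))

    print(reoccupation_probability([5, 10, 30]))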
Bolch, Charlotte A; Chu, Haitao; Jarosek, Stephanie; Cole, Stephen R; Elliott, Sean; Virnig, Beth
2017-07-10
To illustrate the 10-year risks of urinary adverse events (UAEs) among men diagnosed with prostate cancer and treated with different types of therapy, accounting for the competing risk of death. Prostate cancer is the second most common malignancy among adult males in the United States. Few studies have reported the long-term post-treatment risk of UAEs and those that have, have not appropriately accounted for competing deaths. This paper conducts an inverse probability of treatment (IPT) weighted competing risks analysis to estimate the effects of different prostate cancer treatments on the risk of UAE, using a matched-cohort of prostate cancer/non-cancer control patients from the Surveillance, Epidemiology and End Results (SEER) Medicare database. Study dataset included men age 66 years or older that are 83% white and had a median follow-up time of 4.14 years. Patients that underwent combination radical prostatectomy and external beam radiotherapy experienced the highest risk of UAE (IPT-weighted competing risks: HR 3.65 with 95% CI (3.28, 4.07); 10-yr. cumulative incidence = 36.5%). Findings suggest that IPT-weighted competing risks analysis provides an accurate estimator of the cumulative incidence of UAE taking into account the competing deaths as well as measured confounding bias.
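A rough sketch of the two ingredients described above, stabilized inverse-probability-of-treatment weights from a simple logistic propensity model and a weighted Aalen-Johansen cumulative incidence with death as a competing risk; all data and coefficients below are synthetic, not the SEER-Medicare analysis:

    import numpy as np

    def iptw(treated, covariates):
        """Stabilized IPT weights from a logistic propensity model fitted by
        Newton (IRLS) iterations; a minimal stand-in for the study's model."""
        X = np.column_stack([np.ones(len(treated)), covariates])
        b = np.zeros(X.shape[1])
        for _ in range(25):
            p = 1.0 / (1.0 + np.exp(-X @ b))
            W = p * (1.0 - p)
            b += np.linalg.solve(X.T @ (W[:, None] * X) + 1e-8 * np.eye(X.shape[1]),
                                 X.T @ (treated - p))
        p = 1.0 / (1.0 + np.exp(-X @ b))
        pt = treated.mean()
        return np.where(treated == 1, pt / p, (1.0 - pt) / (1.0 - p))

    def weighted_cumulative_incidence(time, event, weights, horizon):
        """Weighted Aalen-Johansen cumulative incidence of event type 1 (e.g. UAE)
        with event type 2 (death) as a competing risk; 0 denotes censoring."""
        order = np.argsort(time)
        time, event, weights = time[order], event[order], weights[order]
        at_risk, surv, cif = weights.sum(), 1.0, 0.0
        for t, e, w in zip(time, event, weights):
            if t > horizon:
                break
            if e == 1:
                cif += surv * w / at_risk
            if e > 0:
                surv *= 1.0 - w / at_risk
            at_risk -= w
        return cif

    # synthetic cohort: covariate = age, treatment = 1 for the combined-therapy group
    rng = np.random.default_rng(7)
    n = 2000
    age = rng.normal(72.0, 5.0, n)
    trt = (rng.random(n) < 1.0 / (1.0 + np.exp(-(age - 72.0) / 5.0))).astype(int)
    time = rng.exponential(12.0 - 3.0 * trt, n)
    event = np.where(time < 10.0, np.where(rng.random(n) < 0.7, 1, 2), 0)
    w = iptw(trt, age[:, None])
    m = trt == 1
    print(weighted_cumulative_incidence(time[m], event[m], w[m], horizon=10.0))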
Oak regeneration and overstory density in the Missouri Ozarks
David R. Larsen; Monte A. Metzger
1997-01-01
Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed for numerous physical systems. In such systems statistical properties [like probability distribution, mean square displacement (MSD), first-passage time] depend on a time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe a different behavior of the MSD when t_a ≪ t and when t_a ≫ t.
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting treatment with large effects is 10% (5-25%), and that the probability of detecting treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
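A sketch of the idea, with a fixed-bandwidth weighted Gaussian KDE standing in for the weighted adaptive KDE used in the paper; the treatment effects and weights below are synthetic, not the cooperative-group trial data:

    import numpy as np

    def weighted_gaussian_kde(x, samples, weights, bandwidth):
        """Weighted Gaussian kernel density estimate evaluated at points x."""
        x = np.atleast_1d(x)[:, None]
        w = np.asarray(weights, float)
        w = w / w.sum()
        k = np.exp(-0.5 * ((x - samples) / bandwidth) ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
        return k @ w

    rng = np.random.default_rng(8)
    effects = rng.normal(-0.05, 0.15, 800)            # hypothetical observed effects (log hazard ratios)
    weights = 1.0 / rng.uniform(0.01, 0.05, 800)      # hypothetical inverse-variance weights

    grid = np.linspace(-1.0, 1.0, 2001)
    pdf = weighted_gaussian_kde(grid, effects, weights, bandwidth=0.05)
    # probability of a "large" beneficial effect, e.g. log HR below log(0.7)
    p_large = np.sum(pdf[grid < np.log(0.7)]) * (grid[1] - grid[0])
    print(round(p_large, 3))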
On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1995-01-01
For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
Propensity, Probability, and Quantum Theory
NASA Astrophysics Data System (ADS)
Ballentine, Leslie E.
2016-08-01
Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
Orbital State Uncertainty Realism
NASA Astrophysics Data System (ADS)
Horwood, J.; Poore, A. B.
2012-09-01
Fundamental to the success of the space situational awareness (SSA) mission is the rigorous inclusion of uncertainty in the space surveillance network. The *proper characterization of uncertainty* in the orbital state of a space object is a common requirement to many SSA functions including tracking and data association, resolution of uncorrelated tracks (UCTs), conjunction analysis and probability of collision, sensor resource management, and anomaly detection. While tracking environments, such as air and missile defense, make extensive use of Gaussian and local linearity assumptions within algorithms for uncertainty management, space surveillance is inherently different due to long time gaps between updates, high misdetection rates, nonlinear and non-conservative dynamics, and non-Gaussian phenomena. The latter implies that "covariance realism" is not always sufficient. SSA also requires "uncertainty realism"; the proper characterization of both the state and covariance and all non-zero higher-order cumulants. In other words, a proper characterization of a space object's full state *probability density function (PDF)* is required. In order to provide a more statistically rigorous treatment of uncertainty in the space surveillance tracking environment and to better support the aforementioned SSA functions, a new class of multivariate PDFs are formulated which more accurately characterize the uncertainty of a space object's state or orbit. The new distribution contains a parameter set controlling the higher-order cumulants which gives the level sets a distinctive "banana" or "boomerang" shape and degenerates to a Gaussian in a suitable limit. Using the new class of PDFs within the general Bayesian nonlinear filter, the resulting filter prediction step (i.e., uncertainty propagation) is shown to have the *same computational cost as the traditional unscented Kalman filter* with the former able to maintain a proper characterization of the uncertainty for up to *ten times as long* as the latter. The filter correction step also furnishes a statistically rigorous *prediction error* which appears in the likelihood ratios for scoring the association of one report or observation to another. Thus, the new filter can be used to support multi-target tracking within a general multiple hypothesis tracking framework. Additionally, the new distribution admits a distance metric which extends the classical Mahalanobis distance (chi^2 statistic). This metric provides a test for statistical significance and facilitates single-frame data association methods with the potential to easily extend the covariance-based track association algorithm of Hill, Sabol, and Alfriend. The filtering, data fusion, and association methods using the new class of orbital state PDFs are shown to be mathematically tractable and operationally viable.
Demidenko, Eugene
2017-09-01
The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are evaluated here for a real atomic system and are therefore of potential interest to random-walk theorists, who have so far been limited to idealized systems characterized by simplified transition schemes.
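The 'taboo' probabilities referred to here are standard objects of Markov chain theory: probabilities of moving between states while avoiding a forbidden (taboo) set. A minimal numerical sketch for a toy four-state chain (the transition matrix is an arbitrary assumption, not the Fe XV level scheme) could be:

```python
import numpy as np

def taboo_hitting_probability(P, start, target, taboo):
    """Probability that a Markov chain started at `start` reaches `target`
    before visiting any state in the taboo set.

    Solved from the linear system h_i = sum_k P[i, k] * h_k over states that
    are neither the target (h = 1) nor taboo (h = 0)."""
    n = P.shape[0]
    taboo = set(taboo)
    free = [s for s in range(n) if s != target and s not in taboo]
    idx = {s: r for r, s in enumerate(free)}
    A = np.eye(len(free))
    b = np.zeros(len(free))
    for s in free:
        for k in range(n):
            if k == target:
                b[idx[s]] += P[s, k]
            elif k not in taboo:
                A[idx[s], idx[k]] -= P[s, k]
    h = np.linalg.solve(A, b)
    if start == target:
        return 1.0
    return 0.0 if start in taboo else h[idx[start]]

# Toy 4-state chain; rows of P sum to one (values are purely illustrative).
P = np.array([[0.10, 0.60, 0.20, 0.10],
              [0.30, 0.10, 0.40, 0.20],
              [0.20, 0.30, 0.10, 0.40],
              [0.25, 0.25, 0.25, 0.25]])
print(taboo_hitting_probability(P, start=0, target=3, taboo={2}))
```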
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
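As a reminder of how conditional and marginal densities follow from a joint density, the sketch below integrates a stand-in joint PDF of normalized elevation and slope over one variable. A correlated bivariate Gaussian is used purely as a placeholder; the paper's non-Gaussian form is not reproduced here.

```python
import numpy as np

def joint_pdf(eta, s, rho=0.4):
    """Stand-in joint density of normalized elevation eta and slope s
    (a correlated bivariate Gaussian used only as a placeholder)."""
    norm = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho ** 2))
    q = (eta ** 2 - 2.0 * rho * eta * s + s ** 2) / (1.0 - rho ** 2)
    return norm * np.exp(-0.5 * q)

eta = np.linspace(-4.0, 4.0, 401)        # normalized elevation grid
s = np.linspace(-4.0, 4.0, 401)          # normalized slope grid
E, S = np.meshgrid(eta, s, indexing="ij")
J = joint_pdf(E, S)
deta, ds = eta[1] - eta[0], s[1] - s[0]

# Marginal density of elevation: integrate the joint density over slope.
marginal_eta = J.sum(axis=1) * ds

# Conditional density of slope at a fixed elevation: joint / marginal.
i0 = np.argmin(np.abs(eta - 1.0))
conditional_s = J[i0, :] / marginal_eta[i0]

print("marginal integrates to", marginal_eta.sum() * deta)
print("conditional integrates to", conditional_s.sum() * ds)
```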
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-power wavelength-division-multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing (FWM) induced distortion. The multicanonical Monte Carlo (MCMC) method is used to calculate the probability density function (PDF) of the decision variable of a receiver limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
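The multicanonical idea is to sample with iteratively adjusted weights so that rarely visited values of the decision variable are visited about as often as common ones, and then to reweight the resulting flat histogram back to the physical PDF. A heavily simplified sketch (a standard normal stands in for the FWM-limited decision variable, and all tuning constants are assumptions) is:

```python
import numpy as np

rng = np.random.default_rng(1)

# A standard normal stands in for the FWM-noise-limited decision variable.
def log_target(q):
    return -0.5 * q * q

bins = np.linspace(-8.0, 8.0, 81)
log_w = np.zeros(bins.size - 1)                 # multicanonical weights (log domain)

def bin_of(q):
    return int(np.clip(np.digitize(q, bins) - 1, 0, bins.size - 2))

for _ in range(8):                              # iteratively flatten the histogram
    hist = np.zeros(bins.size - 1)
    q = 0.0
    for _ in range(20000):
        prop = q + rng.normal()
        if -8.0 < prop < 8.0:
            d_log = (log_target(prop) + log_w[bin_of(prop)]
                     - log_target(q) - log_w[bin_of(q)])
            if np.log(rng.random()) < d_log:    # Metropolis acceptance
                q = prop
        hist[bin_of(q)] += 1
    # Counts are proportional to pdf(q) * exp(log_w(q)), so the (unnormalized)
    # log-PDF estimate is log(hist) - log_w ...
    log_pdf = np.log(np.maximum(hist, 1.0)) - log_w
    # ... and the next pass uses weights ~ 1/pdf so that rare bins get visited.
    log_w = -log_pdf

centers = 0.5 * (bins[:-1] + bins[1:])
log10_pdf = (log_pdf - log_pdf.max()) / np.log(10.0)
print("estimated log10 PDF (relative to peak) at q = 6:",
      log10_pdf[np.argmin(np.abs(centers - 6.0))])
```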
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from fibrotic liver tissue, has been proposed. In this paper, the effect of non-speckle echo signals on the tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were identified and removed using the modeling error of the multi-Rayleigh model. With the non-speckle signals removed, the correct tissue characteristics of fibrotic tissue could be estimated.
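A multi-Rayleigh model expresses the echo-amplitude PDF as a mixture of Rayleigh components, and a per-bin modeling error against the observed amplitude histogram is one way to flag non-speckle contributions. A minimal sketch with assumed component weights and scales (not the paper's fitting procedure) might be:

```python
import numpy as np

def rayleigh_pdf(a, scale):
    """Rayleigh PDF of echo amplitude a with the given scale parameter."""
    return (a / scale ** 2) * np.exp(-a ** 2 / (2.0 * scale ** 2))

def multi_rayleigh_pdf(a, weights, scales):
    """Mixture of Rayleigh components: a simple multi-Rayleigh model."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * rayleigh_pdf(a, s) for w, s in zip(weights, scales))

# Synthetic amplitudes from a two-component mixture (assumed parameters).
rng = np.random.default_rng(2)
amps = np.concatenate([rng.rayleigh(1.0, 8000), rng.rayleigh(2.5, 2000)])

# Modeling error: per-bin discrepancy between model and amplitude histogram;
# unusually large errors would flag candidate non-speckle contributions.
hist, edges = np.histogram(amps, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
model = multi_rayleigh_pdf(centers, weights=[0.8, 0.2], scales=[1.0, 2.5])
print("mean squared modeling error:", np.mean((hist - model) ** 2))
```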
The cumulative effect of consecutive winters' snow depth on moose and deer populations: a defence
McRoberts, R.E.; Mech, L.D.; Peterson, R.O.
1995-01-01
1. L. D. Mech et al. presented evidence that moose Alces alces and deer Odocoileus virginianus population parameters are influenced by a cumulative effect of three winters' snow depth. They postulated that snow depth affects adult ungulates cumulatively from winter to winter and results in measurable offspring effects after the third winter. 2. F. Messier challenged those findings and claimed that the population parameters studied were instead affected by ungulate density and wolf indices. 3. This paper refutes Messier's claims by demonstrating that his results were an artifact of two methodological errors. The first was that, in his main analyses, Messier used only the first previous winter's snow depth rather than the sum of the previous three winters' snow depth, which was the primary point of Mech et al. The second was that Messier smoothed the ungulate population data, which removed 22-51% of the variability from the raw data. 4. When we repeated Messier's analyses on the raw data and using the sum of the previous three winters' snow depth, his findings did not hold up.
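For clarity, the covariate at issue is the sum of the previous three winters' snow depths rather than a single winter's depth; a tiny sketch with purely hypothetical snow-depth values shows how that rolling sum is formed:

```python
import numpy as np

# Hypothetical annual maximum snow depths (cm); values are illustrative only.
snow = np.array([45.0, 60.0, 30.0, 75.0, 90.0, 50.0, 40.0, 85.0, 70.0, 55.0])

# Covariate of Mech et al.: the sum of the previous three winters' snow depths,
# available from the fourth winter onward.
prev3_sum = np.array([snow[i - 3:i].sum() for i in range(3, snow.size)])
print(prev3_sum)
```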