Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
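To make the maximum entropy side of the abstract above concrete, here is a minimal sketch (Python; illustrative names, not Kinney's published software) of a maximum entropy density estimate constrained to match the first two sample moments. The estimate has the exponential-family form p(x) ∝ exp(λ1·x + λ2·x²), and the multipliers are found by minimizing the convex dual objective log Z(λ) − λ·m:

    import numpy as np
    from scipy.optimize import minimize

    def maxent_density(samples, grid):
        """Max-entropy density on `grid` matching the sample's first two moments."""
        m1, m2 = samples.mean(), (samples ** 2).mean()
        dx = grid[1] - grid[0]

        def dual(lam):
            # Convex dual: log partition function minus the constraint terms.
            logz = np.log(np.sum(np.exp(lam[0] * grid + lam[1] * grid ** 2)) * dx)
            return logz - lam[0] * m1 - lam[1] * m2

        lam = minimize(dual, x0=[0.0, -0.5]).x
        p = np.exp(lam[0] * grid + lam[1] * grid ** 2)
        return p / (np.sum(p) * dx)

    rng = np.random.default_rng(0)
    samples = rng.normal(1.0, 2.0, size=500)
    grid = np.linspace(-9.0, 11.0, 400)
    density = maxent_density(samples, grid)  # close to the generating Gaussian

With only mean and variance constrained, the estimate is approximately Gaussian; Kinney's point is that Bayesian field theory generalizes this by relaxing the infinite-smoothness assumption.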
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Statistical hypothesis tests of some micrometeorological observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
SethuRaman, S.; Tichler, J.
Chi-square goodness-of-fit is used to test the hypothesis that the medium scale of turbulence in the atmospheric surface layer is normally distributed. Coefficients of skewness and excess are computed from the data. If the data are not normal, these coefficients are used in Edgeworth's asymptotic expansion of the Gram-Charlier series to determine an alternate probability density function. The observed data are then compared with the modified probability densities and the new chi-square values computed. Seventy percent of the data analyzed was either normal or approximately normal. The coefficient of skewness g₁ has a good correlation with the chi-square values. Events with |g₁| < 0.21 were normal to begin with and those with 0.21 ...
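As a hedged illustration of the procedure sketched in this abstract (assumed details: ten equiprobable bins and a normal distribution fitted by its sample mean and standard deviation; this is not the authors' code), the chi-square goodness-of-fit test of normality and the skewness/excess coefficients could be computed as:

    import numpy as np
    from scipy import stats

    data = np.random.default_rng(1).normal(size=1000)  # stand-in for the turbulence data

    g1 = stats.skew(data)      # coefficient of skewness
    g2 = stats.kurtosis(data)  # coefficient of excess (kurtosis - 3)

    # Ten equiprobable bins; expected counts from the fitted normal.
    edges = np.quantile(data, np.linspace(0.0, 1.0, 11))
    observed, _ = np.histogram(data, bins=edges)
    cdf = stats.norm(data.mean(), data.std(ddof=1)).cdf(edges)
    expected = len(data) * np.diff(cdf)

    chi2 = np.sum((observed - expected) ** 2 / expected)
    p_value = stats.chi2.sf(chi2, df=10 - 1 - 2)  # 2 parameters were estimated

A large chi-square (small p) would trigger the Gram-Charlier correction step described above.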
Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F Landis
2014-01-01
This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
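The sequential test at the heart of this approach is Wald's classic SPRT; the following is a generic textbook sketch (not the paper's conjunction-assessment formulation; the Gaussian densities are placeholder hypotheses):

    import math

    def sprt(observations, pdf_h0, pdf_h1, alpha=0.05, beta=0.05):
        """Wald SPRT: returns a decision and the number of samples consumed."""
        upper = math.log((1 - beta) / alpha)  # cross above: accept H1
        lower = math.log(beta / (1 - alpha))  # cross below: accept H0
        llr = 0.0                             # running log-likelihood ratio
        for k, x in enumerate(observations, start=1):
            llr += math.log(pdf_h1(x)) - math.log(pdf_h0(x))
            if llr >= upper:
                return "accept H1", k
            if llr <= lower:
                return "accept H0", k
        return "undecided", len(observations)

    # Example: H0 says the data are N(0, 1); H1 says they are N(1, 1).
    gauss = lambda mu: (lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi))
    decision, n_used = sprt([0.9, 1.3, 0.7, 1.1, 1.4, 0.8], gauss(0.0), gauss(1.0))

The thresholds depend only on the targeted false-alarm rate alpha and missed-detection rate beta, which is what lets the paper's test satisfy those tolerances independently of how the collision probability density is computed.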
Hypotheses to explain the origin of species in Amazonia.
Haffer, J
2008-11-01
The main hypotheses proposed to explain barrier formation separating populations and causing the differentiation of species in Amazonia during the course of geological history are based on different factors, as follows: (1) changes in the distribution of land and sea or in the landscape due to tectonic movements or sea level fluctuations (Paleogeography hypothesis); (2) the barrier effect of Amazonian rivers (River hypothesis); (3) a combination of the barrier effect of broad rivers and vegetational changes in northern and southern Amazonia (River-refuge hypothesis); (4) the isolation of humid rainforest blocks near areas of surface relief in the periphery of Amazonia, separated by dry forests, savannas and other intermediate vegetation types during dry climatic periods of the Tertiary and Quaternary (Refuge hypothesis); (5) changes in canopy density due to climatic reversals (Canopy-density hypothesis); (6) the isolation and speciation of animal populations in small montane habitat pockets around Amazonia due to climatic fluctuations without major vegetational changes (Museum hypothesis); (7) competitive species interactions and local species isolations in peripheral regions of Amazonia due to invasion and counterinvasion during cold/warm periods of the Pleistocene (Disturbance-vicariance hypothesis); and (8) parapatric speciation across steep environmental gradients without separation of the respective populations (Gradient hypothesis). Several of these hypotheses are probably relevant to different degrees for the speciation processes in different faunal groups or during different geological periods.
The basic paleogeography model refers mainly to faunal differentiation during the Tertiary, in combination with the Refuge hypothesis. Milankovitch cycles leading to global climatic-vegetational changes affected the biomes of the world not only during the Pleistocene but also during the Tertiary and earlier geological periods. New geoscientific evidence for the effect of dry climatic periods in Amazonia supports the predictions of the Refuge hypothesis. The disturbance-vicariance hypothesis refers to the presumed effect of cold/warm climatic phases of the Pleistocene only and is of limited general relevance, because most extant species originated earlier, probably through paleogeographic changes and the formation of ecological refuges during the Tertiary.
Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words
Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.
2012-01-01
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
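Test (i) in this abstract, comparing the number of actual earthquakes with the number predicted, reduces to a Poisson consistency check when the forecast specifies a rate; a minimal sketch with illustrative numbers:

    from scipy import stats

    n_predicted = 12.4  # expected number of events under the forecast (illustrative)
    n_observed = 19     # events actually recorded in the test period

    # Two-sided consistency test of the observed count under a Poisson null.
    p_high = stats.poisson.sf(n_observed - 1, n_predicted)  # P(N >= 19)
    p_low = stats.poisson.cdf(n_observed, n_predicted)      # P(N <= 19)
    reject_forecast = min(p_high, p_low) < 0.025            # 5% two-sided test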
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers
Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin
2018-01-01
In multi-target tracking, outliers-corrupted process and measurement noises can severely reduce the performance of the probability hypothesis density (PHD) filter. To solve this problem, this paper proposes a novel PHD filter, called the Student's t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process noise and measurement noise as a Student's t distribution and approximates the multi-target intensity as a mixture of Student's t components to be propagated in time. A closed-form PHD recursion is then obtained based on the Student's t approximation. Our approach makes full use of the heavy-tailed characteristic of the Student's t distribution to handle situations with heavy-tailed process and measurement noises. The simulation results verify that the proposed filter can overcome the negative effect generated by outliers and maintain good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348
Density-dependent natal dispersal patterns in a leopard population recovering from over-harvest.
Fattebert, Julien; Balme, Guy; Dickerson, Tristan; Slotow, Rob; Hunter, Luke
2015-01-01
Natal dispersal enables population connectivity, gene flow and metapopulation dynamics. In polygynous mammals, dispersal is typically male-biased. Classically, the 'mate competition', 'resource competition' and 'resident fitness' hypotheses predict density-dependent dispersal patterns, while the 'inbreeding avoidance' hypothesis posits density-independent dispersal. In a leopard (Panthera pardus) population recovering from over-harvest, we investigated the effect of sex, population density and prey biomass, on age of natal dispersal, distance dispersed, probability of emigration and dispersal success. Over an 11-year period, we tracked 35 subadult leopards using VHF and GPS telemetry. Subadult leopards initiated dispersal at 13.6 ± 0.4 months. Age at commencement of dispersal was positively density-dependent. Although males (11.0 ± 2.5 km) generally dispersed further than females (2.7 ± 0.4 km), some males exhibited opportunistic philopatry when the population was below capacity. All 13 females were philopatric, while 12 of 22 males emigrated. Male dispersal distance and emigration probability followed a quadratic relationship with population density, whereas female dispersal distance was inversely density-dependent. Eight of 12 known-fate females and 5 of 12 known-fate male leopards were successful in settling. Dispersal success did not vary with population density, prey biomass, and for males, neither between dispersal strategies (philopatry vs. emigration). Females formed matrilineal kin clusters, supporting the resident fitness hypothesis. Conversely, mate competition appeared the main driver for male leopard dispersal. We demonstrate that dispersal patterns changed over time, i.e. as the leopard population density increased. We conclude that conservation interventions that facilitated local demographic recovery in the study area also restored dispersal patterns disrupted by unsustainable harvesting, and that this indirectly improved connectivity among leopard populations over a larger landscape.
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts.
Can the source–sink hypothesis explain macrofaunal abundance patterns in the abyss? A modelling test
Hardy, Sarah M.; Smith, Craig R.; Thurnherr, Andreas M.
2015-01-01
Low food availability is a major structuring force in deep-sea benthic communities, sustaining only very low densities of organisms in parts of the abyss. These low population densities may result in an Allee effect, whereby local reproductive success is inhibited, and populations are maintained by larval dispersal from bathyal slopes. This slope-abyss source-sink (SASS) hypothesis suggests that the abyssal seafloor constitutes a vast sink habitat with macrofaunal populations sustained only by an influx of larval 'refugees' from source areas on continental slopes, where higher productivity sustains greater population densities. Abyssal macrofaunal population densities would thus be directly related to larval inputs from bathyal source populations. We evaluate three predictions derived from the SASS hypothesis: (i) slope-derived larvae can be passively transported to central abyssal regions within a single larval period, (ii) projected larval export from slopes to the abyss reproduces global patterns of macrofaunal abundance and (iii) macrofaunal abundance decreases with distance from the continental slope. We find that abyssal macrofaunal populations are unlikely to be sustained solely through influx of larvae from slope sources. Rather, local reproduction probably sustains macrofaunal populations in relatively high-productivity abyssal areas, which must also be considered as potential larval source areas for more food-poor abyssal regions. PMID:25948686
On the alleged collisional origin of the Kirkwood Gaps [in asteroid belt]
NASA Technical Reports Server (NTRS)
Heppenheimer, T. A.
1975-01-01
This paper examines two proposed mechanisms whereby asteroidal collisions and close approaches may have given rise to the Kirkwood Gaps. The first hypothesis is that asteroids in near-resonant orbits have markedly increased collision probabilities and so are preferentially destroyed, or suffer decay in population density, within the resonance zones. A simple order-of-magnitude analysis shows that this hypothesis is untenable since it leads to conclusions which are either unrealistic or not in accord with present understanding of asteroidal physics. The second hypothesis is the Brouwer-Jefferys theory that collisions would smooth an asteroidal distribution function, as a function of Jacobi constant, thus forming resonance gaps. This hypothesis is examined by direct numerical integration of 50 asteroid orbits near the 2:1 resonance, with collisions simulated by random variables. No tendency to form a gap was observed.
Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.
1985-01-01
Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for a binary outcome was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect way to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test of significance. Both methods are applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. Simulations on independent benchmark data indicate that the test statistic based on the Hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
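The Monte Carlo test of significance mentioned above works the same way for either probability model: the observed test statistic is ranked against statistics recomputed on data simulated under the null. A simplified sketch (single-region windows and hypergeometric scoring; illustrative data, not the paper's method in full):

    import numpy as np
    from scipy import stats

    def scan_statistic(cases, pops, total_cases, total_pop):
        # Score each candidate window (here: each region) under the
        # hypergeometric null; the statistic is the most extreme score.
        scores = [-stats.hypergeom.logpmf(c, total_pop, total_cases, p)
                  for c, p in zip(cases, pops)]
        return max(scores)

    rng = np.random.default_rng(7)
    pops = np.array([500, 800, 300, 1200, 700])
    cases = np.array([12, 9, 20, 14, 10])  # region 3 looks elevated
    T_obs = scan_statistic(cases, pops, cases.sum(), pops.sum())

    # Monte Carlo: redistribute the observed cases across regions by population.
    T_null = [scan_statistic(rng.multivariate_hypergeometric(pops, cases.sum()),
                             pops, cases.sum(), pops.sum())
              for _ in range(999)]
    p_value = (1 + sum(t >= T_obs for t in T_null)) / (1 + len(T_null))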
NASA Astrophysics Data System (ADS)
Koch, Wolfgang
1996-05-01
Sensor data processing in a dense target/dense clutter environment is inevitably confronted with data association conflicts which correspond with the multiple hypothesis character of many modern approaches (MHT: multiple hypothesis tracking). In this paper we analyze the efficiency of retrodictive techniques that generalize standard fixed interval smoothing to MHT applications. 'Delayed estimation' based on retrodiction provides uniquely interpretable and accurate trajectories from ambiguous MHT output if a certain time delay is tolerated. In a Bayesian framework the theoretical background of retrodiction and its intimate relation to Bayesian MHT is sketched. By a simulated example with two closely-spaced targets, relatively low detection probabilities, and rather high false return densities, we demonstrate the benefits of retrodiction and quantitatively discuss the achievable track accuracies and the time delays involved for typical radar parameters.
Autonomous detection of crowd anomalies in multiple-camera surveillance feeds
NASA Astrophysics Data System (ADS)
Nordlöf, Jonas; Andersson, Maria
2016-10-01
A novel approach for autonomous detection of anomalies in crowded environments is presented in this paper. The proposed model uses a Gaussian mixture probability hypothesis density (GM-PHD) filter as a feature extractor in conjunction with different Gaussian mixture hidden Markov models (GM-HMMs). Results, based on both simulated and recorded data, indicate that this method can track and detect anomalies on-line in individual crowds through multiple camera feeds in a crowded environment.
The role of parasites in the dynamics of a reindeer population.
Albon, S D; Stien, A; Irvine, R J; Langvatn, R; Ropstad, E; Halvorsen, O
2002-01-01
Even though theoretical models show that parasites may regulate host population densities, few empirical studies have given support to this hypothesis. We present experimental and observational evidence for a host-parasite interaction where the parasite has sufficient impact on host population dynamics for regulation to occur. During a six year study of the Svalbard reindeer and its parasitic gastrointestinal nematode Ostertagia gruehneri we found that anthelminthic treatment in April-May increased the probability of a reindeer having a calf in the next year, compared with untreated controls. However, treatment did not influence the over-winter survival of the reindeer. The annual variation in the degree to which parasites depressed fecundity was positively related to the abundance of O. gruehneri infection the previous October, which in turn was related to host density two years earlier. In addition to the treatment effect, there was a strong negative effect of winter precipitation on the probability of female reindeer having a calf. A simple matrix model was parameterized using estimates from our experimental and observational data. This model shows that the parasite-mediated effect on fecundity was sufficient to regulate reindeer densities around observed host densities. PMID:12184833
Aberson, M J R; Bolam, S G; Hughes, R G
2016-04-15
Stable isotope analyses of the abundant infaunal polychaete Hediste diversicolor, recognised as an indicator of sewage pollution, support the hypothesis that nutrient enrichment promotes surface deposit feeding, over suspension feeding and predation. At sewage-polluted sites in three estuaries in SE England Hediste mainly consumed microphytobenthos, sediment organic matter and filamentous macroalgae Ulva spp. At cleaner sites Hediste relied more on suspension feeding and consumption of Spartina anglica. There were no consistent differences in Hediste densities between the polluted and cleaner sites, probably because of increased densities at the cleaner sites too, facilitated by the planting of Spartina and nitrogen enrichment there too, including from agricultural run-off. Increased nutrient enrichment and the artificial availability of Spartina have probably increased densities of, and deposit-feeding by, Hediste in the past half-century and contributed indirectly to saltmarsh losses, since deposit-feeding by Hediste has been implicated in recent saltmarsh erosion in SE England. Copyright © 2016 Elsevier Ltd. All rights reserved.
Probabilistic objective functions for sensor management
NASA Astrophysics Data System (ADS)
Mahler, Ronald P. S.; Zajic, Tim R.
2004-08-01
This paper continues the investigation of a foundational and yet potentially practical basis for control-theoretic sensor management, using a comprehensive, intuitive, system-level Bayesian paradigm based on finite-set statistics (FISST). In this paper we report our most recent progress, focusing on multistep look-ahead -- i.e., allocation of sensor resources throughout an entire future time-window. We determine future sensor states in the time-window using a "probabilistically natural" sensor management objective function, the posterior expected number of targets (PENT). This objective function is constructed using a new "maxi-PIMS" optimization strategy that hedges against unknowable future observation-collections. PENT is used in conjunction with approximate multitarget filters: the probability hypothesis density (PHD) filter or the multi-hypothesis correlator (MHC) filter.
Bayes factor and posterior probability: Complementary statistical evidence to p-value.
Lin, Ruitao; Yin, Guosheng
2015-09-01
As a convention, a p-value is often computed in hypothesis testing and compared with the nominal level of 0.05 to determine whether to reject the null hypothesis. Although the smaller the p-value, the more significant the statistical test, it is difficult to perceive the p-value on a probability scale and quantify it as the strength of the data against the null hypothesis. In contrast, the Bayesian posterior probability of the null hypothesis has an explicit interpretation of how strongly the data support the null. We make a comparison of the p-value and the posterior probability by considering a recent clinical trial. The results show that even when we reject the null hypothesis, there is still a substantial probability (around 20%) that the null is true. Not only should we examine whether the data would have rarely occurred under the null hypothesis, but we also need to know whether the data would be rare under the alternative. As a result, the p-value only provides one side of the information, for which the Bayes factor and posterior probability may offer complementary evidence.
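The complementarity the authors describe is easy to see numerically: a Bayes factor converts prior odds into posterior odds, from which the posterior probability of the null follows directly. The numbers below are illustrative, not taken from the clinical trial in the paper:

    prior_h0 = 0.5   # equipoise: null and alternative equally likely a priori
    bf_10 = 4.0      # Bayes factor favoring H1 over H0 by 4:1

    prior_odds_h0 = prior_h0 / (1 - prior_h0)
    posterior_odds_h0 = prior_odds_h0 / bf_10
    posterior_h0 = posterior_odds_h0 / (1 + posterior_odds_h0)
    print(posterior_h0)  # 0.2: "significant" data can still leave P(H0) around 20%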
Joint search and sensor management for geosynchronous satellites
NASA Astrophysics Data System (ADS)
Zatezalo, A.; El-Fallah, A.; Mahler, R.; Mehra, R. K.; Pham, K.
2008-04-01
Joint search and sensor management for space situational awareness presents daunting scientific and practical challenges, as it requires a simultaneous search for new space objects and catalog updates of the current ones. We demonstrate a new approach to joint search and sensor management by utilizing the Posterior Expected Number of Targets (PENT) as the objective function, an observation model for a space-based EO/IR sensor, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulation and results using actual Geosynchronous Satellites are presented.
Effects of Shoreline Dynamics on Saltmarsh Vegetation
Sharma, Shailesh; Goff, Joshua; Moody, Ryan M.; McDonald, Ashley; Byron, Dorothy; Heck, Kenneth L.; Powers, Sean P.; Ferraro, Carl; Cebrian, Just
2016-01-01
We evaluated the impact of shoreline dynamics on fringing vegetation density at mid- and low-marsh elevations at a high-energy site in the northern Gulf of Mexico. Particularly, we selected eight unprotected shoreline stretches (75 m each) at a historically eroding site and measured their inter-annual lateral movement rate using the DSAS method for three consecutive years. We observed high inter-annual variability of shoreline movement within the selected stretches. Specifically, shorelines retrograded (eroded) in year 1 and year 3, whereas, in year 2, shorelines advanced seaward. Despite shoreline advancement in year 2, an overall net erosion was recorded during the survey period. Additionally, vegetation density generally declined at both elevations during the survey period; however, probably due to their immediate proximity with lateral erosion agents (e.g., waves, currents), marsh grasses at low-elevation exhibited abrupt reduction in density, more so than grasses at mid elevation. Finally, contrary to our hypothesis, despite shoreline advancement, vegetation density did not increase correspondingly in year 2 probably due to a lag in response from biota. More studies in other coastal systems may advance our knowledge of marsh edge systems; however, we consider our results could be beneficial to resource managers in preparing protection plans for coastal wetlands against chronic stressors such as lateral erosion. PMID:27442515
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference, by comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters (referred to as the parameter vector x), given prior information on these parameters and a likelihood that gives the probability density function of observing a data set knowing x. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte-Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations present in traditional adjustment procedures based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal, resonance and continuum ranges, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains are presented. The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, are presented.
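A minimal sketch of the kind of sampling BMC relies on, using a random-walk Metropolis chain whose stationary density is pdf(prior) × likelihood (a one-parameter Gaussian stand-in, not a nuclear reaction model):

    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.normal(2.0, 0.3, size=20)  # "measurements" of one parameter

    def log_prior(theta):       # prior knowledge of the parameter
        return -0.5 * ((theta - 1.5) / 1.0) ** 2

    def log_likelihood(theta):  # probability density of the data given theta
        return -0.5 * np.sum(((data - theta) / 0.3) ** 2)

    theta, chain = 1.5, []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.1)  # random-walk proposal
        log_ratio = (log_prior(prop) + log_likelihood(prop)
                     - log_prior(theta) - log_likelihood(theta))
        if np.log(rng.uniform()) < log_ratio:
            theta = prop                     # Metropolis accept
        chain.append(theta)
    posterior_mean = np.mean(chain[5000:])   # discard burn-in

Unlike GLS, nothing here assumes Gaussian priors or a quadratic cost surface, which is exactly the flexibility the paper is after.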
The Determinants of Career Decisions of Air Force Pilots.
1981-05-01
Hypothesis tests comparing these two models will be presented in Chapter VI. Prob[J] = ∫ Prob[k1 > X1B, ..., kJ-1 > XJ-1B, kJ < XJB] h(a) da (4.6) and Prob[S] = ∫ Prob[k1 > X1B, ..., kP > XPB] h(a) da (4.5a), where h(a) is the marginal density of a. Substituting Equation 4.3, which gave the probability of leaving in ... The model derived in this thesis for the individual decision to separate was based upon individual characteristics and macroeconomic ...
McLeod, David V; Wild, Geoff
2013-11-01
Cooperative breeding is a system in which certain individuals facilitate the production of offspring by others. The ecological constraints hypothesis states that ecological conditions deter individuals from breeding independently, and so individuals breed cooperatively to make the best of a bad situation. Current theoretical support for the ecological constraints hypothesis is lacking. We formulate a mathematical model that emphasizes the underlying ecology of cooperative breeders. Our goal is to derive theoretical support for the ecological constraints hypothesis using an ecological model of population dynamics. We consider a population composed of two kinds of individuals, nonbreeders (auxiliaries) and breeders. We suppose that help provided by an auxiliary increases breeder fecundity, but reduces the probability with which the auxiliary becomes a breeder. Our main result is a condition that guarantees success of auxiliary help. We predict that increasing the cost of dispersal promotes helping, in agreement with verbal theory. We also predict that increasing breeder mortality can either hinder helping (at high population densities), or promote it (at low population densities). We conclude that ecological constraints can exert influence over the evolution of auxiliary help when population dynamics are considered; moreover, that influence need not coincide with direct fitness benefits as previously found.
To P or Not to P: Backing Bayesian Statistics.
Buchinsky, Farrel J; Chadha, Neil K
2017-12-01
In biomedical research, it is imperative to differentiate chance variation from truth before we generalize what we see in a sample of subjects to the wider population. For decades, we have relied on null hypothesis significance testing, where we calculate P values for our data to decide whether to reject a null hypothesis. This methodology is subject to substantial misinterpretation and errant conclusions. Instead of working backward by calculating the probability of our data if the null hypothesis were true, Bayesian statistics allow us instead to work forward, calculating the probability of our hypothesis given the available data. This methodology gives us a mathematical means of incorporating our "prior probabilities" from previous study data (if any) to produce new "posterior probabilities." Bayesian statistics tell us how confidently we should believe what we believe. It is time to embrace and encourage their use in our otolaryngology research.
Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner
2009-06-01
Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
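The non-linearity argument can be illustrated with a small Poisson calculation: if hits per cell nucleus are Poisson-distributed, raising the local mean hit number (as deposition hot spots do) changes the multiple-hit probability by orders of magnitude. The means below are illustrative, not the paper's computed values:

    from scipy import stats

    for mean_hits in (0.01, 0.1, 2.0, 10.0):          # uniform vs. hot-spot regimes
        p_multiple = stats.poisson.sf(1, mean_hits)   # P(at least 2 hits)
        print(f"mean hits {mean_hits:5.2f}: P(>=2 hits) = {p_multiple:.4f}")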
Global tracking of space debris via CPHD and consensus
NASA Astrophysics Data System (ADS)
Wei, Baishen; Nener, Brett; Liu, Weifeng; Ma, Liang
2017-05-01
Space debris tracking is of great importance for safe operation of spacecraft. This paper presents an algorithm that achieves global tracking of space debris with a multi-sensor network. The sensor network has unknown and possibly time-varying topology. A consensus algorithm is used to effectively counteract the effects of data incest. Gaussian Mixture-Cardinalized Probability Hypothesis Density (GM-CPHD) filtering is used to estimate the state of the space debris. As an example of the method, 45 clusters of sensors are used to achieve global tracking. The performance of the proposed approach is demonstrated by simulation experiments.
Effect of disposable infection control barriers on light output from dental curing lights.
Scott, Barbara A; Felix, Corey A; Price, Richard B T
2004-02-01
To prevent contamination of the light guide on a dental curing light, barriers such as disposable plastic wrap or covers may be used. This study compared the effect of 3 disposable barriers on the spectral output and power density from a curing light. The hypothesis was that none of the barriers would have a significant clinical effect on the spectral output or the power density from the curing light. Three disposable barriers were tested against a control (no barrier). The spectra and power from the curing light were measured with a spectrometer attached to an integrating sphere. The measurements were repeated on 10 separate occasions in a random sequence for each barrier. Analysis of variance (ANOVA) followed by Fisher's protected least significant difference test showed that the power density was significantly less than control (by 2.4% to 6.1%) when 2 commercially available disposable barriers were used (p < 0.05). There was no significant difference in the power density when general-purpose plastic wrap was used (p > 0.05). The effect of each of the barriers on the power output was small and probably clinically insignificant. ANOVA comparisons of mean peak wavelength values indicated that none of the barriers produced a significant shift in the spectral output relative to the control (p > 0.05). Two of the 3 disposable barriers produced a significant reduction in power density from the curing light. This drop in power was small and would probably not adversely affect the curing of composite resin. None of the barriers acted as light filters.
Multi-species genetic connectivity in a terrestrial habitat network.
Marrotte, Robby R; Bowman, Jeff; Brown, Michael G C; Cordes, Chad; Morris, Kimberley Y; Prentice, Melanie B; Wilson, Paul J
2017-01-01
Habitat fragmentation reduces genetic connectivity for multiple species, yet conservation efforts tend to rely heavily on single-species connectivity estimates to inform land-use planning. Such conservation activities may benefit from multi-species connectivity estimates, which provide a simple and practical means to mitigate the effects of habitat fragmentation for a larger number of species. To test the validity of a multi-species connectivity model, we used neutral microsatellite genetic datasets of Canada lynx (Lynx canadensis), American marten (Martes americana), fisher (Pekania pennanti), and southern flying squirrel (Glaucomys volans) to evaluate multi-species genetic connectivity across Ontario, Canada. We used linear models to compare node-based estimates of genetic connectivity for each species to point-based estimates of landscape connectivity (current density) derived from circuit theory. To our knowledge, we are the first to evaluate current density as a measure of genetic connectivity. Our results depended on landscape context: habitat amount was more important than current density in explaining multi-species genetic connectivity in the northern part of our study area, where habitat was abundant and fragmentation was low. In the south however, where fragmentation was prevalent, genetic connectivity was correlated with current density. Contrary to our expectations however, locations with a high probability of movement as reflected by high current density were negatively associated with gene flow. Subsequent analyses of circuit theory outputs showed that high current density was also associated with high effective resistance, underscoring that the presence of pinch points is not necessarily indicative of gene flow. Overall, our study appears to provide support for the hypothesis that landscape pattern is important when habitat amount is low. We also conclude that while current density is proportional to the probability of movement per unit area, this does not imply increased gene flow, since high current density tends to be a result of neighbouring pixels with high cost of movement (e.g., low habitat amount). In other words, pinch points with high current density appear to constrict gene flow.
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability of statistically significant and non-significant test outcomes actually being true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
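Under the standard definitions, PPV and NPV follow from alpha, the test's power, and the prior probability R that the tested effect is real; a short sketch with illustrative numbers:

    def ppv_npv(R, alpha=0.05, power=0.8):
        # PPV: P(effect is real | significant result)
        ppv = (R * power) / (R * power + (1 - R) * alpha)
        # NPV: P(no effect | non-significant result)
        npv = ((1 - R) * (1 - alpha)) / ((1 - R) * (1 - alpha) + R * (1 - power))
        return ppv, npv

    print(ppv_npv(R=0.5))  # well-motivated hypothesis: PPV ~ 0.94
    print(ppv_npv(R=0.1))  # long-shot hypothesis:      PPV ~ 0.64

The difficulty the authors highlight is that R must be estimated a priori, whereas alpha and power are chosen by design.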
P value and the theory of hypothesis testing: an explanation for new researchers.
Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël
2010-03-01
In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability to obtain an effect equal to or more extreme than the one observed presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
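The two theories can be contrasted numerically for a one-sample z-test with known sigma (a minimal sketch; numbers are illustrative):

    import numpy as np
    from scipy import stats

    mu0, sigma, n = 0.0, 1.0, 25
    xbar = 0.45  # observed sample mean
    z = (xbar - mu0) / (sigma / np.sqrt(n))

    # Fisher: the p value as a graded measure of evidence (~0.024 here).
    p_value = 2 * stats.norm.sf(abs(z))

    # Neyman-Pearson: fix alpha in advance, derive a critical region, decide.
    alpha = 0.05
    z_crit = stats.norm.ppf(1 - alpha / 2)  # critical region |z| > 1.96
    reject_h0 = abs(z) > z_crit             # a binary decision, not evidence

The same data yield both outputs, but they answer different questions, which is the confusion the authors warn against.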
Are there optimal densities for prairie birds?
Skagen, S.K.; Adams, A.A.Y.
2010-01-01
The major forces of food and predation shape fitness-enhancing decisions of birds at all stages of their life cycles. During the breeding season, birds can minimize nest loss due to predation by selecting sites with a lower probability of predation. To understand the environmental and social aspects and consequences of breeding-site selection in prairie birds, we explored variation in nest-survival patterns of the Lark Bunting (Calamospiza melanocorys) in the shortgrass prairie region of North America. Over four breeding seasons, we documented the survival of 405 nests, conducted 60 surveys to estimate bird densities, and measured several vegetative features to describe habitat structure in 24 randomly selected study plots. Nest survival varied with the buntings' density as described by a quadratic polynomial, increasing with density below 1.5 birds ha-1 and decreasing with density between 1.5 and 3 birds ha-1, suggesting that an optimal range of densities favors reproductive success of the Lark Bunting, which nests semi-colonially. Nest survival also increased with increasing vegetation structure of study plots and varied with age of the nest, increasing during early incubation and late in the nestling stage and declining slightly from mid-incubation to the middle of the nestling period. The existence of an optimal range of densities in this semi-colonial species can be elucidated by the "commodity-selection hypothesis" at low densities and density dependence at high densities.
Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan Miguel
2013-01-01
Research biobanks are often composed of data from multiple sources. In some cases, these different subsets of data may present dissimilarities among their probability density functions (PDF) due to spatial shifts. This may lead to wrong hypotheses when treating the data as a whole, and the overall quality of the data is diminished. With the purpose of developing a generic and comparable metric to assess the stability of multi-source datasets, we have studied the applicability and behaviour of several PDF distances over shifts under different conditions (such as uni- and multivariate data, different types of variable, and multi-modality) which may appear in real biomedical data. Of the studied distances, we found the information-theoretic distances and the Earth Mover's Distance to be the most practical for most conditions. We discuss the properties and usefulness of each distance according to the possible requirements of a general stability metric.
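Both of the distances the authors single out are available off the shelf; a minimal sketch comparing two data sources whose PDFs are shifted (illustrative data, not the biobank datasets):

    import numpy as np
    from scipy.spatial.distance import jensenshannon
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(3)
    source_a = rng.normal(0.0, 1.0, size=2000)  # e.g., one hospital's lab values
    source_b = rng.normal(0.6, 1.0, size=2000)  # another source, shifted PDF

    emd = wasserstein_distance(source_a, source_b)  # ~0.6, the size of the shift

    # Jensen-Shannon needs discretized PDFs on a common support.
    edges = np.histogram_bin_edges(np.concatenate([source_a, source_b]), bins=50)
    pdf_a, _ = np.histogram(source_a, bins=edges, density=True)
    pdf_b, _ = np.histogram(source_b, bins=edges, density=True)
    jsd = jensenshannon(pdf_a, pdf_b, base=2)  # 0 = identical, 1 = disjoint

The Earth Mover's Distance keeps the units of the variable, while the Jensen-Shannon distance is bounded, which matches the authors' goal of a comparable metric across datasets.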
Baeza, J Antonio
2013-10-01
The 'Tomlinson-Ghiselin' hypothesis (TGh) predicts that outcrossing simultaneous hermaphroditism (SH) is advantageous when population density is low because the probability of finding sexual partners is negligible. In shrimps from the family Lysmatidae, Bauer's historical contingency hypothesis (HCh) suggests that SH evolved in an ancestral tropical species that adopted a symbiotic lifestyle with, e.g., sea anemones and became a specialized fish-cleaner. Restricted mobility of shrimps due to their association with a host, and hence, reduced probability of encountering mating partners, would have favored SH. The HCh is a special case of the TGh. Herein, I examined within a phylogenetic framework whether the TGh/HCh explains the origin of SH in shrimps. A phylogeny of caridean broken-back shrimps in the families Lysmatidae, Barbouriidae, and Merguiidae was first developed using nuclear and mitochondrial markers. Complete evidence phylogenetic analyses using maximum likelihood (ML) and Bayesian inference (BI) demonstrated that Lysmatidae+Barbouriidae are monophyletic. In turn, Merguiidae is sister to the Lysmatidae+Barbouriidae. ML and BI ancestral character-state reconstruction in the resulting phylogenetic trees indicated that the ancestral Lysmatidae was either gregarious or lived in small groups and was not symbiotic. Four different evolutionary transitions from a free-living to a symbiotic lifestyle occurred in shrimps. Therefore, the evolution of SH in shrimps cannot be explained by the TGh/HCh; reduced probability of encountering mating partners in an ancestral species due to its association with a sessile host did not favor SH in the Lysmatidae. It is proposed that two conditions acting together in the past, low male mating opportunities and brooding constraints, might have favored SH in the ancestral Lysmatidae+Barbouriidae. Additional studies on the life history and phylogenetics of broken-back shrimps are needed to understand the evolution of SH in the ecologically diverse Caridea. Copyright © 2013 Elsevier Inc. All rights reserved.
Universal laws of human society's income distribution
NASA Astrophysics Data System (ADS)
Tao, Yong
2015-10-01
General equilibrium equations in economics play the same role as many-body Newtonian equations in physics. Accordingly, each solution of the general equilibrium equations can be regarded as a possible microstate of the economic system. Because Arrow's Impossibility Theorem and Rawls' principle of social fairness provide powerful support for the hypothesis of equal probability, the principle of maximum entropy is applicable in a just and equilibrium economy, so that an income distribution will occur spontaneously (with the largest probability). Remarkably, some scholars have observed such an income distribution in some democratic countries, e.g., the USA. This result implies that the hypothesis of equal probability may be suitable only for some "fair" systems (economic or physical systems). In this sense, non-equilibrium systems may be "unfair", so that the hypothesis of equal probability is unavailable.
Seasonal fecundity is not related to geographic position ...
Aim: Sixty-five years ago, Theodosius Dobzhansky suggested that individuals of a species face greater challenges from abiotic stressors at high latitudes and from biotic stressors at their low-latitude range edges. This idea has been expanded to the hypothesis that species’ ranges are limited by abiotic and biotic stressors at high and low latitudes, respectively. Support has been found in many systems, but this hypothesis has almost never been tested with demographic data. We present an analysis of fecundity across the breeding range of a species as a test of this hypothesis. Location: 575 km of tidal marshes in the northeastern United States. Methods: We monitored saltmarsh sparrow (Ammodramus caudacutus) nests at twenty-three sites from Maine to New Jersey, USA. With data from 840 nests, we calculated daily nest failure probabilities due to competing abiotic (flooding) and biotic (depredation) stressors. Results: We observed that abiotic stress (nest flooding probability) was greater than biotic stress (nest depredation probability) at the high-latitude range edge of saltmarsh sparrows, consistent with Dobzhansky’s hypothesis. Similarly, biotic stress decreased with increasing latitude throughout the range, whereas abiotic stress was not predicted by latitude alone. Instead, nest flooding probability was best predicted by date, maximum high tide, and extremity of rare flooding events. Main conclusions: Our results provide support for Dobzhansky’s hypothesis across th…
Space-based sensor management and geostationary satellites tracking
NASA Astrophysics Data System (ADS)
El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.
2007-04-01
Sensor management for space situational awareness presents a daunting theoretical and practical challenge as it requires the use of multiple types of sensors on a variety of platforms to ensure that the space environment is continuously monitored. We demonstrate a new approach utilizing the Posterior Expected Number of Targets (PENT) as the sensor management objective function, an observation model for a space-based EO/IR sensor platform, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulations and results using actual geostationary satellites are presented. We also demonstrate enhanced performance by applying the Progressive Weighting Correction (PWC) method for regularization in the implementation of the PHD-PF tracker.
Tentori, Katya; Chater, Nick; Crupi, Vincenzo
2016-04-01
Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability. Copyright © 2015 Cognitive Science Society, Inc.
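The distinction studied above can be stated in two lines of arithmetic. A toy Python illustration follows (all numbers hypothetical; the simple difference measure below is only one of several confirmation measures discussed in the literature):

```python
# Posterior probability versus evidential impact for a single hypothesis H
# and new evidence E, via Bayes' theorem.
prior = 0.30            # P(H)
likelihood = 0.80       # P(E|H)
marginal = 0.50         # P(E)

posterior = likelihood * prior / marginal   # P(H|E) = 0.48
impact = posterior - prior                  # change in credibility: +0.18
print(f"P(H|E) = {posterior:.2f}, impact = {impact:+.2f}")
```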
Busin, Massimo; Madi, Silvana; Scorcia, Vincenzo; Santorum, Paolo; Nahum, Yoav
2015-01-01
Purpose: To test the hypothesis that a new microkeratome-assisted penetrating keratoplasty (PK) technique employing transplantation of a two-piece mushroom-shaped graft may result in better visual outcomes and graft survival rates than those of conventional PK. Methods: Retrospective chart review of 96 eyes at low risk and 76 eyes at high risk for immunologic rejection (all with full-thickness central corneal opacity and otherwise healthy endothelium) undergoing mushroom PK between 2004 and 2012 at our institution. Outcome measures were best-corrected visual acuity (BCVA), refraction, corneal topography, endothelial cell density, graft rejection, and survival probability. Results: Five years postoperatively, BCVA of 20/40 and 20/20 was recorded in 100% and over 50% of eyes, respectively. Mean spherical equivalent of refractive error did not vary significantly over a 5-year period; astigmatism always averaged below 4 diopters, with no statistically significant change over time, and was of the regular type in over 90% of eyes. Endothelial cell density decreased to about 40% of the eye bank count 2 years after mushroom PK and did not change significantly thereafter. Five years postoperatively, probabilities of graft immunologic rejection and graft survival were below 5% and above 95%, respectively. There was no statistically significant difference in endothelial cell loss, graft rejection, and survival probability between low-risk and high-risk subgroups. Conclusions: Refractive and visual outcomes of mushroom PK compare favorably with those of conventional full-thickness keratoplasty. In eyes at high risk for immunologic rejection, mushroom PK provides a considerably higher probability of graft survival than conventional PK. PMID:26538771
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability and neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results: A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion: Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
ERIC Educational Resources Information Center
Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.
2010-01-01
Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…
A test of the substitution-habitat hypothesis in amphibians.
Martínez-Abraín, Alejandro; Galán, Pedro
2018-06-01
Most examples that support the substitution-habitat hypothesis (human-made habitats act as substitutes of original habitat) deal with birds and mammals. We tested this hypothesis in 14 amphibians by using percentage occupancy as a proxy of habitat quality (i.e., higher occupancy percentages indicate higher quality). We classified water body types as original habitat (no or little human influence) depending on anatomical, behavioral, or physiological adaptations of each amphibian species. Ten species had relatively high probabilities (0.16-0.28) of occurrence in original habitat, moderate probability of occurrence in substitution habitats (0.11-0.14), and low probability of occurrence in refuge habitats (0.05-0.08). Thus, the substitution-habitat hypothesis only partially applies to amphibians because the low occupancy of refuges could be due to the negligible human persecution of this group (indicating good conservation status). However, low occupancy of refuges could also be due to low tolerance of refuge conditions, which could have led to selective extinction or colonization problems due to poor dispersal capabilities. That original habitats had the highest probabilities of occupancy suggests amphibians have a good conservation status in the region. They also appeared highly adaptable to anthropogenic substitution habitats. © 2017 Society for Conservation Biology.
Investigating prior probabilities in a multiple hypothesis test for use in space domain awareness
NASA Astrophysics Data System (ADS)
Hardy, Tyler J.; Cain, Stephen C.
2016-05-01
The goal of this research effort is to improve Space Domain Awareness (SDA) capabilities of current telescope systems through improved detection algorithms. Ground-based optical SDA telescopes are often spatially under-sampled, or aliased. This fact negatively impacts the detection performance of traditionally proposed binary and correlation-based detection algorithms. A Multiple Hypothesis Test (MHT) algorithm has been previously developed to mitigate the effects of spatial aliasing. This is done by testing potential Resident Space Objects (RSOs) against several sub-pixel shifted Point Spread Functions (PSFs). A MHT has been shown to increase detection performance for the same false alarm rate. In this paper, the assumption of a priori probability used in a MHT algorithm is investigated. First, an analysis of the pixel decision space is completed to determine alternate hypothesis prior probabilities. These probabilities are then implemented into a MHT algorithm, and the algorithm is then tested against previous MHT algorithms using simulated RSO data. Results are reported with Receiver Operating Characteristic (ROC) curves and probability of detection, Pd, analysis.
Fisher information framework for time series modeling
NASA Astrophysics Data System (ADS)
Venkatesan, R. C.; Plastino, A.
2017-08-01
A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Cases for prediction are presented employing time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) an ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) an ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the Physionet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.
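For readers unfamiliar with the embedding step, here is a minimal Python sketch of a Takens delay embedding with a simple analog (nearest-neighbor) one-step predictor built on it. The surrogate signal and all parameters are hypothetical stand-ins; the FIM-based inference itself is not reproduced here.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Rows are delay vectors (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Surrogate signal standing in for Mackey-Glass or ECG data.
t = np.linspace(0, 60, 3000)
x = np.sin(t) + 0.5 * np.sin(2.7 * t)

dim, tau = 3, 10
emb = delay_embed(x, dim, tau)
query = emb[-1]                               # current delay vector
dists = np.linalg.norm(emb[:-1] - query, axis=1)
nn = int(np.argmin(dists))                    # most similar past state
prediction = x[nn + (dim - 1) * tau + 1]      # its one-step successor
print("one-step prediction:", prediction)
```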
Influence of care of domestic carnivores on their predation on vertebrates.
Silva-Rodríguez, Eduardo A; Sieving, Kathryn E
2011-08-01
Domestic dogs (Canis familiaris) and cats (Felis catus) are the most abundant mammalian carnivores worldwide. Given that domestic carnivores rely on human-provided food, their densities are usually independent of prey densities. Nevertheless, underfed pets may need to hunt to meet their energetic and nutritional requirements. We explored the effects of different levels of care (provision of food) of dogs and cats on their predation rates on wild vertebrates in 2 areas of southern Chile. We interviewed cat and dog owners and analyzed prey remains in scats of pets to examine how domestic dogs and cats were managed and to gather information on the wild vertebrates killed and harassed by pets. We used logistic regression to examine the association between pet care and the frequency of wild vertebrate remains in scats. The probability of preying on vertebrates was higher for poorly fed than for adequately fed dogs (odds ratio = 3.7) and for poorly fed than for adequately fed cats (odds ratio = 4.7). Domestic dogs and cats preyed on most endemic and threatened mammals present in the study sites. Our results provide support for the hypothesis that the less care domestic animals receive from owners, the higher the probability those animals will prey on wild vertebrates. © 2011 Society for Conservation Biology.
Bacakova, Marketa; Lopot, Frantisek; Hadraba, Daniel; Varga, Marian; Zaloudkova, Margit; Stranska, Denisa; Suchy, Tomas; Bacakova, Lucie
2015-01-01
It may be possible to regulate the cell colonization of biodegradable polymer nanofibrous membranes by plasma treatment and by the density of the fibers. To test this hypothesis, nanofibrous membranes of different fiber densities were treated by oxygen plasma with a range of plasma power and exposure times. Scanning electron microscopy and mechanical tests showed significant modification of nanofibers after plasma treatment. The intensity of the fiber modification increased with plasma power and exposure time. The exposure time seemed to have a stronger effect on modifying the fiber. The mechanical behavior of the membranes was influenced by the plasma treatment, the fiber density, and their dry or wet state. Plasma treatment increased the membrane stiffness; however, the membranes became more brittle. Wet membranes displayed significantly lower stiffness than dry membranes. X-ray photoelectron spectroscopy (XPS) analysis showed a slight increase in oxygen-containing groups on the membrane surface after plasma treatment. Plasma treatment enhanced the adhesion and growth of HaCaT keratinocytes on nanofibrous membranes. The cells adhered and grew preferentially on membranes of lower fiber densities, probably due to the larger area of void spaces between the fibers. © The Author(s) 2014.
Sun, Hao; Zhou, Chi; Huang, Xiaoqin; Lin, Keqin; Shi, Lei; Yu, Liang; Liu, Shuyuan; Chu, Jiayou; Yang, Zhaoqing
2013-01-01
Tai people are widely distributed in Thailand, Laos and southwestern China and are a large population of Southeast Asia. Although most anthropologists and historians agree that modern Tai people are from southwestern China and northern Thailand, the place from which they historically migrated remains controversial. Three popular hypotheses have been proposed: a northern origin, a southern origin, or an indigenous origin. We compared the genetic relationships between the Tai in China and their "siblings" to test the different hypotheses by analyzing 10 autosomal microsatellites. The genetic data of 916 samples from 19 populations were analyzed in this survey. The autosomal STR data from 15 of the 19 populations came from our previous study (Lin et al., 2010). 194 samples from four additional populations were genotyped in this study: Han (Yunnan), Dai (Dehong), Dai (Yuxi) and Mongolian. The results of genetic distance comparisons, genetic structure analyses and admixture analyses all indicate that the populations implicated by the northern origin hypothesis have large genetic distances from, and are clearly differentiated from, the Tai. The simulation-based ABC analysis also indicates this. The posterior probability of the northern origin hypothesis is just 0.04 [95%CI: (0.01-0.06)]. Conversely, genetic relationships were very close between the Tai and populations consistent with a southern or an indigenous origin. Simulation-based ABC analyses were also used to distinguish the southern origin hypothesis from the indigenous origin hypothesis. The results indicate that the posterior probability of the southern origin hypothesis [0.640, 95%CI: (0.524-0.757)] is greater than that of the indigenous origin hypothesis [0.324, 95%CI: (0.211-0.438)]. Therefore, we propose that the genetic evidence does not support the hypothesis of northern origin. Our genetic data indicate that the southern origin hypothesis has higher probability than the other two hypotheses statistically, suggesting that the Tai people most likely originated from southern China.
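The model-choice logic reported above (posterior probabilities of competing origin scenarios) can be sketched with a toy ABC rejection sampler in Python; the summary statistic and its distributions below are entirely hypothetical, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
observed = 0.12                      # a hypothetical observed genetic summary

def simulate(model):
    # Stand-in summary-statistic distributions for two origin scenarios.
    return rng.normal(0.10 if model == "south" else 0.25, 0.05)

models, eps, accepted = ["south", "north"], 0.03, []
for _ in range(20_000):
    m = models[rng.integers(2)]          # equal prior over the two scenarios
    if abs(simulate(m) - observed) < eps:
        accepted.append(m)               # keep draws close to the observation

for m in models:                         # posterior model probabilities
    print(m, accepted.count(m) / len(accepted))
```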
Using SN 1987A light echoes to determine mass loss from the progenitor
NASA Technical Reports Server (NTRS)
Crotts, Arlin P. S.; Kunkel, William E.
1991-01-01
The hypothesis that the blue progenitor of SN 1987A passed through a blue supergiant phase ending with the expulsion of the outer envelope is tested. The many light echoes seen near SN 1987A were used to search for a mass flow from the progenitor and for abrupt density changes at the limits of this smooth mass flow. The progenitor needed roughly a million yr to create these structures, assuming a constant mass loss at 15 km/s. The dust in the region is small-grained and isotropically scattering. Interaction between the progenitor blue supergiant and red supergiant winds is probably contained within a roughly spherical structure 1.5 pc in diameter.
Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A
2006-11-01
A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.
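A minimal Python sketch of the mate-finding mechanism described above (parameters hypothetical; this is not the paper's spatially explicit model): per-capita growth is discounted by the probability of finding a mate, which slows establishment from low initial densities.

```python
r, K, theta = 0.8, 100.0, 15.0   # growth rate, carrying capacity, mate-finding scale

def step(n, dt=0.1):
    mate_prob = n / (n + theta)              # probability a disperser finds a mate
    return n + dt * r * n * (1 - n / K) * mate_prob

for n0 in (2.0, 10.0, 40.0):                 # different initial densities
    n = n0
    for _ in range(200):                     # integrate to t = 20
        n = step(n)
    print(f"n0 = {n0:5.1f} -> density at t = 20: {n:6.2f}")
```

This is a component Allee effect in the abstract's sense: growth at low density is depressed rather than made negative, so spread is slowed rather than prevented.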
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σᵣ. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σᵣ), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σ_j, σ_k).
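A numeric illustration of the binary building block named above, in Python (the qubit states are hypothetical; the multi-hypothesis quantity is then the minimum of such pairwise divergences):

```python
import numpy as np

def mat_pow(rho, s):
    """Fractional power of a density matrix via its eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    return (v * np.clip(w, 0, None) ** s) @ v.conj().T

# Two hypothetical qubit density matrices (unit trace, positive definite).
rho = np.array([[0.75, 0.10], [0.10, 0.25]])
sigma = np.array([[0.40, 0.00], [0.00, 0.60]])

# Binary quantum Chernoff divergence: -log min_{0<=s<=1} Tr(rho^s sigma^(1-s)).
svals = np.linspace(0, 1, 201)
q = [np.trace(mat_pow(rho, s) @ mat_pow(sigma, 1 - s)).real for s in svals]
print("Chernoff divergence:", -np.log(min(q)))
```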
Nallar, Rodolfo; Papp, Zsuzsanna; Leighton, Frederick A; Epp, Tasha; Pasick, John; Berhane, Yohannes; Lindsay, Robbin; Soos, Catherine
2016-01-01
The Canadian prairies are one of the most important breeding and staging areas for migratory waterfowl in North America. Hundreds of thousands of waterfowl of numerous species from multiple flyways converge in and disperse from this region annually; therefore this region may be a key area for potential intra- and interspecific spread of infectious pathogens among migratory waterfowl in the Americas. Using Blue-winged Teal (Anas discors, BWTE), which have the most extensive migratory range among waterfowl species, we investigated ecologic risk factors for infection and antibody status to avian influenza virus (AIV), West Nile virus (WNV), and avian paramyxovirus-1 (APMV-1) in the three prairie provinces (Alberta, Saskatchewan, and Manitoba) prior to fall migration. We used generalized linear models to examine infection or evidence of exposure in relation to host (age, sex, body condition, exposure to other infections), spatiotemporal (year, province), population-level (local population densities of BWTE, total waterfowl densities), and environmental (local pond densities) factors. The probability of AIV infection in BWTE was associated with host factors (e.g., age and antibody status), population-level factors (e.g., local BWTE population density), and year. An interaction between age and AIV antibody status showed that hatch year birds with antibodies to AIV were more likely to be infected, suggesting an antibody response to an active infection. Infection with AIV was positively associated with local BWTE density, supporting the hypothesis of density-dependent transmission. The presence of antibodies to WNV and APMV-1 was positively associated with age and varied among years. Furthermore, the probability of being WNV antibody positive was positively associated with pond density rather than host population density, likely because ponds provide suitable breeding habitat for mosquitoes, the primary vectors for transmission. Our findings highlight the importance of spatiotemporal, environmental, and host factors at the individual and population levels, all of which may influence dynamics of these and other viruses in wild waterfowl populations.
Individual-Area Relationship Best Explains Goose Species Density in Wetlands
Prins, Herbert H. T.; Cao, Lei; de Boer, Willem Fred
2015-01-01
Explaining and predicting animal distributions is one of the fundamental objectives in ecology and conservation biology. Animal habitat selection can be regulated by top-down and bottom-up processes, and is mediated by species interactions. Species varying in body size respond differently to top-down and bottom-up determinants, and hence understanding these allometric responses to those determinants is important for conservation. In this study, using two differently sized goose species wintering in the Yangtze floodplain, we tested the predictions derived from three different hypotheses (individual-area relationship, food resource and disturbance hypothesis) to explain the spatial and temporal variation in densities of two goose species. Using Generalized Linear Mixed Models with a Markov Chain Monte Carlo technique, we demonstrated that goose density was positively correlated with patch area size, suggesting that the individual-area relationship best predicts differences in goose densities. Moreover, the other predictions, related to food availability and disturbance, were not significant. Buffalo grazing probably facilitated greater white-fronted geese, as the number of buffalos was positively correlated to the density of this species. We concluded that patch area size is the most important factor determining the density of goose species in our study area. Patch area size is directly determined by water levels in the Yangtze floodplain, and hence modifying the hydrological regimes can enlarge the capacity of these wetlands for migratory birds. PMID:25996502
Spatial and temporal patterns of coexistence between competing Aedes mosquitoes in urban Florida
Juliano, S. A.
2009-01-01
Understanding mechanisms fostering coexistence between invasive and resident species is important in predicting ecological, economic, or health impacts of invasive species. The mosquito Aedes aegypti coexists at some urban sites in southeastern United States with invasive Aedes albopictus, which is often superior in interspecific competition. We tested predictions for three hypotheses of species coexistence: seasonal condition-specific competition, aggregation among individual water-filled containers, and colonization–competition tradeoff across spatially partitioned habitat patches (cemeteries) that have high densities of containers. We measured spatial and temporal patterns of abundance for both species among water-filled resident cemetery vases and experimentally positioned standard cemetery vases and ovitraps in metropolitan Tampa, Florida. Consistent with the seasonal condition-specific competition hypothesis, abundances of both species in resident and standard cemetery vases were higher early in the wet season (June) versus late in the wet season (September), but the proportional increase of A. albopictus was greater than that of A. aegypti, presumably due to higher dry-season egg mortality and strong wet-season competitive superiority of larval A. albopictus. Spatial partitioning was not evident among cemeteries, a result inconsistent with the colonization-competition tradeoff hypothesis, but both species were highly independently aggregated among standard cemetery vases and ovitraps, which is consistent with the aggregation hypothesis. Densities of A. aegypti but not A. albopictus differed among land use categories, with A. aegypti more abundant in ovitraps in residential areas compared to industrial and commercial areas. Spatial partitioning among land use types probably results from effects of land use on conditions in both terrestrial and aquatic-container environments. These results suggest that both temporal and spatial variation may contribute to local coexistence between these Aedes in urban areas. PMID:19263086
Spatial and temporal patterns of coexistence between competing Aedes mosquitoes in urban Florida.
Leisnham, Paul T; Juliano, S A
2009-05-01
Understanding mechanisms fostering coexistence between invasive and resident species is important in predicting ecological, economic, or health impacts of invasive species. The mosquito Aedes aegypti coexists at some urban sites in southeastern United States with invasive Aedes albopictus, which is often superior in interspecific competition. We tested predictions for three hypotheses of species coexistence: seasonal condition-specific competition, aggregation among individual water-filled containers, and colonization-competition tradeoff across spatially partitioned habitat patches (cemeteries) that have high densities of containers. We measured spatial and temporal patterns of abundance for both species among water-filled resident cemetery vases and experimentally positioned standard cemetery vases and ovitraps in metropolitan Tampa, Florida. Consistent with the seasonal condition-specific competition hypothesis, abundances of both species in resident and standard cemetery vases were higher early in the wet season (June) versus late in the wet season (September), but the proportional increase of A. albopictus was greater than that of A. aegypti, presumably due to higher dry-season egg mortality and strong wet-season competitive superiority of larval A. albopictus. Spatial partitioning was not evident among cemeteries, a result inconsistent with the colonization-competition tradeoff hypothesis, but both species were highly independently aggregated among standard cemetery vases and ovitraps, which is consistent with the aggregation hypothesis. Densities of A. aegypti but not A. albopictus differed among land use categories, with A. aegypti more abundant in ovitraps in residential areas compared to industrial and commercial areas. Spatial partitioning among land use types probably results from effects of land use on conditions in both terrestrial and aquatic-container environments. These results suggest that both temporal and spatial variation may contribute to local coexistence between these Aedes in urban areas.
Students' Understanding of Conditional Probability on Entering University
ERIC Educational Resources Information Center
Reaburn, Robyn
2013-01-01
An understanding of conditional probability is essential for students of inferential statistics as it is used in Null Hypothesis Tests. Conditional probability is also used in Bayes' theorem, in the interpretation of medical screening tests and in quality control procedures. This study examines the understanding of conditional probability of…
NASA Astrophysics Data System (ADS)
Barth, H.
A hypothesis is presented concerning the crucial influence of tides on the evolutionary transition from aquatic to land animal forms. The hypothesis suggests that the evolution of higher forms of life on a planet also depends on the existence of a planet-moon system in which the mass ratio of the two constituents is approximately equal to that of the earth-moon system, which is 81:1. The hypothesis is taken into account in the form of the probability factor fb in Drake's formula for estimating the presumed number of extraterrestrial civilizations in the Milky Way which may conceivably make contact.
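A minimal Python sketch of how the proposed factor fb would enter Drake's formula; every numeric value below is a hypothetical placeholder, including fb itself.

```python
R_star = 1.0    # star formation rate (stars per year)
f_p    = 0.5    # fraction of stars with planets
n_e    = 2.0    # habitable planets per such system
f_l    = 0.1    # fraction of those developing life
f_i    = 0.01   # fraction developing intelligence
f_c    = 0.1    # fraction that communicate
L      = 1.0e4  # communicative lifetime (years)
f_b    = 0.05   # proposed tidal factor: probability of an earth-moon-like system

# Drake's product, with the extra factor multiplied in.
N = R_star * f_p * n_e * f_l * f_i * f_c * f_b * L
print(f"estimated communicating civilizations: N = {N:.2f}")
```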
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
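The specific pairing named above can be checked numerically: for an isotropic planar force whose magnitude density is exponential, p(f) = exp(-f), the Cartesian marginal works out to K0(|fx|)/π. A short Python verification (a sketch of the transform pair, not the paper's derivation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

def cartesian_marginal(fx):
    # The joint density of an isotropic planar force is p(f) / (2*pi*f);
    # integrating out the y-component gives the Cartesian marginal.
    integrand = lambda fy: np.exp(-np.hypot(fx, fy)) / (2 * np.pi * np.hypot(fx, fy))
    return 2 * quad(integrand, 0, np.inf)[0]   # symmetric in fy

for fx in (0.5, 1.0, 2.0):
    print(fx, cartesian_marginal(fx), k0(fx) / np.pi)   # the two columns agree
```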
Low probability of a dilution effect for Lyme borreliosis in Belgian forests.
Ruyts, Sanne C; Landuyt, Dries; Ampoorter, Evy; Heylen, Dieter; Ehrmann, Steffen; Coipan, Elena C; Matthysen, Erik; Sprong, Hein; Verheyen, Kris
2018-04-22
An increasing number of studies have investigated the consequences of biodiversity loss for the occurrence of vector-borne diseases such as Lyme borreliosis, the most common tick-borne disease in the northern hemisphere. As host species differ in their ability to transmit the Lyme borreliosis bacteria Borrelia burgdorferi s.l. to ticks, increased host diversity can decrease disease prevalence by increasing the proportion of dilution hosts, host species that transmit pathogens less efficiently. Previous research shows that Lyme borreliosis risk differs between forest types and suggests that a higher diversity of host species might dilute the contribution of small rodents to infect ticks with B. afzelii, a common Borrelia genospecies. However, empirical evidence for a dilution effect in Europe is largely lacking. We tested the dilution effect hypothesis in 19 Belgian forest stands of different forest types along a diversity gradient. We used empirical data and a Bayesian belief network to investigate the impact of the proportion of dilution hosts on the density of ticks infected with B. afzelii, and identified the key drivers determining the density of infected ticks, which is a measure of human infection risk. Densities of ticks and B. afzelii infection prevalence differed between forest types, but the model indicated that the density of infected ticks is hardly affected by dilution. The most important variables explaining variability in disease risk were related to the density of ticks. Combining empirical data with a model-based approach supported decision making to reduce tick-borne disease risk. We found a low probability of a dilution effect for Lyme borreliosis in a north-western European context. We emphasize that under these circumstances, Lyme borreliosis prevention should rather aim at reducing tick-human contact rate instead of attempting to increase the proportion of dilution hosts. Copyright © 2018. Published by Elsevier GmbH.
Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis
NASA Astrophysics Data System (ADS)
Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.
As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering and hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.
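As a concrete reference point for the SMC machinery mentioned above, here is a minimal bootstrap particle filter in Python for a generic nonlinear scalar model; the dynamics and noise levels are stand-ins, not the paper's breakup model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_part, T = 1000, 50
x_true, particles = 0.0, rng.normal(0, 1, n_part)

for t in range(T):
    # Truth evolves through nonlinear dynamics; we see a noisy observation.
    x_true = 0.9 * x_true + 0.5 * np.sin(x_true) + rng.normal(0, 0.3)
    z = x_true + rng.normal(0, 0.5)

    # Propagate particles through the same nonlinear dynamics.
    particles = 0.9 * particles + 0.5 * np.sin(particles) \
        + rng.normal(0, 0.3, n_part)
    # Weight by the observation likelihood (Gaussian measurement noise).
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
    w /= w.sum()
    # Resample: the particle cloud, not a Gaussian, represents the density.
    particles = rng.choice(particles, size=n_part, p=w)

print("truth:", x_true, "posterior mean:", particles.mean())
```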
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
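For reference, the classical baseline that the tests above complement looks like this in Python (simulated heavy-tailed draws tested against a specified normal density; SciPy implements the Kolmogorov-Smirnov test, though not Kuiper's variant):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
draws = rng.standard_t(df=3, size=2000)    # heavy-tailed, not normal

# KS test of the i.i.d. draws against a standard normal distribution.
stat, p = stats.kstest(draws, "norm")
print(f"KS statistic = {stat:.3f}, p = {p:.3g}")
# p is typically small here, so the CDF-based test rejects; the paper's
# point is that such tests can still miss discrepancies in regions where
# the probability density is small compared with its surroundings.
```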
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
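A minimal Python sketch of a logistic model of this general shape, with entirely hypothetical coefficients and predictors (the entry above is truncated, so no values are taken from it):

```python
import numpy as np

def p_regeneration(density, basal_area, b0=-2.0, b1=0.004, b2=-0.05):
    """Probability of obtaining regeneration at a specified density,
    from a logistic equation with a linear predictor."""
    eta = b0 + b1 * density + b2 * basal_area   # hypothetical coefficients
    return 1.0 / (1.0 + np.exp(-eta))

# e.g., target density in trees/ha and overstory basal area in m^2/ha
print(p_regeneration(density=1000, basal_area=20))
```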
Dynamic sensor management of dispersed and disparate sensors for tracking resident space objects
NASA Astrophysics Data System (ADS)
El-Fallah, A.; Zatezalo, A.; Mahler, R.; Mehra, R. K.; Donatelli, D.
2008-04-01
Dynamic sensor management of dispersed and disparate sensors for space situational awareness presents daunting scientific and practical challenges as it requires optimal and accurate maintenance of all Resident Space Objects (RSOs) of interest. We demonstrate an approach to the space-based sensor management problem by extending a previously developed and tested sensor management objective function, the Posterior Expected Number of Targets (PENT), to disparate and dispersed sensors. This PENT extension together with observation models for various sensor platforms, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker provide a powerful tool for tackling this challenging problem. We demonstrate the approach using simulations for tracking RSOs by a Space Based Visible (SBV) sensor and ground based radars.
Islam, Nazmul; Ghosh, Dulal C
2012-01-01
Electrophilicity is an intrinsic property of atoms and molecules. It probably originates logistically with the involvement in the physical process of electrostatics of soaked charge in electronic shells and the screened nuclear charge of atoms. Motivated by the existing view of conceptual density functional theory that similar to electronegativity and hardness equalization, there should be a physical process of equalization of electrophilicity during the chemical process of formation of hetero nuclear molecules, we have developed a new theoretical scheme and formula for evaluating the electrophilicity of hetero nuclear molecules. A comparative study with available bench marking reveals that the hypothesis of electrophilicity and equalization, and the present method of evaluating equalized electrophilicity, are scientifically promising.
Islam, Nazmul; Ghosh, Dulal C.
2012-01-01
Electrophilicity is an intrinsic property of atoms and molecules. It probably originates logistically with the involvement in the physical process of electrostatics of soaked charge in electronic shells and the screened nuclear charge of atoms. Motivated by the existing view of conceptual density functional theory that similar to electronegativity and hardness equalization, there should be a physical process of equalization of electrophilicity during the chemical process of formation of hetero nuclear molecules, we have developed a new theoretical scheme and formula for evaluating the electrophilicity of hetero nuclear molecules. A comparative study with available bench marking reveals that the hypothesis of electrophilicity and equalization, and the present method of evaluating equalized electrophilicity, are scientifically promising. PMID:22408445
Time difference of arrival estimation of microseismic signals based on alpha-stable distribution
NASA Astrophysics Data System (ADS)
Jia, Rui-Sheng; Gong, Yue; Peng, Yan-Jun; Sun, Hong-Mei; Zhang, Xing-Li; Lu, Xin-Ming
2018-05-01
Microseismic signals are generally considered to follow the Gauss distribution. A comparison of the dynamic characteristics of the sample variance and the symmetry of microseismic signals with those of signals that follow an α-stable distribution reveals that microseismic signals have obvious pulse characteristics and that the probability density curve of the microseismic signal is approximately symmetric. Thus, the hypothesis that microseismic signals follow a symmetric α-stable distribution is proposed. On the premise of this hypothesis, the characteristic exponent α of the microseismic signals is obtained by utilizing fractional low-order statistics, and then a new method of time difference of arrival (TDOA) estimation of microseismic signals based on fractional low-order covariance (FLOC) is proposed. When this method is applied to the TDOA estimation of simulated Ricker wavelet signals and real microseismic signals, experimental results show that the FLOC method, which is based on the assumption of the symmetric α-stable distribution, leads to enhanced spatial resolution of the TDOA estimation relative to the generalized cross correlation (GCC) method, which is based on the assumption of the Gaussian distribution.
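A minimal Python sketch of the FLOC idea under the symmetric α-stable hypothesis (the synthetic pulse and all parameters are hypothetical): second-order moments diverge for α < 2, so the correlation is formed from fractional powers of order below α/2.

```python
import numpy as np
from scipy.stats import levy_stable

def frac(x, a):
    """Fractional low-order transform sign(x)|x|^a."""
    return np.sign(x) * np.abs(x) ** a

rng = np.random.default_rng(5)
alpha, delay, n = 1.6, 30, 4000
s = np.zeros(n)
s[1000:1200] = 3 * np.hanning(200)                 # stand-in for a wavelet
x = s + levy_stable.rvs(alpha, 0, size=n, random_state=rng)
y = np.roll(s, delay) + levy_stable.rvs(alpha, 0, size=n, random_state=rng)

a = b = alpha / 2 - 0.2                            # orders kept below alpha/2
lags = list(range(-60, 61))
floc = [np.mean(frac(x[:n - 60], a) * frac(np.roll(y, -m)[:n - 60], b))
        for m in lags]
print("estimated delay:", lags[int(np.argmax(floc))])   # should be near 30
```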
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
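The positivity problem is easy to exhibit numerically. A minimal Python sketch (the skewness value is a hypothetical choice): a Gram-Charlier series truncated after the skewness term dips below zero, so the truncated series is not a valid density.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

x = np.linspace(-6, 6, 1201)
phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard Gaussian density
skew = 1.2                                     # target third cumulant

# Truncated series: f(x) = phi(x) * [1 + (skew/6) * He3(x)], where He3 is
# the probabilists' Hermite polynomial (coefficient vector selects He3).
correction = 1 + (skew / 6) * hermeval(x, [0, 0, 0, 1])
f = phi * correction
print("min of truncated series:", f.min())     # negative: not a density
```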
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Medical research literature, until recently, exhibited substantial dominance of Fisher's significance test approach of statistical inference, which concentrates on the probability of type I error, over Neyman-Pearson's hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value. The Neyman-Pearson approach talks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches deal with the same objective and conclude in their own ways. The advancement in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to Neyman-Pearson's hypothesis test procedure.
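A minimal Python sketch of the power-analysis step that, as argued above, carries significance testing toward the Neyman-Pearson framework (the effect size and targets are hypothetical):

```python
from statsmodels.stats.power import TTestIndPower

# Fix alpha (type I error) and a target power (1 - type II error), then
# solve for the sample size needed to detect a medium effect.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,   # Cohen's d
                                   alpha=0.05,
                                   power=0.8)
print(f"required sample size per group: {n_per_group:.0f}")   # about 64
```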
Effect of accelerated aging on the cross-link density of medical grade silicones.
Mahomed, Aziza; Pormehr, Negin Bagheri
2016-11-25
Four specimens of Nagor silicone of different hardness (soft, medium and hard) were swollen until they reached equilibrium (i.e. constant mass) in five liquids at 25°C, before and after accelerated aging. For the specimens swollen before accelerated aging, the greatest swelling was obtained in methyl cyclohexane, while for the specimens swollen after accelerated aging, the greatest swelling was obtained in cyclohexane. The cross-link density, υ, was also calculated from the swelling measurements for all the specimens, before and after accelerated aging, using the Flory-Rehner equation. The softer silicones, which swelled the most, had lower υ values than harder silicones. The amount of swelling (measured in terms of ϕ) and υ varied significantly (p<0.05) in some cases, between the different silicone hardnesses and between different liquids. Furthermore, the cross-link density, υ, significantly (p<0.05) increased after accelerated aging in most liquids. Note: ϕ is defined as the volume fraction of polymer in its equilibrium swollen state. A probability value of statistical significance of 0.05 or 5% was selected; hence, if a p value of less than 0.05 was obtained, the null hypothesis was rejected (i.e. significant if p<0.05).
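A minimal Python sketch of the Flory-Rehner calculation referred to above; the interaction parameter χ and solvent molar volume V1 below are hypothetical placeholders, not the paper's measured values.

```python
import numpy as np

def crosslink_density(phi, chi, V1):
    """Flory-Rehner: nu = -[ln(1-phi) + phi + chi*phi^2]
                          / [V1 * (phi^(1/3) - phi/2)]."""
    return -(np.log(1 - phi) + phi + chi * phi**2) \
        / (V1 * (phi**(1 / 3) - phi / 2))

# phi is the polymer volume fraction at equilibrium swelling;
# more swelling means lower phi and, here, lower cross-link density.
for phi in (0.2, 0.4, 0.6):
    print(phi, crosslink_density(phi, chi=0.45, V1=1.0e-4))
```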
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
Contagious seed dispersal beneath heterospecific fruiting trees and its consequences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwit, Charles; Levey, Douglas, J.; Greenberg, Cathyrn, H.
2004-05-03
Kwit, Charles, D.J. Levey and Cathryn H. Greenberg. 2004. Contagious seed dispersal beneath heterospecific fruiting trees and its consequences. Oikos 107:303-308. An hypothesized advantage of seed dispersal is avoidance of high per capita mortality (i.e. density-dependent mortality) associated with dense populations of seeds and seedlings beneath parent trees. This hypothesis, inherent in nearly all seed dispersal studies, assumes that density effects are species-specific. Yet because many tree species exhibit overlapping fruiting phenologies and share dispersers, seeds may be deposited preferentially under synchronously fruiting heterospecific trees, another location where they may be particularly vulnerable to mortality, in this case by generalist seed predators. We demonstrate that frugivores disperse higher densities of Cornus florida seeds under fruiting (female) Ilex opaca trees than under non-fruiting (male) Ilex trees in temperate hardwood forest settings in South Carolina, USA. To determine if density of Cornus and/or Ilex seeds influences survivorship of dispersed Cornus seeds, we followed the fates of experimentally dispersed Cornus seeds in neighborhoods of differing, manipulated background densities of Cornus and Ilex seeds. We found that the probability of predation on dispersed Cornus seeds was a function of both Cornus and Ilex background seed densities. Higher densities of Ilex seeds negatively affected Cornus seed survivorship, and this was particularly evident as background densities of dispersed Cornus seeds increased. These results illustrate the importance of viewing seed dispersal and predation in a community context, as the pattern and intensity of density-dependent mortality may not be solely a function of conspecific densities.
Derived distribution of floods based on the concept of partial area coverage with a climatic appeal
NASA Astrophysics Data System (ADS)
Iacobellis, Vito; Fiorentino, Mauro
2000-02-01
A new rationale for deriving the probability distribution of floods and help in understanding the physical processes underlying the distribution itself is presented. On the basis of this, a model that introduces a number of new assumptions is developed. The basic ideas are as follows: (1) The peak direct streamflow Q can always be expressed as the product of two random variates, namely, the average runoff per unit area ua and the peak contributing area a; (2) the distribution of ua conditional on a can be related to that of the rainfall depth occurring in a duration equal to a characteristic response time τa of the contributing part of the basin; and (3) τa is assumed to vary with a according to a power law. Consequently, the probability density function of Q can be found as the integral, over the total basin area A, of the density of a times the conditional density function of ua given a. It is suggested that ua can be expressed as a fraction of the excess rainfall and that the annual flood distribution can be related to that of Q by the hypothesis that the flood occurrence process is Poissonian. In the proposed model it is assumed, as an exploratory attempt, that a and ua are gamma and Weibull distributed, respectively. The model was applied to the annual flood series of eight gauged basins in Basilicata (southern Italy) with catchment areas ranging from 40 to 1600 km2. The results showed strong physical consistence as the parameters tended to assume values in good agreement with well-consolidated geomorphologic knowledge and suggested a new key to understanding the climatic control of the probability distribution of floods.
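The central integral admits a direct numeric sketch in Python (all parameter values are hypothetical; the abstract specifies only the gamma and Weibull forms):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

A_total = 500.0                                   # basin area, km^2
f_area = stats.gamma(a=2.0, scale=60.0).pdf       # density of contributing area a

def f_runoff_given_area(u, a):
    # Weibull runoff per unit area; the scale is tied to a through a power
    # law, echoing the response-time assumption (exponent hypothetical).
    return stats.weibull_min(c=1.5, scale=10.0 * a**-0.2).pdf(u)

def f_Q(q):
    # f_Q(q) = integral over a of f_a(a) * f_{u|a}(q/a) / a
    integrand = lambda a: f_area(a) * f_runoff_given_area(q / a, a) / a
    return quad(integrand, 1e-3, A_total)[0]

for q in (50.0, 200.0, 800.0):                    # peak streamflow values
    print(q, f_Q(q))
```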
Sedinger, J.S.; Chelgren, N.D.; Ward, D.H.; Lindberg, M.S.
2008-01-01
1. Patterns of temporary emigration (associated with non-breeding) are important components of variation in individual quality. Permanent emigration from the natal area has important implications for both individual fitness and local population dynamics. 2. We estimated both permanent and temporary emigration of black brent geese (Branta bernicla nigricans Lawrence) from the Tutakoke River colony, using observations of marked brent geese on breeding and wintering areas, and recoveries of ringed individuals by hunters. We used the likelihood developed by Lindberg, Kendall, Hines & Anderson 2001 (Combining band recovery data and Pollock's robust design to model temporary and permanent emigration. Biometrics, 57, 273-281) to assess hypotheses and estimate parameters. 3. Temporary emigration (the converse of breeding) varied among age classes up to age 5, and differed between individuals that bred in the previous years vs. those that did not. Consistent with the hypothesis of variation in individual quality, individuals with a higher probability of breeding in one year also had a higher probability of breeding the next year. 4. Natal fidelity of females ranged from 0.70 ± 0.07 to 0.96 ± 0.18 and averaged 0.83. In contrast to Lindberg et al. (1998), we did not detect a relationship between fidelity and local population density. Natal fidelity was negatively correlated with first-year survival, suggesting that competition among individuals of the same age for breeding territories influenced dispersal. Once females nested at the Tutakoke River, colony breeding fidelity was 1.0. 5. Our analyses show substantial variation in individual quality associated with fitness, which other analyses suggest is strongly influenced by early environment. Our analyses also suggest substantial interchange among breeding colonies of brent geese, as first shown by Lindberg et al. (1998).
An Exercise for Illustrating the Logic of Hypothesis Testing
ERIC Educational Resources Information Center
Lawton, Leigh
2009-01-01
Hypothesis testing is one of the more difficult concepts for students to master in a basic, undergraduate statistics course. Students often are puzzled as to why statisticians simply don't calculate the probability that a hypothesis is true. This article presents an exercise that forces students to lay out on their own a procedure for testing a…
Muiruri, Evalyne W; Rainio, Kalle; Koricheva, Julia
2016-03-01
The enemies hypothesis states that reduced insect herbivory in mixed-species stands can be attributed to more effective top-down control by predators with increasing plant diversity. Although evidence for this mechanism exists for invertebrate predators, studies on avian predation are comparatively rare and have not explicitly tested the effects of diversity at different spatial scales, even though heterogeneity at macro- and micro-scales can influence bird foraging selection. We studied bird predation in an established forest diversity experiment in SW Finland, using artificial larvae installed on birch, alder and pine trees. Effects of tree species diversity and densities on bird predation were tested at two different scales: between plots and within the neighbourhood around focal trees. At the neighbourhood scale, birds preferentially foraged on focal trees surrounded by a higher diversity of neighbours. However, predation rates did not increase with tree species richness at the plot level and were instead negatively affected by tree height variation within the plot. The highest probability of predation was observed on pine, and rates of predation increased with the density of pine regardless of scale. Strong tree species preferences observed may be due to a combination of innate bird species preferences and opportunistic foraging on profitable-looking artificial prey. This study therefore finds partial support for the enemies hypothesis and highlights the importance of spatial scale and focal tree species in modifying trophic interactions between avian predators and insect herbivores in forest ecosystems.
The Heuristic Value of p in Inductive Statistical Inference
Krueger, Joachim I.; Heck, Patrick R.
2017-01-01
Many statistical methods yield the probability of the observed data – or data more extreme – under the assumption that a particular hypothesis is true. This probability is commonly known as ‘the’ p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say. PMID:28649206
The Heuristic Value of p in Inductive Statistical Inference.
Krueger, Joachim I; Heck, Patrick R
2017-01-01
Many statistical methods yield the probability of the observed data - or data more extreme - under the assumption that a particular hypothesis is true. This probability is commonly known as 'the' p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say.
Fixation probabilities of evolutionary coordination games on two coupled populations
NASA Astrophysics Data System (ADS)
Zhang, Liye; Ying, Limin; Zhou, Jie; Guan, Shuguang; Zou, Yong
2016-09-01
Evolutionary forces resulting from competition between different populations are common and change the evolutionary behavior of a single population. In an isolated population playing a coordination game with two strategies (e.g., s1 and s2), previous studies focused on determining the fixation probability that the system becomes occupied by only one strategy (s1), and the expected time to fixation, given an initial mixture of the two strategies. In this work, we propose a model of two interdependent populations, disclosing the effects of the interaction strength on fixation probabilities. In the well-mixing limit, a detailed linear stability analysis is performed, which allows us to find and to classify the different equilibria, yielding a clear picture of the bifurcation patterns in phase space. We demonstrate that the interactions between populations crucially alter the dynamic behavior. More specifically, if the coupling strength is larger than some threshold value, the critical initial density of one strategy (s1) that corresponds to fixation is significantly delayed. Instead, the two populations evolve to the opposite all-(s2) state, a result in favor of the red queen hypothesis. We delineate the extinction time of strategy s1 explicitly, which takes an exponential form. These results are validated by systematic numerical simulations.
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.
Estimating loblolly pine size-density trajectories across a range of planting densities
Curtis L. VanderSchaaf; Harold E. Burkhart
2013-01-01
Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...
ERIC Educational Resources Information Center
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…
Parkinson Disease Detection from Speech Articulation Neuromechanics.
Gómez-Vilda, Pedro; Mekyska, Jiri; Ferrández, José M; Palacios-Alonso, Daniel; Gómez-Rodellar, Andrés; Rodellar-Biarge, Victoria; Galaz, Zoltan; Smekal, Zdenek; Eliasova, Ilona; Kostalova, Milena; Rektorova, Irena
2017-01-01
Aim: The research described is intended to give a description of articulation dynamics as a correlate of the kinematic behavior of the jaw-tongue biomechanical system, encoded as a probability distribution of an absolute joint velocity. This distribution may be used in detecting and grading speech from patients affected by neurodegenerative illnesses, such as Parkinson Disease. Hypothesis: The work hypothesis is that the probability density function of the absolute joint velocity includes information on the stability of phonation when applied to sustained vowels, as well as on fluency if applied to connected speech. Methods: A dataset of sustained vowels recorded from Parkinson Disease patients is contrasted with similar recordings from normative subjects. The probability distribution of the absolute kinematic velocity of the jaw-tongue system is extracted from each utterance. A Random Least Squares Feed-Forward Network (RLSFN) has been used as a binary classifier working on the pathological and normative datasets in a leave-one-out strategy. Monte Carlo simulations have been conducted to estimate the influence of the stochastic nature of the classifier. Two datasets, one for each gender, were tested, including 26 normative and 53 pathological subjects in the male set, and 25 normative and 38 pathological in the female set. Results: Male and female data subsets were tested in single runs, yielding equal error rates under 0.6% (accuracy over 99.4%). Due to the stochastic nature of each experiment, Monte Carlo runs were conducted to test the reliability of the methodology. The average detection results after 200 Monte Carlo runs of a 200-hyperplane hidden-layer RLSFN are given in terms of Sensitivity (males: 0.9946, females: 0.9942), Specificity (males: 0.9944, females: 0.9941) and Accuracy (males: 0.9945, females: 0.9942). The area under the ROC curve is 0.9947 (males) and 0.9945 (females). The equal error rate is 0.0054 (males) and 0.0057 (females). Conclusions: The proposed methodology shows that the use of highly normalized descriptors, such as the probability distribution of kinematic variables of vowel articulation stability, which has some interesting properties in terms of information theory, boosts the potential of simple yet powerful classifiers to produce quite acceptable detection results in Parkinson Disease.
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
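A minimal sketch of the core idea described above: replace the sample mean with the L1-median (geometric median) as the location estimator of a density model. The Weiszfeld iteration, the MAD-based spread, and every name below are illustrative assumptions, not the paper's actual estimator:

    import numpy as np

    def l1_median(X, n_iter=200, eps=1e-9):
        # Geometric (L1) median via Weiszfeld's fixed-point iteration.
        m = X.mean(axis=0)
        for _ in range(n_iter):
            d = np.maximum(np.linalg.norm(X - m, axis=1), eps)
            m_new = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
            if np.linalg.norm(m_new - m) < eps:
                break
            m = m_new
        return m

    def robust_gaussian_density(X):
        # Location from the L1-median; spread from the median absolute
        # deviation (MAD); both are resistant to outliers.
        mu = l1_median(X)
        sigma = 1.4826 * np.median(np.abs(X - mu), axis=0)
        cov = np.diag(sigma ** 2)
        inv, det, k = np.linalg.inv(cov), np.linalg.det(cov), X.shape[1]
        def pdf(x):
            r = x - mu
            return np.exp(-0.5 * r @ inv @ r) / np.sqrt((2 * np.pi) ** k * det)
        return pdf

With outliers injected into a sample, a density fitted this way stays centered on the bulk of the data, whereas a mean/covariance fit drifts toward the contaminants.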
ERIC Educational Resources Information Center
Trafimow, David
2017-01-01
There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (ages 2;11-6;0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
Ockham's razor and Bayesian analysis. [statistical theory for systems evaluation
NASA Technical Reports Server (NTRS)
Jefferys, William H.; Berger, James O.
1992-01-01
'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.
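The automatic Occam penalty can be seen in a two-line example: a fair-coin hypothesis with no free parameter competes against a biased-coin hypothesis whose parameter carries a uniform prior. A sketch in Python (the coin setting and numbers are illustrative, not from the paper):

    from math import factorial

    def marginal_likelihood_biased(heads, tails):
        # Integral of p**h * (1-p)**t over a uniform prior on p:
        # h! t! / (h + t + 1)!
        return factorial(heads) * factorial(tails) / factorial(heads + tails + 1)

    def bayes_factor_fair_vs_biased(heads, tails):
        likelihood_fair = 0.5 ** (heads + tails)   # no adjustable parameter
        return likelihood_fair / marginal_likelihood_biased(heads, tails)

    # 6 heads in 10 flips: the simpler hypothesis is favored (factor ~2.3),
    # even though the biased model fits better at its best-tuned p.
    print(bayes_factor_fair_vs_biased(6, 4))

The biased model spreads its prior over parameter values that predict the data poorly, diluting its marginal likelihood; this is the enhanced posterior probability for hypotheses with fewer adjustable parameters that the abstract describes.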
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but where there is a non-trivial probability of the particle tunneling into the well's retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.
Storkel, Holly L.; Lee, Jaehoon; Cox, Casey
2016-01-01
Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276
Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey
2016-11-01
Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.
A critique of statistical hypothesis testing in clinical research
Raha, Somik
2011-01-01
Many have documented the difficulty of using the current paradigm of Randomized Controlled Trials (RCTs) to test and validate the effectiveness of alternative medical systems such as Ayurveda. This paper critiques the applicability of RCTs for all clinical knowledge-seeking endeavors, of which Ayurveda research is a part. This is done by examining statistical hypothesis testing, the underlying foundation of RCTs, from a practical and philosophical perspective. In the philosophical critique, the two main worldviews of probability are those of the Bayesian and the frequentist. The frequentist worldview is a special case of the Bayesian worldview requiring the unrealistic assumptions of knowing nothing about the universe and believing that all observations are unrelated to each other. Many have claimed that the first belief is necessary for science, and this claim is debunked by comparing variations in learning with different prior beliefs. Moving beyond the Bayesian and frequentist worldviews, the notion of hypothesis testing itself is challenged on the grounds that a hypothesis is an unclear distinction, and assigning a probability on an unclear distinction is an exercise that does not lead to clarity of action. This critique is of the theory itself and not any particular application of statistical hypothesis testing. A decision-making frame is proposed as a way of both addressing this critique and transcending ideological debates on probability. An example of a Bayesian decision-making approach is shown as an alternative to statistical hypothesis testing, utilizing data from a past clinical trial that studied the effect of Aspirin on heart attacks in a sample population of doctors. Since a major reason for the prevalence of RCTs in academia is legislation requiring them, the ethics of legislating the use of statistical methods for clinical research is also examined. PMID:22022152
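The aspirin example lends itself to a compact Beta-Binomial sketch of the decision-oriented alternative: compute the posterior probability that aspirin lowers the event rate, rather than a p-value. The counts below are the widely cited Physicians' Health Study figures and should be checked against the paper itself; the uniform priors are an assumption:

    import numpy as np

    rng = np.random.default_rng(0)

    # Myocardial infarctions / participants (widely cited figures).
    aspirin_events, aspirin_n = 104, 11037
    placebo_events, placebo_n = 189, 11034

    # Beta(1, 1) priors updated with binomial counts give Beta posteriors.
    rate_aspirin = rng.beta(1 + aspirin_events, 1 + aspirin_n - aspirin_events, 200_000)
    rate_placebo = rng.beta(1 + placebo_events, 1 + placebo_n - placebo_events, 200_000)

    # A direct probability statement about the clinical question,
    # which null-hypothesis testing does not supply.
    print((rate_aspirin < rate_placebo).mean())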
A Theoretical and Experimental Investigation of 1/f Noise in the Alpha Decay Rates of Americium-241.
NASA Astrophysics Data System (ADS)
Pepper, Gary T.
New experimental methods and data analysis techniques were used to investigate the hypothesis of the existence of 1/f noise in alpha-particle emission rates for 241Am. Experimental estimates of the flicker floor were found to be almost two orders of magnitude less than Handel's theoretical prediction and previous measurements. The existence of a flicker floor for 57Co decay, a process in which no charged particles are emitted, indicates that instrumental instability is likely responsible for the values of the flicker floor obtained. The experimental results and the theoretical arguments presented indicate that a re-examination of Handel's theory of 1/f noise is appropriate. Methods of numerical simulation of noise processes with a 1/f^n power spectral density were developed. These were used to investigate various statistical aspects of 1/f^n noise. The probability density function for the Allan variance was investigated in order to establish confidence limits for the observations made. The effect of using grouped (correlated) data for evaluating the Allan variance was also investigated.
Prey density and the behavioral flexibility of a marine predator: The common murre (Uria aalge)
Harding, A.M.A.; Piatt, John F.; Schmutz, J.A.; Shultz, M.T.; van Pelt, Thomas I.; Kettle, Arthur B.; Speckman, Suzann G.
2007-01-01
Flexible time budgets allow individual animals to buffer the effects of variable food availability by allocating more time to foraging when food density decreases. This trait should be especially important for marine predators that forage on patchy and ephemeral food resources. We examined flexible time allocation by a long-lived marine predator, the Common Murre (Uria aalge), using data collected in a five-year study at three colonies in Alaska (USA) with contrasting environmental conditions. Annual hydroacoustic surveys revealed an order-of-magnitude variation in food density among the 15 colony-years of study. We used data on parental time budgets and local prey density to test predictions from two hypotheses: Hypothesis A, the colony attendance of seabirds varies nonlinearly with food density; and Hypothesis B, flexible time allocation of parent murres buffers chicks against variable food availability. Hypothesis A was supported; colony attendance by murres was positively correlated with food over a limited range of poor-to-moderate food densities, but independent of food over a broader range of higher densities. This is the first empirical evidence for a nonlinear response of a marine predator's time budget to changes in prey density. Predictions from Hypothesis B were largely supported: (1) chick-feeding rates were fairly constant over a wide range of densities and only dropped below 3.5 meals per day at the low end of prey density, and (2) there was a nonlinear relationship between chick-feeding rates and time spent at the colony, with chick-feeding rates only declining after time at the colony by the nonbrooding parent was reduced to a minimum. The ability of parents to adjust their foraging time by more than 2 h/d explains why they were able to maintain chick-feeding rates of more than 3.5 meals/d across a 10-fold range in local food density. © 2007 by the Ecological Society of America.
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A
2005-01-01
To determine the frequency of sampling in small water distribution systems (<5,000 inhabitants), we compared the results under different hypotheses about the distribution of bacteria. We carried out two sampling programs to monitor the water distribution system in a town in Central Italy between July and September 1992; the assumption of a Poisson distribution implied 4 water samples, the assumption of a negative binomial distribution implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (SD = 5.29) for 21 samples and 3 coliforms/100 ml (SD = 6) for four samples. However, the hypothesis of homogeneity was rejected (p < 0.001), and the probability of a type II error under the assumption of heterogeneity was higher with 4 samples (β = 0.24) than with 21 (β = 0.05). For this small network, determining sample size according to the heterogeneity hypothesis strengthens the statement that water is drinkable, compared with the homogeneity assumption.
A comparator-hypothesis account of biased contingency detection.
Vadillo, Miguel A; Barberia, Itxaso
2018-02-12
Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and put to the test the predictions of the Comparator Hypothesis in the domain of biased contingency detection.
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
High throughput nonparametric probability density estimation
Farmer, Jenny
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
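The scaled quantile residual diagnostic mentioned in the abstract can be approximated in a few lines: if the estimated CDF is correct, its values at the sorted sample behave like uniform order statistics, and deviations scaled by the order statistics' standard deviations expose misfit. A sketch under that reading (the authors' exact scaling may differ):

    import numpy as np
    from scipy.stats import norm

    def scaled_quantile_residuals(sample, cdf):
        u = cdf(np.sort(sample))                    # should resemble sorted U(0,1)
        n = len(u)
        mean = np.arange(1, n + 1) / (n + 1.0)      # E[U_(i)]
        sd = np.sqrt(mean * (1 - mean) / (n + 2))   # SD[U_(i)]
        return (u - mean) / sd                      # roughly O(1) under a good fit

    x = np.random.default_rng(1).normal(size=500)
    r = scaled_quantile_residuals(x, norm.cdf)      # score the true model
    print(np.abs(r).max())                          # large values signal misfit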
Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyż, W.; Zalewski, K.
2005-10-01
It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
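The underlying transformation rule is the standard change-of-variables formula, p_y(y) = p_x(g⁻¹(y)) |dg⁻¹/dy|, which is what makes a uniform admissible region non-uniform after reparameterization. A one-dimensional illustration with a hypothetical cubic change of state variable:

    import numpy as np

    a, b = 1.0, 2.0
    p_x = lambda x: 1.0 / (b - a) if a <= x <= b else 0.0   # uniform prior in x

    def p_y(y):
        # y = g(x) = x**3, so g_inv(y) = y**(1/3)
        x = np.cbrt(y)
        jacobian = 1.0 / (3.0 * x ** 2)                     # |d g_inv / dy|
        return p_x(x) * jacobian

    # The transformed density is no longer flat over [1, 8]:
    print(p_y(1.5), p_y(7.9))

A prior that is to remain uninformative under such reparameterizations must instead be constructed, as the paper argues, from the Principle of Transformation Groups.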
Probability of stress-corrosion fracture under random loading
NASA Technical Reports Server (NTRS)
Yang, J. N.
1974-01-01
The mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.
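A sketch of the final step under one common reading: if only the mean and variance of the cumulative damage D are known, the maximum-entropy density on the real line is Gaussian, and fracture is the event of D exceeding unity. The damage statistics below are hypothetical, and the paper's actual entropy constraints may differ:

    from math import erf, sqrt

    def fracture_probability(mean_damage, var_damage, threshold=1.0):
        # Max-entropy density for a known mean and variance is N(mean, var);
        # the failure probability is P(D >= threshold).
        z = (threshold - mean_damage) / sqrt(var_damage)
        return 0.5 * (1.0 - erf(z / sqrt(2.0)))

    print(fracture_probability(0.6, 0.04))   # ≈ 0.023 for these made-up values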
NASA Astrophysics Data System (ADS)
Kageyama, Daisuke; Anbutsu, Hisashi; Shimada, Masakazu; Fukatsu, Takema
2007-04-01
Symbiont-induced male-killing phenotypes have been found in a variety of insects. Conventionally, these phenotypes have been divided into two categories according to the timing of action: early male killing at embryonic stages and late male killing at late larval stages. In Drosophila species, endosymbiotic bacteria of the genus Spiroplasma have been known to cause early male killing. Here, we report that a spiroplasma strain normally causing early male killing also induces late male killing depending on the maternal host age: male-specific mortality of larvae and pupae was more frequently observed in the offspring of young females. As the lowest spiroplasma density and occasional male production were also associated with newly emerged females, we proposed the density-dependent hypothesis for the expression of early and late male-killing phenotypes. Our finding suggested that (1) early and late male-killing phenotypes can be caused by the same symbiont and probably by the same mechanism; (2) late male killing may occur as an attenuated expression of early male killing; (3) expression of early and late male-killing phenotypes may be dependent on the symbiont density, and thus, could potentially be affected by the host immunity and regulation; and (4) early male killing and late male killing could be alternative strategies adopted by microbial reproductive manipulators.
A Bayesian Method for Evaluating and Discovering Disease Loci Associations
Jiang, Xia; Barmada, M. Michael; Cooper, Gregory F.; Becich, Michael J.
2011-01-01
Background A genome-wide association study (GWAS) typically involves examining representative SNPs in individuals from some population. A GWAS data set can concern a million SNPs and may soon concern billions. Researchers investigate the association of each SNP individually with a disease, and it is becoming increasingly commonplace to also analyze multi-SNP associations. Techniques for handling so many hypotheses include the Bonferroni correction and recently developed Bayesian methods. These methods can encounter problems. Most importantly, they are not applicable to a complex multi-locus hypothesis which has several competing hypotheses rather than only a null hypothesis. A method that computes the posterior probability of complex hypotheses is a pressing need. Methodology/Findings We introduce the Bayesian network posterior probability (BNPP) method which addresses the difficulties. The method represents the relationship between a disease and SNPs using a directed acyclic graph (DAG) model, and computes the likelihood of such models using a Bayesian network scoring criterion. The posterior probability of a hypothesis is computed based on the likelihoods of all competing hypotheses. The BNPP can not only be used to evaluate a hypothesis that has previously been discovered or suspected, but also to discover new disease loci associations. The results of experiments using simulated and real data sets are presented. Our results concerning simulated data sets indicate that the BNPP exhibits both better evaluation and discovery performance than does a p-value based method. For the real data sets, previous findings in the literature are confirmed and additional findings are found. Conclusions/Significance We conclude that the BNPP resolves a pressing problem by providing a way to compute the posterior probability of complex multi-locus hypotheses. A researcher can use the BNPP to determine the expected utility of investigating a hypothesis further. Furthermore, we conclude that the BNPP is a promising method for discovering disease loci associations. PMID:21853025
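The key departure from null-hypothesis testing in the abstract above is normalizing each hypothesis's score over all competing hypotheses. A minimal sketch of that step (the model scores are placeholders, not BNPP outputs):

    import numpy as np

    def posterior_probabilities(log_marginal_likelihoods, priors):
        # P(H_i | D) = P(D | H_i) P(H_i) / sum_j P(D | H_j) P(H_j),
        # computed in log space for numerical stability.
        log_post = np.asarray(log_marginal_likelihoods) + np.log(priors)
        log_post -= log_post.max()
        post = np.exp(log_post)
        return post / post.sum()

    # Three competing DAG models relating two SNPs to a disease (hypothetical).
    print(posterior_probabilities([-1502.3, -1498.7, -1500.1], [1/3, 1/3, 1/3]))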
ERIC Educational Resources Information Center
Varga, Julia
2006-01-01
This paper analyses students' application strategies to higher education, the effects of labour market expectations and admission probabilities. The starting hypothesis of this study is that students consider the expected utility of their choices, a function of expected net lifetime earnings and the probability of admission. Based on a survey…
Masculinity-femininity predicts sexual orientation in men but not in women.
Udry, J Richard; Chantala, Kim
2006-11-01
Using the nationally representative sample of about 15,000 Add Health respondents in Wave III, the hypothesis is tested that masculinity-femininity in adolescence is correlated with sexual orientation 5 years later and 6 years later: that is, that for adolescent males in 1995 and again in 1996, more feminine males have a higher probability of self-identifying as homosexuals in 2001-02. It is predicted that for adolescent females in 1995 and 1996, more masculine females have a higher probability of self-identifying as homosexuals in 2001-02. Masculinity-femininity is measured by the classical method used by Terman & Miles. For both time periods, the hypothesis was strongly confirmed for males: the more feminine males had several times the probability of being attracted to same-sex partners, several times the probability of having same-sex partners, and several times the probability of self-identifying as homosexuals, compared with more masculine males. For females, no relationship was found at either time period between masculinity and sex of preference. The biological mechanism underlying homosexuality may be different for males and females.
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
The relationship between species detection probability and local extinction probability
Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.
2004-01-01
In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.
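The combination step of the ad hoc approach above reduces to a weighted average of the group-specific estimates. A sketch, with species counts as plausible weights (the paper's exact weighting is not specified in the abstract):

    import numpy as np

    def combined_vital_rate(group_estimates, group_weights):
        # e.g., local extinction probability estimated separately for
        # rarely detected vs. frequently detected species, then pooled.
        return np.average(group_estimates, weights=np.asarray(group_weights, float))

    # Hypothetical: 40 low-detectability species, 60 high-detectability ones.
    print(combined_vital_rate([0.25, 0.10], [40, 60]))   # -> 0.16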
Stauffer, Glenn E.; Rotella, Jay J.; Garrott, Robert A.; Kendall, William L.
2014-01-01
In colonial-breeding species, prebreeders often emigrate temporarily from natal reproductive colonies, then subsequently return for one or more years before producing young. Variation in attendance-nonattendance patterns can have implications for subsequent recruitment. We used open robust-design multistate models and 28 years of encounter data for prebreeding female Weddell seals (Leptonychotes weddellii [Lesson]) to evaluate hypotheses about (1) the relationships of temporary emigration (TE) probabilities to environmental and population size covariates and (2) motivations for attendance and consequences of nonattendance for subsequent probability of recruitment to the breeding population. TE probabilities were density dependent (estimated effect of population size in the previous year: β̂_BPOP = 0.66, SE = 0.17), increased when the fast-ice edge was distant from the breeding colonies (estimated effect of distance to the sea-ice edge in the current year: β̂_DIST = 0.75, SE = 0.04), and were strongly age and state dependent. These results suggest that trade-offs between potential benefits and costs of colony attendance vary annually and might influence motivation to attend colonies. Mean recruitment probabilities (over all years, for 10-year-old seals in the specified prebreeder state) were greatest for seals that consistently attended colonies in two or more years (e.g., 0.56, SD = 0.17) and lowest for seals that never or inconsistently attended prior to recruitment (e.g., 0.32, SD = 0.15). In colonial-breeding seabirds, repeated colony attendance increases subsequent probability of recruitment to the adult breeding population; our results suggest similar implications for a marine mammal and are consistent with the hypothesis that prebreeders were motivated to attend reproductive colonies to gain reproductive skills or perhaps to optimally synchronize estrus through close association with mature breeding females.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for
NASA Astrophysics Data System (ADS)
Muluneh Bitew, Alemayehu; Keesstra, Saskia; Stroosnijder, Leo
2015-04-01
Maize yield in the Central Rift Valley of Ethiopia (CRV) suffers from dry spells at sensitive growth stages. Risk of crop failure makes farmers reluctant to invest in fertilizer. This makes the CRV food insecure. There are farms with well-maintained terraces and Rain Water Harvesting (RWH) systems using concrete farm ponds. We tested the hypothesis that on these farms supplemental irrigation with simultaneous crop intensification might boost production of a small maize area sufficiently to improve food security. Intensification includes a higher plant density of a hybrid variety under optimum fertilization. First we assessed the probability of occurrence of dry spells. Then we estimated the availability of sufficient runoff in the ponds in dry years. During 2012 (dry) and 2013 (wet), on-farm field research was conducted with 10 combinations of supplemental irrigation and plant density. The simplest was rainfed farming with 30,000 plants ha-1. The most advanced was no water stress and 75,000 plants ha-1. Finally we compared our on-farm yield with that of neighbouring farmers. Because 2013 was a wet year, no irrigation was needed. Our long-term daily rainfall analysis (1970-2011) proves the occurrence of dry spells during the onset of the maize season (the Belg months March and April). In March there is hardly enough water in the ponds, so we advise later sowing. Starting from April, available water (runoff from a 2.2 ha catchment) matches the crop water requirement (for 0.5 ha maize). Significant differences in grain and total biomass yield were observed between rainfed conditions and the other irrigation levels. However, since the largest difference is only 12%, the investment in irrigation in non-critical drought years is not worth the effort. There was also a limited effect (18-22%) of increasing plant density, so we advise not to use more than 45,000 plants ha-1. The differences in grain yield and total biomass between farmers' own practice and our on-farm research were 101% and 84%, respectively, in 2012. This large increase in grain yield is attributed to the higher use of fertilizer (150% of the recommended rate) compared to the current use (50% or less) by adjacent farmers. Our hypothesis was that supplemental irrigation in combination with increased plant density would greatly increase grain yield. This hypothesis could not be proven with our 2-year experiment. Our experiment, once again, suggests that yield lower than attainable is not a matter of water shortage but rather an effect of lack of fertilizer.
Evaluating detection probabilities for American marten in the Black Hills, South Dakota
Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.
2007-01-01
Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (≤1 marten/10.2 km2) densities within eight 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
Cooperative Localization for Multi-AUVs Based on GM-PHD Filters and Information Entropy Theory
Zhang, Lichuan; Wang, Tonghao; Xu, Demin
2017-01-01
Cooperative localization (CL) is considered a promising method for underwater localization with respect to multiple autonomous underwater vehicles (multi-AUVs). In this paper, we proposed a CL algorithm based on information entropy theory and the probability hypothesis density (PHD) filter, aiming to enhance the global localization accuracy of the follower. In the proposed framework, the follower carries lower cost navigation systems, whereas the leaders carry better ones. Meanwhile, the leaders acquire the followers’ observations, including both measurements and clutter. Then, the PHD filters are utilized on the leaders and the results are communicated to the followers. The followers then perform weighted summation based on all received messages and obtain a final positioning result. Based on the information entropy theory and the PHD filter, the follower is able to acquire a precise knowledge of its position. PMID:28991191
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, E.; Powell, J.F.; Sham, P.
1995-10-09
We describe a method of systematically searching for major genes in disorders of unknown mode of inheritance, using linkage analysis. Our method is designed to minimize the probability of missing linkage due to inadequate exploration of data. We illustrate this method with the results of a search for a locus for schizophrenia on chromosome 12 using 22 highly polymorphic markers in 23 high density pedigrees. The markers span approximately 85-90% of the chromosome and are on average 9.35 cM apart. We have analysed the data using the most plausible current genetic models and allowing for the presence of genetic heterogeneity. None of the markers was supportive of linkage and the distribution of the heterogeneity statistics was in accordance with the null hypothesis. 53 refs., 2 figs., 4 tabs.
The resolution of point sources of light as analyzed by quantum detection theory
NASA Technical Reports Server (NTRS)
Helstrom, C. W.
1972-01-01
The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.
Explorations in Statistics: Power
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four…
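Power is easy to explore by simulation, in the spirit of the series: generate data under a chosen alternative, test at level α, and count rejections. A sketch for a two-sample t test (effect size and sample size are illustrative):

    import numpy as np
    from scipy import stats

    def simulated_power(effect=0.5, n=30, alpha=0.05, n_sim=10_000, seed=2):
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sim):
            a = rng.normal(0.0, 1.0, n)          # control group
            b = rng.normal(effect, 1.0, n)       # treated group, true effect d
            rejections += stats.ttest_ind(a, b).pvalue < alpha
        return rejections / n_sim

    print(simulated_power())   # ≈ 0.48 for d = 0.5 with 30 per group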
Killeen's (2005) "p[subscript rep]" Coefficient: Logical and Mathematical Problems
ERIC Educational Resources Information Center
Maraun, Michael; Gabriel, Stephanie
2010-01-01
In his article, "An Alternative to Null-Hypothesis Significance Tests," Killeen (2005) urged the discipline to abandon the practice of "p[subscript obs]"-based null hypothesis testing and to quantify the signal-to-noise characteristics of experimental outcomes with replication probabilities. He described the coefficient that he…
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation are retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and associated probabilities that each method is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramér and Leadbetter is extended to the envelope of a nonstationary random process possessing an evolutionary power spectral density. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
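For orientation, the classical baseline that envelope-based solutions refine is the Poisson crossing approximation, P_fail(T) = 1 − exp(−∫₀ᵀ ν(t) dt), with ν(t) the mean up-crossing rate of the barrier. A sketch with a hypothetical decaying crossing rate for a nonstationary process:

    import numpy as np

    def first_excursion_probability(crossing_rate, t_grid):
        # Poisson approximation: barrier crossings are treated as independent
        # events with time-varying mean rate nu(t), so
        # P_fail(T) = 1 - exp(-integral of nu over [0, T]).
        nu = crossing_rate(t_grid)
        integral = float(np.sum((nu[1:] + nu[:-1]) * np.diff(t_grid)) / 2.0)
        return 1.0 - np.exp(-integral)

    t = np.linspace(0.0, 10.0, 1001)
    nu = lambda s: 0.05 * np.exp(-0.2 * s)     # made-up evolutionary rate
    print(first_excursion_probability(nu, t))  # ≈ 0.19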
New support for an old hypothesis: density affects extra-pair paternity
Mayer, Christian; Pasinelli, Gilberto
2013-01-01
Density has been suggested to affect variation in extra-pair paternity (EPP) in avian mating systems, because increasing density promotes encounter rates and thus mating opportunities. However, the significance of density affecting EPP variation in intra- and interspecific comparisons has remained controversial, with more support from intraspecific comparisons. Neither experimental nor empirical studies have consistently provided support for the density hypothesis. Testing the density hypothesis is challenging because density measures may not necessarily reflect extra-pair mating opportunities, mate guarding efforts may covary with density, populations studied may differ in migratory behavior and/or climatic conditions, and variation in density may be insufficient. Accounting for these potentially confounding factors, we tested whether EPP rates within and among subpopulations of the reed bunting (Emberiza schoeniclus) were related to density. Our analyses were based on data from 13 subpopulations studied over 4 years. Overall, 56.4% of totally 181 broods contained at least one extra-pair young (EPY) and 37.1% of totally 669 young were of extra-pair origin. Roughly 90% of the extra-pair fathers were from the adjacent territory or from the territory after the next one. Within subpopulations, the proportion of EPY in broods was positively related to local breeding density. Similarly, among subpopulations, proportion of EPY was positively associated with population density. EPP was absent in subpopulations consisting of single breeding pairs, that is, without extra-pair mating opportunities. Our study confirms that density is an important biological factor, which significantly influences the amount of EPP within and among subpopulations, but also suggests that other mechanisms influence EPP beyond the variation explained by density. PMID:23533071
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-A F^2), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded-potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
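Since P(F) ∝ exp(−A F^2) is Gaussian in each force component, its normalization and second moment follow immediately. A numerical consistency check (the value of A is arbitrary here, whereas in the paper it is fixed by density, temperature, and the pair potential):

    import numpy as np

    def trapezoid(y, x):
        # Simple trapezoidal rule, avoiding version-specific numpy names.
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    A = 2.5                                    # hypothetical constant
    P = lambda F: np.sqrt(A / np.pi) * np.exp(-A * F ** 2)

    F = np.linspace(-6.0, 6.0, 20001)
    print(trapezoid(P(F), F))                  # ≈ 1: P(F) is normalized
    print(trapezoid(F ** 2 * P(F), F))         # <F^2> = 1/(2A) = 0.2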
Postfragmentation density function for bacterial aggregates in laminar flow
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John
2014-01-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205
Distributed Immune Systems for Wireless Network Information Assurance
2010-04-26
…sequential probability ratio test (SPRT), where the goal is to optimize a hypothesis-testing problem given a trade-off between the probability of errors and the… using cumulative sum (CUSUM) and Girshick-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio… the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability…
A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.
Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo
2016-01-01
In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
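Both the Gaussian scoring of peak candidates and the adaptive parameter update lend themselves to a short sketch. The moving-average update below is an assumption standing in for the paper's actual rule, and all numbers are hypothetical:

    import numpy as np

    def best_pulse_peak(ptt_candidates, rr_interval, mu, sigma):
        # Score each candidate by the Gaussian likelihood of its RR-normalized
        # pulse transit time; the normalizing constant does not affect argmax.
        z = (np.asarray(ptt_candidates) / rr_interval - mu) / sigma
        return int(np.argmax(np.exp(-0.5 * z ** 2)))

    def update_gaussian(mu, sigma, new_norm_ptt, alpha=0.05):
        # Adapt the model to slow cardiac-cycle variation.
        mu_new = (1 - alpha) * mu + alpha * new_norm_ptt
        var_new = (1 - alpha) * sigma ** 2 + alpha * (new_norm_ptt - mu_new) ** 2
        return mu_new, np.sqrt(var_new)

    # Three peak candidates (in seconds) inside an 0.8-s RR interval:
    print(best_pulse_peak([0.18, 0.24, 0.31], 0.8, mu=0.30, sigma=0.04))  # -> 1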
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Diederich, Adele
2008-02-01
Recently, Diederich and Busemeyer (2006) evaluated three hypotheses formulated as particular versions of a sequential-sampling model to account for the effects of payoffs in a perceptual decision task with time constraints. The bound-change hypothesis states that payoffs affect the distance of the starting position of the decision process to each decision bound. The drift-rate-change hypothesis states that payoffs affect the drift rate of the decision process. The two-stage-processing hypothesis assumes two processes, one for processing payoffs and another for processing stimulus information, and that on a given trial, attention switches from one process to the other. The latter hypothesis gave the best account of their data. The present study investigated two questions: (1) Does the experimental setting influence decisions, and consequently affect the fits of the hypotheses? A task was conducted in two experimental settings--either the time limit or the payoff matrix was held constant within a given block of trials, using three different payoff matrices and four different time limits--in order to answer this question. (2) Could it be that participants neglect payoffs on some trials and stimulus information on others? To investigate this idea, a further hypothesis was considered, the mixture-of-processes hypothesis. Like the two-stage-processing hypothesis, it postulates two processes, one for payoffs and another for stimulus information. However, it differs from the previous hypothesis in assuming that on a given trial exactly one of the processes operates, never both. The present design had no effect on choice probability but may have affected choice response times (RTs). Overall, the two-stage-processing hypothesis gave the best account, with respect both to choice probabilities and to observed mean RTs and mean RT patterns within a choice pair.
Test of the Brink-Axel Hypothesis for the Pygmy Dipole Resonance
NASA Astrophysics Data System (ADS)
Martin, D.; von Neumann-Cosel, P.; Tamii, A.; Aoi, N.; Bassauer, S.; Bertulani, C. A.; Carter, J.; Donaldson, L.; Fujita, H.; Fujita, Y.; Hashimoto, T.; Hatanaka, K.; Ito, T.; Krugmann, A.; Liu, B.; Maeda, Y.; Miki, K.; Neveling, R.; Pietralla, N.; Poltoratska, I.; Ponomarev, V. Yu.; Richter, A.; Shima, T.; Yamamoto, T.; Zweidinger, M.
2017-11-01
The gamma strength function and level density of 1- states in 96Mo have been extracted from a high-resolution study of the (p⃗,p⃗') reaction at 295 MeV and extreme forward angles. By comparison with compound nucleus γ decay experiments, this allows a test of the generalized Brink-Axel hypothesis in the energy region of the pygmy dipole resonance. The Brink-Axel hypothesis is commonly assumed in astrophysical reaction network calculations and states that the gamma strength function in nuclei is independent of the structure of the initial and final state. The present results validate the Brink-Axel hypothesis for 96Mo and provide independent confirmation of the methods used to separate gamma strength function and level density in γ decay experiments.
Chalfoun, A.D.; Martin, T.E.
2009-01-01
1. Predation is an important and ubiquitous selective force that can shape habitat preferences of prey species, but tests of alternative mechanistic hypotheses of habitat influences on predation risk are lacking. 2. We studied predation risk at nest sites of a passerine bird and tested two hypotheses based on theories of predator foraging behaviour. The total-foliage hypothesis predicts that predation will decline in areas of greater overall vegetation density by impeding cues for detection by predators. The potential-prey-site hypothesis predicts that predation decreases where predators must search more unoccupied potential nest sites. 3. Both observational data and results from a habitat manipulation provided clear support for the potential-prey-site hypothesis and rejection of the total-foliage hypothesis. Birds chose nest patches containing both greater total foliage and potential nest site density (which were correlated in their abundance) than at random sites, yet only potential nest site density significantly influenced nest predation risk. 4. Our results therefore provided a clear and rare example of adaptive nest site selection that would have been missed had structural complexity or total vegetation density been considered alone. 5. Our results also demonstrated that interactions between predator foraging success and habitat structure can be more complex than simple impedance or occlusion by vegetation. © 2008 British Ecological Society.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many thin, independent slices, each with constant density and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
Jiang, Zhang; Chen, Wei
2017-11-03
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many thin, independent slices, each with constant density and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
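As a rough illustration of the idea, the sketch below builds an asymmetric two-interface density profile from skew-normal CDFs and slices it into thin constant-density layers. It is a generic reconstruction, not the paper's parameterization: the layer densities, interface positions, widths, and skew parameters are all invented for the example.

```python
import numpy as np
from scipy.stats import skewnorm

def interface(z, center, width, skew):
    """Smoothed 0-to-1 step built from a skew-normal CDF; `skew` controls how
    far the density penetrates into the adjacent layer."""
    return skewnorm.cdf(z, a=skew, loc=center, scale=width)

# Substrate (density 2.1) / film (1.3) / vacuum, with asymmetric interfaces.
z = np.linspace(-20.0, 120.0, 1401)               # depth grid, arbitrary units
rho = (2.1 * (1 - interface(z, 0.0, 3.0, 4.0))
       + 1.3 * (interface(z, 0.0, 3.0, 4.0) - interface(z, 50.0, 5.0, -3.0)))

# Discretize into thin slices of constant density with sharp boundaries;
# such slices are what Parratt's recursion (or an onion-shell small-angle
# scattering model) consumes.
dz = z[1] - z[0]
slice_rho = 0.5 * (rho[:-1] + rho[1:])
```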
Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.
Pandolfi, Maurizio; Carreras, Giulia
2018-06-07
It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis tested is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential statistics that consider the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.
A case cluster of variant Creutzfeldt-Jakob disease linked to the Kingdom of Saudi Arabia.
Coulthart, Michael B; Geschwind, Michael D; Qureshi, Shireen; Phielipp, Nicolas; Demarsh, Alex; Abrams, Joseph Y; Belay, Ermias; Gambetti, Pierluigi; Jansen, Gerard H; Lang, Anthony E; Schonberger, Lawrence B
2016-10-01
As of mid-2016, 231 cases of variant Creutzfeldt-Jakob disease-the human form of a prion disease of cattle, bovine spongiform encephalopathy-have been reported from 12 countries. With few exceptions, the affected individuals had histories of extended residence in the UK or other Western European countries during the period (1980-96) of maximum global risk for human exposure to bovine spongiform encephalopathy. However, the possibility remains that other geographic foci of human infection exist, identification of which may help to foreshadow the future of the epidemic. We report results of a quantitative analysis of country-specific relative risks of infection for three individuals diagnosed with variant Creutzfeldt-Jakob disease in the USA and Canada. All were born and raised in Saudi Arabia, but had histories of residence and travel in other countries. To calculate country-specific relative probabilities of infection, we aligned each patient's life history with published estimates of probability distributions of incubation period and age at infection parameters from a UK cohort of 171 variant Creutzfeldt-Jakob disease cases. The distributions were then partitioned into probability density fractions according to time intervals of the patient's residence and travel history, and the density fractions were combined by country. This calculation was performed for incubation period alone, age at infection alone, and jointly for incubation and age at infection. Country-specific fractions were normalized either to the total density between the individual's dates of birth and symptom onset ('lifetime'), or to that between 1980 and 1996, for a total of six combinations of parameter and interval. The country-specific relative probability of infection for Saudi Arabia clearly ranked highest under each of the six combinations of parameter × interval for Patients 1 and 2, with values ranging from 0.572 to 0.998, respectively, for Patient 2 (age at infection × lifetime) and Patient 1 (joint incubation and age at infection × 1980-96). For Patient 3, relative probabilities for Saudi Arabia were not as distinct from those for other countries using the lifetime interval: 0.394, 0.360 and 0.378, respectively, for incubation period, age at infection and jointly for incubation and age at infection. However, for this patient Saudi Arabia clearly ranked highest within the 1980-96 period: 0.859, 0.871 and 0.865, respectively, for incubation period, age at infection and jointly for incubation and age at infection. These findings support the hypothesis that human infection with bovine spongiform encephalopathy occurred in Saudi Arabia. © Her Majesty the Queen in Right of Canada 2016. Reproduced with the permission of the Minister of Public Health.
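The interval-partitioning calculation at the core of this analysis can be sketched in a few lines. The sketch below is illustrative only: the lognormal incubation-period distribution is a placeholder, not the published UK-cohort estimate, the residence history is invented, and the fractions are normalized over whichever intervals are supplied.

```python
import numpy as np
from scipy.stats import lognorm

# Placeholder incubation-period distribution (years); the study used
# published UK-cohort estimates, which are not reproduced here.
incubation = lognorm(s=0.5, scale=12.0)

def country_fractions(onset_year, residence):
    """residence: list of (start_year, end_year, country) intervals.
    Returns normalized probability fractions of infection by country, using
    incubation time = onset_year - infection_year."""
    mass = {}
    for start, end, country in residence:
        lo, hi = onset_year - end, onset_year - start  # incubation range implied by the interval
        p = incubation.cdf(hi) - incubation.cdf(lo)
        mass[country] = mass.get(country, 0.0) + p
    total = sum(mass.values())
    return {c: p / total for c, p in mass.items()}

print(country_fractions(2010, [(1975, 1995, "Saudi Arabia"),
                               (1995, 2000, "UK"),
                               (2000, 2010, "USA")]))
```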
Probability and Quantum Paradigms: the Interplay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kracklauer, A. F.
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics is proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics is proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
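The essence of the technique, projecting multivariate normal classes onto a single direction and minimizing the resulting one-dimensional Bayes error, can be reproduced numerically. The sketch below is a generic reconstruction, not the LFSPMC program: the two-class parameters are invented, the error is restricted to the two-class case, and an off-the-shelf optimizer stands in for the program's own search procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from scipy.integrate import quad

def misclassification(b, means, covs, priors):
    """One-dimensional Bayes error (two classes) after projecting each class
    N(mu_i, Sigma_i) onto direction b: integral of min_i prior_i * f_i(x)."""
    b = b / np.linalg.norm(b)
    mus = [b @ m for m in means]
    sds = [np.sqrt(b @ S @ b) for S in covs]
    def min_density(x):
        return min(p * norm.pdf(x, m, s) for p, m, s in zip(priors, mus, sds))
    lo = min(mus) - 8 * max(sds)
    hi = max(mus) + 8 * max(sds)
    return quad(min_density, lo, hi, limit=200)[0]

# Two illustrative classes in 3-D with equal priors.
means = [np.zeros(3), np.array([1.0, 0.5, -0.5])]
covs = [np.eye(3), np.diag([1.0, 2.0, 0.5])]
res = minimize(misclassification, x0=np.ones(3), args=(means, covs, [0.5, 0.5]))
print("best direction:", res.x / np.linalg.norm(res.x), "error:", res.fun)
```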
Modelling the effect of autotoxicity on density-dependent phytotoxicity.
Sinkkonen, A
2007-01-21
An established method to separate resource competition from chemical interference is cultivation of monospecific, even-aged stands. The stands grow at several densities and they are exposed to homogenously spread toxins. Hence, the dose received by individual plants is inversely related to stand density. This results in distinguishable alterations in dose-response slopes. The method is often recommended in ecological studies of allelopathy. However, many plant species are known to release autotoxic compounds. Often, the probability of autotoxicity increases as sowing density increases. Despite this, the possibility of autotoxicity is ignored when experiments including monospecific stands are designed and when their results are evaluated. In this paper, I model mathematically how autotoxicity changes the outcome of dose-response slopes as different densities of monospecific stands are grown on homogenously phytotoxic substrata. Several ecologically reasonable relations between plant density and autotoxin exposure are considered over a range of parameter values, and similarities between different relations are searched for. The models indicate that autotoxicity affects the outcome of density-dependent dose-response experiments. Autotoxicity seems to abolish the effects of other phytochemicals in certain cases, while it may augment them in other cases. Autotoxicity may alter the outcome of tests using the method of monospecific stands even if the dose of autotoxic compounds per plant is a fraction of the dose of non-autotoxic phytochemicals with similar allelopathic potential. Data from the literature support these conclusions. A faulty null hypothesis may be accepted if the autotoxic potential of a test species is overlooked in density-response experiments. On the contrary, if test species are known to be non-autotoxic, the method of monospecific stands does not need fine-tuning. The results also suggest that the possibility of autotoxicity should be investigated in many density-response bioassays that are made with even-aged plants, and that measure plant growth or germination.
Précis of statistical significance: rationale, validity, and utility.
Chow, S L
1998-04-01
The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.
ERIC Educational Resources Information Center
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
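A toy version of this generalization is easy to write down for a die-like discrete variable: treat the observed mean as Gaussian-uncertain, solve the classic MaxEnt problem at each sampled constraint value, and inspect the induced spread of the MaxEnt probabilities. The numbers (mean 4.5, standard error 0.1) and the Monte Carlo propagation below are illustrative stand-ins for the paper's analytic treatment.

```python
import numpy as np
from scipy.optimize import brentq

vals = np.arange(1, 7)

def maxent_probs(mean_constraint):
    """Classic MaxEnt on a die: p_i proportional to exp(lam * i), with the
    Lagrange multiplier lam chosen so the mean matches the constraint."""
    def mean_gap(lam):
        w = np.exp(lam * vals)
        return (vals * w).sum() / w.sum() - mean_constraint
    lam = brentq(mean_gap, -10.0, 10.0)
    w = np.exp(lam * vals)
    return w / w.sum()

# Gaussian uncertainty on the empirically observed mean, propagated through
# the MaxEnt map to give a distribution over MaxEnt probabilities.
rng = np.random.default_rng(0)
samples = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.1, 2000)])
print("posterior mean of probs:", samples.mean(axis=0))
print("posterior std  of probs:", samples.std(axis=0))
```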
Shi, Haolun; Yin, Guosheng
2018-02-21
Simon's two-stage design is one of the most commonly used methods in phase II clinical trials with binary endpoints. The design tests the null hypothesis that the response rate is less than an uninteresting level, versus the alternative hypothesis that the response rate is greater than a desirable target level. From a Bayesian perspective, we compute the posterior probabilities of the null and alternative hypotheses given that a promising result is declared in Simon's design. Our study reveals that because the frequentist hypothesis testing framework places its focus on the null hypothesis, a potentially efficacious treatment identified by rejecting the null under Simon's design could have only less than 10% posterior probability of attaining the desirable target level. Due to the indifference region between the null and alternative, rejecting the null does not necessarily mean that the drug achieves the desirable response level. To clarify such ambiguity, we propose a Bayesian enhancement two-stage (BET) design, which guarantees a high posterior probability of the response rate reaching the target level, while allowing for early termination and sample size saving in case that the drug's response rate is smaller than the clinically uninteresting level. Moreover, the BET design can be naturally adapted to accommodate survival endpoints. We conduct extensive simulation studies to examine the empirical performance of our design and present two trial examples as applications. © 2018, The International Biometric Society.
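The posterior calculation at the heart of this argument can be reproduced in a few lines. The sketch below uses illustrative parameters in the style of Simon's optimal design for p0 = 0.2 versus p1 = 0.4 and a uniform Beta(1, 1) prior; the paper's exact designs and priors may differ.

```python
import numpy as np
from scipy.stats import binom, beta
from scipy.integrate import trapezoid

# Illustrative design: stage 1 enrolls n1=13, continue if more than r1=3
# responses; total n=43, declare promising if more than r=12 responses.
p0, p1, n1, r1, n, r = 0.2, 0.4, 13, 3, 43, 12

def prob_promising(p):
    """P(declare promising) = P(X1 > r1 and X1 + X2 > r) at response rate p."""
    x1 = np.arange(r1 + 1, n1 + 1)
    return np.sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p))

# Posterior P(p >= p1 | promising) by numerical integration over p.
grid = np.linspace(0.0, 1.0, 2001)
prior = beta.pdf(grid, 1, 1)
like = np.array([prob_promising(p) for p in grid])
post = prior * like
post /= trapezoid(post, grid)
print("P(p >= p1 | promising) =", trapezoid(post[grid >= p1], grid[grid >= p1]))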
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
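For reference, the decision rule underlying any sequential probability ratio test is Wald's: accumulate the log-likelihood ratio and compare it against thresholds fixed by the false-alarm and missed-detection risks. The sketch below is the generic scalar test, not the constrained-Kalman-filter bank described above; the Gaussian hypotheses and risk levels are illustrative.

```python
import numpy as np
from scipy.stats import norm

def sprt(samples, pdf0, pdf1, alpha=0.01, beta=0.05):
    """Wald SPRT: returns ('H0'|'H1'|'continue', samples used). alpha is the
    false-alarm risk, beta the missed-detection risk; the thresholds are
    Wald's standard approximations."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for i, x in enumerate(samples, 1):
        llr += np.log(pdf1(x)) - np.log(pdf0(x))
        if llr >= upper:
            return "H1", i
        if llr <= lower:
            return "H0", i
    return "continue", len(samples)

# Toy example: unit-variance Gaussian residuals under H0 (mean 0) vs H1 (mean 1).
rng = np.random.default_rng(1)
data = rng.normal(1.0, 1.0, 200)   # truth follows H1 here
print(sprt(data, pdf0=lambda x: norm.pdf(x, 0, 1), pdf1=lambda x: norm.pdf(x, 1, 1)))
```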
Assessing flight safety differences between the United States regional and major airlines
NASA Astrophysics Data System (ADS)
Sharp, Broderick H.
During 2008, U.S. domestic airline departures exceeded 28,000 flights per day. Thirty-nine of these flights, less than 0.2 of 1%, resulted in operational incidents or accidents. However, even a low percentage of airline accidents and incidents continues to cause human suffering and property loss. The charge of this study was the comparison of U.S. major and regional airline safety histories, spanning safety events from January 1982 through December 2008. In this quantitative analysis, domestic major and regional airlines were statistically tested for their flight safety differences. Four major airlines and thirty-seven regional airlines qualified for the study, which compared the airline groups' fatal accidents, incidents, non-fatal accidents, pilot errors, and the remaining six probable cause types: mechanical failure, weather, air traffic control, maintenance, other, and unknown causes. The National Transportation Safety Board investigated each airline safety event and assigned a probable cause to it. A sample of 500 events was randomly selected from the population of 1,391 airline accidents and incidents. The airline groups' safety event probabilities were estimated using least squares linear regression, and a 5% significance level was chosen for deciding each research question. The significance levels for airline fatal accidents and incidents were 1.2% and 0.05% respectively; because these two research questions fell below the 5% threshold, the fatal accident and non-destructive incident results favored the hypothesis that the airline groups' safety differs. The linear regression estimates for the remaining three research questions were 71.5% for non-fatal accidents, 21.8% for pilot errors, and 7.4% for the six probable causes; because these exceed the 5% level, the three research questions favored the hypothesis that the airline groups' safety is similar. The study indicates that the U.S. domestic major airlines were safer than the regional airlines. Future work on airline safety could examine pilot fatigue, the airline groups' hiring policies, the government's airline oversight personnel, or comparisons of individual airlines' operational policies.
Test of association: which one is the most appropriate for my study?
Gonzalez-Chica, David Alejandro; Bastos, João Luiz; Duquia, Rodrigo Pereira; Bonamigo, Renan Rangel; Martínez-Mesa, Jeovany
2015-01-01
Hypothesis tests are statistical tools widely used for assessing whether or not there is an association between two or more variables. These tests provide a p-value, the probability of obtaining the observed result, or a more extreme one, under the null hypothesis, which is used to reject or retain the null study hypothesis. Our aim is to provide a practical guide to help researchers carefully select the most appropriate procedure to answer their research question. We discuss the logic of hypothesis testing and present the prerequisites of each procedure based on practical examples.
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society
Tier-Adjacency Is Not a Necessary Condition for Learning Phonotactic Dependencies
ERIC Educational Resources Information Center
Koo, Hahn; Callahan, Lydia
2012-01-01
One hypothesis raised by Newport and Aslin to explain how speakers learn dependencies between nonadjacent phonemes is that speakers track bigram probabilities between two segments that are adjacent to each other within a tier of their own. The hypothesis predicts that a dependency between segments separated from each other at the tier level cannot…
Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment
ERIC Educational Resources Information Center
Frane, Andrew V.
2015-01-01
Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
ERIC Educational Resources Information Center
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
Effects of multiple predator species on green treefrog (Hyla cinerea) tadpoles
Gunzburger, M.S.; Travis, J.
2005-01-01
Prey species that occur across a range of habitats may be exposed to variable communities of multiple predator species across habitats. Predicting the combined effects of multiple predators can be complex. Many experiments evaluating the effects of multiple predators on prey confound either variation in predator density with predator identity or variation in relative predator frequency with overall predation rates. We develop a new experimental design of factorial predator combinations that maintains a constant expected predation rate, under the null hypothesis of additive predator effects. We implement this design to evaluate the combined effects of three predator species (bass, aeshnid and libellulid odonate naiads) on mortality rate of a prey species, Hyla cinerea (Schneider, 1799) tadpoles, that occurs across a range of aquatic habitats. Two predator treatments (libellulid and aeshnid + libellulid) resulted in lower tadpole mortality than any of the other predator treatments. Variation in tadpole mortality across treatments was not related to coarse variation in microhabitat use, but was likely due to intraguild predation, which occurred in all predator treatments. Hyla cinerea tadpoles have constant, low survival values when exposed to many different combinations of predator species, and predation rate probably increases linearly with predator density.
Attributes of seasonal home range influence choice of migratory strategy in white-tailed deer
Henderson, Charles R.; Mitchell, Michael S.; Myers, Woodrow L.; Lukacs, Paul M.; Nelson, Gerald P.
2018-01-01
Partial migration is a common life-history strategy among ungulates living in seasonal environments. The decision to migrate or remain on a seasonal range may be influenced strongly by access to high-quality habitat. We evaluated the influence of access to winter habitat of high quality on the probability of a female white-tailed deer (Odocoileus virginianus) migrating to a separate summer range and the effects of this decision on survival. We hypothesized that deer with home ranges of low quality in winter would have a high probability of migrating, and that survival of an individual in winter would be influenced by the quality of their home range in winter. We radiocollared 67 female white-tailed deer in 2012 and 2013 in eastern Washington, United States. We estimated home range size in winter using a kernel density estimator; we assumed the size of the home range was inversely proportional to its quality and the proportion of crop land within the home range was proportional to its quality. Odds of migrating from winter ranges increased by 3.1 per unit increase in home range size and decreased by 0.29 per unit increase in the proportion of crop land within a home range. Annual survival rate for migrants was 0.85 (SD = 0.05) and 0.84 (SD = 0.09) for residents. Our finding that an individual with a low-quality home range in winter is likely to migrate to a separate summer range accords with the hypothesis that competition for a limited amount of home ranges of high quality should result in residents having home ranges of higher quality than migrants in populations experiencing density dependence. We hypothesize that density-dependent competition for high-quality home ranges in winter may play a leading role in the selection of migration strategy by female white-tailed deer.
Stochastic model of financial markets reproducing scaling and memory in volatility return intervals
NASA Astrophysics Data System (ADS)
Gontis, V.; Havlin, S.; Kononovicius, A.; Podobnik, B.; Stanley, H. E.
2016-11-01
We investigate the volatility return intervals in the NYSE and FOREX markets. We explain previous empirical findings using a model based on the interacting agent hypothesis instead of the widely-used efficient market hypothesis. We derive macroscopic equations based on the microscopic herding interactions of agents and find that they are able to reproduce various stylized facts of different markets and different assets with the same set of model parameters. We show that the power-law properties and the scaling of return intervals and other financial variables have a similar origin and could be a result of a general class of non-linear stochastic differential equations derived from a master equation of an agent system that is coupled by herding interactions. Specifically, we find that this approach enables us to recover the volatility return interval statistics as well as volatility probability and spectral densities for the NYSE and FOREX markets, for different assets, and for different time-scales. We also find that the historical S&P500 monthly series exhibits the same volatility return interval properties recovered by our proposed model. Our statistical results suggest that human herding is so strong that it persists even when other evolving fluctuations perturb the financial system.
ON HIGHLY CLUMPED MAGNETIC WIND MODELS FOR COOL EVOLVED STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, G. M.
2010-09-10
Recently, it has been proposed that the winds of non-pulsating and non-dusty K and M giants and supergiants may be driven by some form of magnetic pressure acting on highly clumped wind material. While many researchers believe that magnetic processes are responsible for cool evolved stellar winds, existing MHD and Alfven wave-driven wind models have magnetic fields that are essentially radial and tied to the photosphere. The clumped magnetic wind scenario is quite different in that the magnetic flux is also being carried away from the star with the wind. We test this clumped wind hypothesis by computing continuum radio fluxes from the ζ Aur semiempirical model of Baade et al., which is based on wind-scattered line profiles. The radio continuum opacity is proportional to the electron density squared, while the line scattering opacity is proportional to the gas density. This difference in proportionality provides a test for the presence of large clumping factors. We derive the radial distribution of clump factors (CFs) for ζ Aur by comparing the nonthermal pressures required to produce the semiempirical velocity distribution with the expected thermal pressures. The CFs are ≈5 throughout the sub-sonic inner wind region and then decline outward. These implied clumping factors lead to excess radio emission at 2.0 cm, while at 6.2 cm it improves agreement with the smooth unclumped model. Smaller clumping factors of ≈2 lead to better overall agreement but also increase the discrepancy at 2 cm. These results do not support the magnetic clumped wind hypothesis and instead suggest that inherent uncertainties in the underlying semiempirical model probably dominate uncertainties in predicted radio fluxes. However, new ultraviolet line and radio continuum observations are needed to test the new generations of inhomogeneous magnetohydrodynamic wind models.
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models are developed for the patterns of the Deep Space Network antennas, along with their gain probability density and cumulative gain probability functions. These are needed for studying and evaluating interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone station of the Deep Space Network.
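A crude way to see how such gain probability functions arise: posit a simple pattern model and estimate the probability that a randomly oriented off-axis direction sees more than a given gain. The parabolic-mainlobe pattern, the generic 29 - 25 log10(theta) sidelobe envelope, the numeric values, and the Monte Carlo over solid angle below are all invented for illustration and are not the JPL/NASA models.

```python
import numpy as np

def gain_dbi(theta_deg, g_max=74.0, theta_3=0.016):
    """Toy pattern: parabolic mainlobe, a generic sidelobe envelope applied
    beyond 1 degree, and a -10 dBi floor (all numbers illustrative)."""
    main = g_max - 12.0 * (theta_deg / theta_3) ** 2
    side = np.where(theta_deg >= 1.0,
                    29.0 - 25.0 * np.log10(np.maximum(theta_deg, 1.0)),
                    -np.inf)
    return np.maximum(np.maximum(main, side), -10.0)

# Off-axis angle drawn uniformly over solid angle out to 90 degrees; the
# empirical exceedance frequency approximates the cumulative gain probability.
rng = np.random.default_rng(3)
theta = np.degrees(np.arccos(rng.uniform(0.0, 1.0, 1_000_000)))
g = gain_dbi(theta)
for g0 in (-5.0, 0.0, 10.0):
    print(f"P(G > {g0:+.0f} dBi) = {np.mean(g > g0):.3e}")
```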
How Often Is p_rep Close to the True Replication Probability?
ERIC Educational Resources Information Center
Trafimow, David; MacDonald, Justin A.; Rice, Stephen; Clason, Dennis L.
2010-01-01
Largely due to dissatisfaction with the standard null hypothesis significance testing procedure, researchers have begun to consider alternatives. For example, Killeen (2005a) has argued that researchers should calculate p_rep that is purported to indicate the probability that, if the experiment in question were replicated, the obtained…
Toward "Constructing" the Concept of Statistical Power: An Optical Analogy.
ERIC Educational Resources Information Center
Rogers, Bruce G.
This paper presents a visual analogy that may be used by instructors to teach the concept of statistical power in statistical courses. Statistical power is mathematically defined as the probability of rejecting a null hypothesis when that null is false, or, equivalently, the probability of detecting a relationship when it exists. The analogy…
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
Power-law distributions play an increasingly important role in the study of complex systems. Starting from the practical impossibility of fully specifying a complex system, the idea of incomplete statistics is adopted and extended: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy. Probability distribution functions of exponential form, of power-law form, and of a product form combining a power function with an exponential function are then derived from the Shannon entropy and the maximum entropy principle. This shows that the maximum entropy principle can completely replace the equal-probability hypothesis. Because the power-law distribution and the product-form distribution cannot be derived from the equal-probability hypothesis but can be derived with the aid of the maximum entropy principle, we conclude that the maximum entropy principle is the more basic principle: it embodies concepts more extensively, reveals the laws governing the motion of objects more fundamentally, and at the same time exposes the intrinsic links between Nature and the different objects of human society and the principles they all obey.
Do men believe that physically attractive women are more healthy and capable of having children?
Mathes, Eugene W; Arms, Clarissa; Bryant, Alicia; Fields, Jeni; Witowski, Aggie
2005-06-01
The purpose of this research was to test the hypothesis that men view physical attractiveness as an index of a woman's health and her capacity to have children. 21 men and 26 women from an introductory psychology course were shown photographs from 1972 of men and women college students, judged in 2002 to be attractive or unattractive. Subjects were asked to rate the photographed individuals' current health, the probability that they were married, the probability that they had children, and whether they had reproductive problems. The hypothesis was generally supported; the men rated the photographs of attractive women as healthier, more likely to be married, and more likely to have children.
Assessing the performance of a covert automatic target recognition algorithm
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2005-05-01
Passive radar systems exploit illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. Doing so allows them to operate covertly and inexpensively. Our research seeks to enhance passive radar systems by adding automatic target recognition (ATR) capabilities. In previous papers we proposed conducting ATR by comparing the radar cross section (RCS) of aircraft detected by a passive radar system to the precomputed RCS of aircraft in the target class. To effectively model the low-frequency setting, the comparison is made via a Rician likelihood model. Monte Carlo simulations indicate that the approach is viable. This paper builds on that work by developing a method for quickly assessing the potential performance of the ATR algorithm without using exhaustive Monte Carlo trials. This method exploits the relation between the probability of error in a binary hypothesis test under the Bayesian framework to the Chernoff information. Since the data are well-modeled as Rician, we begin by deriving a closed-form approximation for the Chernoff information between two Rician densities. This leads to an approximation for the probability of error in the classification algorithm that is a function of the number of available measurements. We conclude with an application that would be particularly cumbersome to accomplish via Monte Carlo trials, but that can be quickly addressed using the Chernoff information approach. This application evaluates the length of time that an aircraft must be tracked before the probability of error in the ATR algorithm drops below a desired threshold.
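The closed-form approximation from the paper is not reproduced here, but the underlying quantity is easy to evaluate numerically. The sketch below computes the Chernoff information between two Rician densities by direct integration and converts it into a rough measurement budget via the asymptotic error decay P_err ~ exp(-n C); the Rician parameters are invented.

```python
import numpy as np
from scipy.stats import rice
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def chernoff_information(p0, p1):
    """C = -min over s in (0,1) of log integral p0(x)^(1-s) * p1(x)^s dx."""
    def log_integral(s):
        f = lambda x: p0(x) ** (1 - s) * p1(x) ** s
        return np.log(quad(f, 0.0, np.inf, limit=200)[0])
    res = minimize_scalar(log_integral, bounds=(1e-3, 1 - 1e-3), method="bounded")
    return -res.fun

# Two illustrative Rician amplitude densities standing in for target RCS models.
p0 = lambda x: rice.pdf(x, b=1.0, scale=1.0)
p1 = lambda x: rice.pdf(x, b=2.0, scale=1.0)
C = chernoff_information(p0, p1)
# Bayes error after n i.i.d. measurements decays roughly like exp(-n*C):
for eps in (1e-2, 1e-4):
    print(f"measurements for P_err ~ {eps:g}: n is about {np.ceil(-np.log(eps) / C):.0f}")
```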
Faith Inman-Narahari; Rebecca Ostertag; Stephen P. Hubbell; Christian P. Giardina; Susan Cordell; Lawren Sack; Andrew MacDougall
2016-01-01
Conspecific density may contribute to patterns of species assembly through negative density dependence (NDD) as predicted by the Janzen-Connell hypothesis, or through facilitation (positive density dependence; PDD). Conspecific density effects are expected to be more negative in darker and wetter environments due to higher pathogen abundance and...
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Evaluation of the Dopamine Hypothesis of ADHD with PET Brain Imaging
Swanson, James
2018-01-24
The Dopamine (DA) Hypothesis of ADHD (Wender, 1971; Levy, 1990) suggests that abnormalities in the synaptic mechanisms of DA transmission may be disrupted, and specific abnormalities in DA receptors and DA transporters (DAT) have been proposed (see Swanson et al, 1998). Early studies with small samples (e.g., n = 6, Dougherty et al, 1999) used single photon emission tomography (SPECT) and the radioligand (123I Altropane) to test a theory that ADHD may be caused by an over expression of DAT and reported 'a 70% increase in age-corrected dopamine transporter density in patients with attention deficit hyperactivity disorder compared with healthy controls' and suggested that treatment with stimulant medication decreased DAT density in ADHD patients and corrected an underlying abnormality (Krause et al, 2000). The potential importance of these findings was noted by Swanson (1999): 'If true, this is a major finding and points the way for new investigations of the primary pharmacological treatment for ADHD (with the stimulant drugs - e.g., methylphenidate), for which the dopamine transporter is the primary site of action. The potential importance of this finding demands special scrutiny'. This has been provided over the past decade using Positron Emission Tomography (PET). Brain imaging studies were conducted at Brookhaven National Laboratory (BNL) in a relatively large sample of stimulant-naive adults assessed for DAT (11C cocaine) density and DA receptors (11C raclopride) availability. These studies (Volkow et al, 2007; Volkow et al, 2009) do not confirm the hypothesis of increased DAT density and suggest the opposite (i.e., decreased rather than increased DAT density), and follow-up after treatment (Wang et al, 2010) does not confirm the hypothesis that therapeutic doses of methylphenidate decrease DAT density and suggests the opposite (i.e., increased rather than decreased DAT density). The brain regions implicated by these PET imaging studies also suggest that a motivation deficit may contribute as much as an attention deficit to the manifestation of behaviors that underlie the symptoms of ADHD.
Gaussian Hypothesis Testing and Quantum Illumination.
Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario
2017-09-22
Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for the speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)
More attention when speaking: does it help or does it hurt?
Nozari, Nazbanou; Thompson-Schill, Sharon L.
2013-01-01
Paying selective attention to a word in a multi-word utterance results in a decreased probability of error on that word (benefit), but an increased probability of error on the other words (cost). We ask whether excitation of the prefrontal cortex helps or hurts this cost. One hypothesis (the resource hypothesis) predicts a decrease in the cost due to the deployment of more attentional resources, while another (the focus hypothesis) predicts even greater costs due to further fine-tuning of selective attention. Our results are more consistent with the focus hypothesis: prefrontal stimulation caused a reliable increase in the benefit and a marginal increase in the cost of selective attention. To ensure that the effects are due to changes to the prefrontal cortex, we provide two checks: We show that the pattern of results is quite different if, instead, the primary motor cortex is stimulated. We also show that the stimulation-related benefits in the verbal task correlate with the stimulation-related benefits in an N-back task, which is known to tap into a prefrontal function. Our results shed light on how selective attention affects language production, and more generally, on how selective attention affects production of a sequence over time. PMID:24012690
The Global Phylogeography of Lyssaviruses - Challenging the 'Out of Africa' Hypothesis
Fooks, Anthony R.; Marston, Denise A.; Garcia-R, Juan C.
2016-01-01
Rabies virus kills tens of thousands of people globally each year, especially in resource-limited countries. Yet, there are genetically- and antigenically-related lyssaviruses, all capable of causing the disease rabies, circulating globally among bats without causing conspicuous disease outbreaks. The species richness and greater genetic diversity of African lyssaviruses, along with the lack of antibody cross-reactivity among them, has led to the hypothesis that Africa is the origin of lyssaviruses. This hypothesis was tested using a probabilistic phylogeographical approach. The nucleoprotein gene sequences from 153 representatives of 16 lyssavirus species, collected between 1956 and 2015, were used to develop a phylogenetic tree which incorporated relevant geographic and temporal data relating to the viruses. In addition, complete genome sequences from all 16 (putative) species were analysed. The most probable ancestral distribution for the internal nodes was inferred using three different approaches and was confirmed by analysis of complete genomes. These results support a Palearctic origin for lyssaviruses (posterior probability = 0.85), challenging the ‘out of Africa’ hypothesis, and suggest three independent transmission events to the Afrotropical region, representing the three phylogroups that form the three major lyssavirus clades. PMID:28036390
By-product mutualism and the ambiguous effects of harsher environments - A game-theoretic model.
De Jaegher, Kris; Hoyer, Britta
2016-03-21
We construct two-player two-strategy game-theoretic models of by-product mutualism, where our focus lies on the way in which the probability of cooperation among players is affected by the degree of adversity facing the players. In our first model, cooperation consists of the production of a public good, and adversity is linked to the degree of complementarity of the players' efforts in producing the public good. In our second model, cooperation consists of the defense of a public and/or a private good with by-product benefits, and adversity is measured by the number of random attacks (e.g., by a predator) facing the players. In both of these models, our analysis confirms the existence of the so-called boomerang effect, which states that in a harsh environment, the individual player has few incentives to unilaterally defect in a situation of joint cooperation. Focusing on such an effect in isolation leads to the "common-enemy" hypothesis that a larger degree of adversity increases the probability of cooperation. Yet, we also find that a sucker effect may simultaneously exist, which says that in a harsh environment, the individual player has few incentives to unilaterally cooperate in a situation of joint defection. Looked at in isolation, the sucker effect leads to the competing hypothesis that a larger degree of adversity decreases the probability of cooperation. Our analysis predicts circumstances in which the "common enemy" hypothesis prevails, and circumstances in which the competing hypothesis prevails. Copyright © 2016 Elsevier Ltd. All rights reserved.
The role of control groups in mutagenicity studies: matching biological and statistical relevance.
Hauschke, Dieter; Hothorn, Torsten; Schäfer, Juliane
2003-06-01
The statistical test of the conventional hypothesis of "no treatment effect" is commonly used in the evaluation of mutagenicity experiments. Failing to reject the hypothesis often leads to the conclusion in favour of safety. The major drawback of this indirect approach is that what is controlled by a prespecified level alpha is the probability of erroneously concluding hazard (producer risk). However, the primary concern of safety assessment is the control of the consumer risk, i.e. limiting the probability of erroneously concluding that a product is safe. In order to restrict this risk, safety has to be formulated as the alternative, and hazard, i.e. the opposite, has to be formulated as the hypothesis. The direct safety approach is examined for the case when the corresponding threshold value is expressed either as a fraction of the population mean for the negative control, or as a fraction of the difference between the positive and negative controls.
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Itô stochastic differential equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model with the spatial moments replacing the statistical moments can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time shows the superiority of the Beta model over both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
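Because the model is fully determined by the first two concentration moments, fitting it reduces to a method-of-moments match. The sketch below is a minimal illustration with invented moments; mean and var refer to the concentration normalized by a reference value so that it lies in [0, 1].

```python
import numpy as np
from scipy.stats import beta

def beta_from_moments(mean, var):
    """Match Beta(a, b) to the first two moments of the normalized
    concentration C/C0 in [0, 1]; requires var < mean * (1 - mean)."""
    k = mean * (1 - mean) / var - 1.0
    return beta(mean * k, (1 - mean) * k)

# Illustrative moments from a transport model (C normalized by source C0).
pdf = beta_from_moments(mean=0.2, var=0.01)
print("P(C/C0 > 0.5) =", pdf.sf(0.5))   # probability of exceeding a threshold
```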
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshift directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in the redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently from the type of source, e.g. galaxies, quasars or stars and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problems based on imaging data, such as estimating metallicity or star formation rate in galaxies.
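To make the scoring concrete: a mixture density network's output is just a set of (weight, mean, sigma) triples, and the CRPS can be evaluated from the mixture CDF by numerical integration. This sketch is generic, not the DCMDN code; the two-component PDF and the spectroscopic redshift are invented.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def mixture_cdf(x, weights, means, sigmas):
    """CDF of the 1-D Gaussian mixture that a mixture density network emits."""
    return sum(w * norm.cdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

def crps(weights, means, sigmas, observed):
    """CRPS = integral of (F(x) - 1[x >= observed])^2 dx, by quadrature."""
    lo = min(min(means) - 8 * max(sigmas), observed - 1.0)
    hi = max(max(means) + 8 * max(sigmas), observed + 1.0)
    x = np.linspace(lo, hi, 4001)
    F = mixture_cdf(x, weights, means, sigmas)
    H = (x >= observed).astype(float)
    return trapezoid((F - H) ** 2, x)

# Toy two-component photometric-redshift PDF scored against spec-z = 0.42.
print(crps([0.7, 0.3], [0.40, 0.55], [0.03, 0.08], observed=0.42))
```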
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
The frequentist implications of optional stopping on Bayesian hypothesis tests.
Sanborn, Adam N; Hills, Thomas T
2014-04-01
Null hypothesis significance testing (NHST) is the most commonly used statistical methodology in psychology. The probability of achieving a value as extreme or more extreme than the statistic obtained from the data is evaluated, and if it is low enough, the null hypothesis is rejected. However, because common experimental practice often clashes with the assumptions underlying NHST, these calculated probabilities are often incorrect. Most commonly, experimenters use tests that assume that sample sizes are fixed in advance of data collection but then use the data to determine when to stop; in the limit, experimenters can use data monitoring to guarantee that the null hypothesis will be rejected. Bayesian hypothesis testing (BHT) provides a solution to these ills because the stopping rule used is irrelevant to the calculation of a Bayes factor. In addition, there are strong mathematical guarantees on the frequentist properties of BHT that are comforting for researchers concerned that stopping rules could influence the Bayes factors produced. Here, we show that these guaranteed bounds have limited scope and often do not apply in psychological research. Specifically, we quantitatively demonstrate the impact of optional stopping on the resulting Bayes factors in two common situations: (1) when the truth is a combination of the hypotheses, such as in a heterogeneous population, and (2) when a hypothesis is composite-taking multiple parameter values-such as the alternative hypothesis in a t-test. We found that, for these situations, while the Bayesian interpretation remains correct regardless of the stopping rule used, the choice of stopping rule can, in some situations, greatly increase the chance of experimenters finding evidence in the direction they desire. We suggest ways to control these frequentist implications of stopping rules on BHT.
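The effect described can be reproduced in a toy setting. The sketch below simulates optional stopping under a true null for Gaussian data with known unit variance, comparing a point null H0: theta = 0 against H1: theta ~ N(0, 1), for which the Bayes factor has a closed form in the sufficient statistic. The stopping threshold (BF10 > 3) and maximum sample size are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from scipy import stats

def bf10(s, n, tau2=1.0):
    # Bayes factor for H1: theta ~ N(0, tau2) vs H0: theta = 0,
    # given S = sum of n observations x_i ~ N(theta, 1)
    return stats.norm.pdf(s, 0, np.sqrt(n + n**2 * tau2)) / \
           stats.norm.pdf(s, 0, np.sqrt(n))

rng = np.random.default_rng(0)
hits = 0
for _ in range(2000):                    # data generated under H0 (theta = 0)
    x = rng.normal(0.0, 1.0, 200)
    s, n = np.cumsum(x), np.arange(1, 201)
    if np.any(bf10(s, n) > 3.0):         # optional stopping: stop at first BF10 > 3
        hits += 1
print("P(claim evidence for H1 under H0):", hits / 2000)
```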
Klapman, M H; Sosa, V B; Yao, J F
2014-06-01
Port wine stains in the malar area of the face can develop thickening in early adult life. We began a study with the hypothesis that this thickening can be associated with elevation of low density lipoprotein. In a retrospective review, we divided 53 subjects with malar port wine stains into 4 groups: adults 25-39 years of age with thickening, that age group without thickening, adults 40+ years of age with thickening, and that age group without thickening. Low density lipoprotein levels in the subjects were compared to age- and sex-matched controls randomly selected from the general Dermatology clinic. The younger subjects with thickening demonstrated significantly higher low density lipoprotein levels than their controls (p = .0082), whereas those without thickening demonstrated significantly lower low density lipoprotein levels than their controls (p = .00058). The subjects without thickening also consisted mainly of women. In the older age groups, whether thickened or not, there was no significant difference in low density lipoprotein levels between subjects and controls. This led to a new hypothesis: that there is a factor in a subgroup of young adult women with malar port wine stains that suppresses thickening and delays the elevation of low density lipoprotein, and that this factor might be estrogen. The implications of this hypothesis are that it could define a marker for a subset of the population that might be protected from the diseases associated with early elevation of low density lipoprotein and provide a source of cutaneous tissue for studying the basic science of this protection (although limited by cosmetic considerations). Future laboratory research to test the new hypothesis might include testing blood of women with malar port wine stains, with or without thickening, for estrogen and other sex hormones. It might also include skin biopsies to study receptors for estrogen, other sex hormones, and angiogenic factors in malar port wine stains with or without thickening. Future clinical research might include a long-term prospective project to study the development of low density lipoprotein related diseases in women with malar port wine stains, with or without thickening, over years. Copyright © 2014 Elsevier Ltd. All rights reserved.
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we first propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN in the following respects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, and direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi) by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulty clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
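To make the central quantity concrete: the probability that the distance between two uncertain objects falls below a boundary value can be estimated by sampling, which is essentially the FDBSCAN-style computation that PDBSCAN replaces with a more accurate one. The sketch below assumes Gaussian positional uncertainty; it illustrates the quantity, not the paper's exact method.

```python
import numpy as np

def p_dist_leq(mu1, cov1, mu2, cov2, eps, n=20000, seed=0):
    # Monte Carlo estimate of P(||X1 - X2|| <= eps) for two
    # uncertain objects with Gaussian position uncertainty
    rng = np.random.default_rng(seed)
    x1 = rng.multivariate_normal(mu1, cov1, n)
    x2 = rng.multivariate_normal(mu2, cov2, n)
    return np.mean(np.linalg.norm(x1 - x2, axis=1) <= eps)

print(p_dist_leq([0, 0], np.eye(2) * 0.1, [1, 0], np.eye(2) * 0.1, eps=1.0))
```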
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subject to an additive stochastic perturbation, from sequences of probability density functions generated by the stochastic dynamical system and observed experimentally.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
Neutral aggregation in finite-length genotype space
NASA Astrophysics Data System (ADS)
Houchmandzadeh, Bahram
2017-01-01
The advent of modern genome sequencing techniques allows for a more stringent test of the neutrality hypothesis of Darwinian evolution, where all individuals have the same fitness. Using the individual-based model of Wright and Fisher, we compute the amplitude of neutral aggregation in the genome space, i.e., the probability of finding two individuals at genetic (Hamming) distance k as a function of the genome size L, population size N, and mutation probability per base ν. In well-mixed populations, we show that for Nν < 1/L, neutral aggregation is the dominant force and most individuals are found at short genetic distances from each other. For Nν > 1, on the contrary, individuals are randomly dispersed in genome space. The results are extended to a geographically dispersed population, where the controlling parameter is shown to be a combination of mutation and migration probability. The theory we develop can be used to test the neutrality hypothesis in various ecological and evolutionary systems.
NASA Astrophysics Data System (ADS)
Park, J.; Lim, Y. J.; Sung, J. H.; Kang, H. S.
2017-12-01
The widely used meteorological drought index, the Standardized Precipitation Index (SPI), basically assumes stationarity, but recent changes in the climate have led to a need to review this hypothesis. In this study, a new non-stationary SPI that considers not only the modified probability distribution parameter but also the return period under the non-stationary process is proposed. The results are evaluated for two severe drought cases during the last 10 years in South Korea. SPIs based on the non-stationary hypothesis indicated lower drought severity than the stationary SPI, even though these two past droughts were recognized as significantly severe. This may be because the variances of summer and autumn precipitation become larger over time, which can make the shape of the probability distribution function wider than before. This understanding implies that drought expression by a statistical index such as the SPI can be distorted by the stationarity assumption, and a cautious approach is needed when deciding drought levels in the context of climate change.
NASA Astrophysics Data System (ADS)
Park, Junehyeong; Sung, Jang Hyun; Lim, Yoon-Jin; Kang, Hyun-Suk
2018-05-01
The widely used meteorological drought index, the Standardized Precipitation Index (SPI), basically assumes stationarity, but recent changes in the climate have led to a need to review this hypothesis. In this study, a new non-stationary SPI that considers not only the modified probability distribution parameter but also the return period under the non-stationary process was proposed. The results were evaluated for two severe drought cases during the last 10 years in South Korea. SPIs based on the non-stationary hypothesis indicated lower drought severity than the stationary SPI, even though these two past droughts were recognized as significantly severe. This may be because the variances of summer and autumn precipitation become larger over time, which can make the probability distribution wider than before. This implies that drought expression by a statistical index such as the SPI can be distorted by the stationarity assumption, and a cautious approach is needed when deciding drought levels in the context of climate change.
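For orientation, the stationary SPI that both versions of this study start from is computed by fitting a gamma distribution to the precipitation record and mapping its CDF onto the standard normal quantile scale; the non-stationary variant then lets the distribution parameters vary with time. A minimal sketch of the stationary baseline (ignoring the zero-precipitation correction used in practice; the precipitation record is simulated):

```python
import numpy as np
from scipy import stats

def spi_stationary(precip):
    # Fit a gamma distribution to accumulated precipitation and transform
    # its CDF to standard normal quantiles (the classic stationary SPI).
    a, loc, scale = stats.gamma.fit(precip, floc=0)
    cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 30.0, 360)        # hypothetical monthly totals (mm)
print(spi_stationary(precip)[:5])
```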
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our... The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a...
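Since the snippet above is truncated, a minimal illustration of the kernel density method it refers to may help: a Gaussian KDE places a smooth kernel on each observation and averages them to approximate the pdf. The gamma-distributed stand-in for measured signal data is hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
samples = rng.gamma(2.0, 1.0, 500)     # stand-in for measured signal data
kde = stats.gaussian_kde(samples)      # Gaussian kernels, Scott's bandwidth rule
x = np.linspace(0.0, 10.0, 200)
pdf_hat = kde(x)                       # smooth estimate of the underlying pdf
```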
Huang, Peng; Ou, Ai-hua; Piantadosi, Steven; Tan, Ming
2014-11-01
We discuss the problem of properly defining treatment superiority through the specification of hypotheses in clinical trials. The need to precisely define the notion of superiority in a one-sided hypothesis test problem has been well recognized by many authors. Ideally, designed null and alternative hypotheses should correspond to a partition of all possible scenarios of underlying true probability models P = {P(ω) : ω ∈ Ω}, such that the alternative hypothesis Ha = {P(ω) : ω ∈ Ωa} can be inferred upon the rejection of the null hypothesis Ho = {P(ω) : ω ∈ Ωo}. However, in many cases, tests are carried out and recommendations are made without a precise definition of superiority or a specification of the alternative hypothesis. Moreover, in some applications, the union of probability models specified by the chosen null and alternative hypotheses does not constitute the complete model collection P (i.e., Ho ∪ Ha is smaller than P). This not only imposes a strong non-validated assumption on the underlying true models, but also leads to different superiority claims depending on which test is used, rather than on scientific plausibility. Different ways to partition P for testing treatment superiority often have different implications for sample size, power, and significance in both efficacy and comparative effectiveness trial design. Such differences are often overlooked. We provide a theoretical framework for evaluating the statistical properties of different specifications of superiority in typical hypothesis testing. This can help investigators to select proper hypotheses for treatment comparison in clinical trial design. Copyright © 2014 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Rolison, Jonathan J.; Evans, Jonathan St. B. T.; Dennis, Ian; Walsh, Clare R.
2012-01-01
Multiple cue probability learning (MCPL) involves learning to predict a criterion based on a set of novel cues when feedback is provided in response to each judgment made. But to what extent does MCPL require controlled attention and explicit hypothesis testing? The results of two experiments show that this depends on cue polarity. Learning about…
ERIC Educational Resources Information Center
Green, Dido; Lingam, Raghu; Mattocks, Calum; Riddoch, Chris; Ness, Andy; Emond, Alan
2011-01-01
The aim of the current study was to test the hypothesis that children with probable Developmental Coordination Disorder have an increased risk of reduced moderate to vigorous physical activity (MVPA), using data from a large population based study. Prospectively collected data from 4331 children (boys = 2065, girls = 2266) who had completed motor…
Conroy, M.J.; Nichols, J.D.
1984-01-01
Several important questions in evolutionary biology and paleobiology involve sources of variation in extinction rates. In all cases of which we are aware, extinction rates have been estimated from data in which the probability that an observation (e.g., a fossil taxon) will occur is related both to extinction rates and to what we term encounter probabilities. Any statistical method for analyzing fossil data should at a minimum permit separate inferences on these two components. We develop a method for estimating taxonomic extinction rates from stratigraphic range data and for testing hypotheses about variability in these rates. We use this method to estimate extinction rates and to test the hypothesis of constant extinction rates for several sets of stratigraphic range data. The results of our tests support the hypothesis that extinction rates varied over the geologic time periods examined. We also present a test that can be used to identify periods of high or low extinction probabilities and provide an example using Phanerozoic invertebrate data. Extinction rates should be analyzed using stochastic models, in which it is recognized that stratigraphic samples are random variates and that sampling is imperfect.
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
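To illustrate the kind of computation involved, the sketch below estimates a quasi-extinction probability for a stochastic Ricker model, the limiting form the authors derive for their host-parasite model. All parameter values are hypothetical.

```python
import numpy as np

def quasi_extinction_prob(n0=50.0, r=1.2, K=100.0, sigma=0.3,
                          thresh=10.0, horizon=50, reps=5000, seed=0):
    # Stochastic Ricker: N[t+1] = N[t] * exp(r*(1 - N[t]/K) + eps),
    # eps ~ N(0, sigma^2); returns P(N drops below thresh within horizon).
    rng = np.random.default_rng(seed)
    n = np.full(reps, n0)
    hit = np.zeros(reps, dtype=bool)
    for _ in range(horizon):
        n *= np.exp(r * (1.0 - n / K) + rng.normal(0.0, sigma, reps))
        hit |= n < thresh
    return hit.mean()

print(quasi_extinction_prob())
```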
Elbroch, L Mark; Lendrum, Patrick E; Quigley, Howard; Caragiulo, Anthony
2016-03-01
There are several alternative hypotheses about the effects of territoriality, kinship and prey availability on individual carnivore distributions within populations. The first is the land-tenure hypothesis, which predicts that carnivores regulate their density through territoriality and temporal avoidance. The second is the kinship hypothesis, which predicts related individuals will be clumped within populations, and the third is the resource dispersion hypothesis, which suggests that resource richness may explain variable sociality, spatial overlap or temporary aggregations of conspecifics. Research on the socio-spatial organization of animals is essential in understanding territoriality, intra- and interspecific competition, and contact rates that influence diverse ecology, including disease transmission between conspecifics and courtship behaviours. We explored these hypotheses with data collected on a solitary carnivore, the cougar (Puma concolor), from 2005 to 2012 in the Southern Yellowstone Ecosystem, Wyoming, USA. We employed 27 annual home ranges for 13 cougars to test whether home range overlap was better explained by land tenure, kinship, resource dispersion or some combination of the three. We found support for both the land tenure and resource dispersion hypotheses, but not for kinship. Cougar sex was the primary driver explaining variation in home range overlap. Males overlapped significantly with females, whereas the remaining dyads (F-F, M-M) overlapped significantly less. In support for the resource dispersion hypothesis, hunting opportunity (the probability of a cougar killing prey in a given location) was often higher in overlapping than in non-overlapping portions of cougar home ranges. In particular, winter hunt opportunity rather than summer hunt opportunity was higher in overlapping portions of female-female and male-female home ranges. Our results may indicate that solitary carnivores are more tolerant of sharing key resources with unrelated conspecifics than previously believed, or at least during periods of high resource availability. Further, our results suggest that the resource dispersion hypothesis, which is typically applied to social species, is applicable in describing the spatial organization of solitary carnivores. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
Continuous-time random-walk model for financial distributions
NASA Astrophysics Data System (ADS)
Masoliver, Jaume; Montero, Miquel; Weiss, George H.
2003-02-01
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known: the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche mark futures exchange, finding good agreement between theory and the observed data.
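A continuous-time random walk of this kind is straightforward to simulate once the two auxiliary densities are chosen. The exponential pausing-time density and Laplace jump-magnitude density below are illustrative assumptions, not the distributions fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
waits = rng.exponential(1.0, n)        # assumed pausing-time density psi(t)
jumps = rng.laplace(0.0, 0.01, n)      # assumed jump-magnitude density h(x)
t = np.cumsum(waits)                   # times of successive price changes
log_price = np.cumsum(jumps)           # log-price path sampled at jump times
```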
ERIC Educational Resources Information Center
Storkel, Holly L.; Lee, Su-Yeon
2011-01-01
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…
Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara
2013-01-01
Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…
Exploiting target amplitude information to improve multi-target tracking
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Blair, W. Dale
2006-05-01
Closely-spaced (but resolved) targets pose a challenge for measurement-to-track data association algorithms. Since the Mahalanobis distances between measurements collected on closely-spaced targets and tracks are similar, several elements of the corresponding kinematic measurement-to-track cost matrix are also similar. Lacking any other information on which to base assignments, it is not surprising that data association algorithms make mistakes. One ad hoc approach for mitigating this problem is to multiply the kinematic measurement-to-track likelihoods by amplitude likelihoods. However, this can actually be detrimental to the measurement-to-track association process. With that in mind, this paper pursues a rigorous treatment of the hypothesis probabilities for kinematic measurements and features. Three simple scenarios are used to demonstrate the impact of basing data association decisions on these hypothesis probabilities for Rayleigh, fixed-amplitude, and Rician targets. The first scenario assumes that the tracker carries two tracks but only one measurement is collected. This provides insight into more complex scenarios in which there are fewer measurements than tracks. The second scenario includes two measurements and one track. This extends naturally to the case with more measurements than tracks. Two measurements and two tracks are present in the third scenario, which provides insight into the performance of this method when the number of measurements equals the number of tracks. In all cases, basing data association decisions on the hypothesis probabilities leads to good results.
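A minimal sketch of the two likelihood ingredients, under common textbook assumptions (Gaussian kinematic residuals; a Rayleigh-fluctuating target in Rayleigh noise with unit noise variance); the paper's treatment of the full hypothesis probabilities is more involved.

```python
import numpy as np

def kinematic_likelihood(z, z_pred, S):
    # Gaussian measurement-to-track likelihood via the Mahalanobis distance.
    nu = np.asarray(z) - np.asarray(z_pred)
    d2 = nu @ np.linalg.solve(S, nu)
    norm = np.sqrt((2.0 * np.pi) ** len(nu) * np.linalg.det(S))
    return np.exp(-0.5 * d2) / norm

def amplitude_likelihood_ratio(a, snr):
    # Rayleigh target in Rayleigh noise: p1(a)/p0(a) for amplitude a and
    # expected SNR; large a on a high-SNR track raises the association weight.
    return np.exp(a**2 * snr / (2.0 * (1.0 + snr))) / (1.0 + snr)

w = kinematic_likelihood([1.0, 0.5], [0.8, 0.4], np.eye(2)) \
    * amplitude_likelihood_ratio(a=2.5, snr=10.0)
print(w)
```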
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
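The multiplicative-update idea is closely related to Wang-Landau sampling, sketched below for a 1D Ising ring: the running density-of-states estimate biases the walk toward rarely visited energies, and the update factor is reduced as the energy histogram flattens. This is a generic sketch for orientation, not the paper's self-consistent scheme.

```python
import numpy as np

def wang_landau(L=16, lnf_final=1e-4, flatness=0.8, seed=0):
    # Estimate ln g(E) for a 1D Ising ring of L spins, E = -sum s_i s_{i+1}.
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], L)
    energy = lambda s: int(-np.sum(s * np.roll(s, 1)))
    levels = range(-L, L + 1, 4)             # allowed energies of the ring
    lng = {e: 0.0 for e in levels}
    hist = {e: 0 for e in levels}
    e, lnf = energy(spins), 1.0
    while lnf > lnf_final:
        i = rng.integers(L)
        spins[i] *= -1                        # propose a single spin flip
        e_new = energy(spins)
        if np.log(rng.random()) < lng[e] - lng[e_new]:
            e = e_new                         # accept: move to the new energy
        else:
            spins[i] *= -1                    # reject: undo the flip
        lng[e] += lnf                         # multiplicative update of g(E)
        hist[e] += 1
        h = np.array(list(hist.values()))
        if h.min() > flatness * h.mean():     # histogram flat: refine the factor
            lnf /= 2.0
            hist = {k: 0 for k in hist}
    return lng

print(wang_landau())
```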
NASA Astrophysics Data System (ADS)
Saletti, M.; Molnar, P.; Hassan, M. A.
2017-12-01
Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes, because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and that those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, in which sediment is organized in patches and entrainment, transport, and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with two grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned above are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared to systems having a lower flood frequency. Our results highlight the important interactions between external hydrological forcing and internal geomorphic adjustment (e.g. jamming) in the response of step-pool streams, showing the potential of reduced-complexity models in fluvial geomorphology.
Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
NASA Astrophysics Data System (ADS)
Piotrowska, M. J.; Bodnar, M.
2018-01-01
We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of the solutions. Next, we discuss the existence of the stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular, and uniform probability densities, either separated from zero or not. Particular instability results are obtained for a general type of probability density. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for the mice B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilising the tumour dormant steady state; instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
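The linear chain trick mentioned at the end replaces an Erlang-distributed delay with a cascade of linear ODE stages, so a standard ODE solver suffices. A minimal sketch with a logistic-type growth term standing in for the tumour-immune interaction (the specific right-hand side and parameter values are assumptions, not the paper's model):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u, a=2.0, r=0.1):
    # Erlang(k=3, rate a) delay kernel realized as a chain of 3 stages:
    # y1' = a*(x - y1), y2' = a*(y1 - y2), y3' = a*(y2 - y3);
    # y3 approximates x delayed by an Erlang-distributed lag.
    x, y = u[0], u[1:]
    dx = r * x * (1.0 - y[-1])              # growth damped by the delayed state
    dy = a * (np.concatenate(([x], y[:-1])) - y)
    return np.concatenate(([dx], dy))

sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.1, 0.1, 0.1], max_step=0.1)
print(sol.y[0, -1])                         # state x at the end of the run
```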
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes
NASA Astrophysics Data System (ADS)
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-01
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Competition between harvester ants and rodents in the cold desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.
1979-09-30
Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August; whereas, the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.
Developing a Hypothetical Learning Trajectory for the Sampling Distribution of the Sample Means
NASA Astrophysics Data System (ADS)
Syafriandi
2018-04-01
Sampling distributions are special types of probability distributions that are important in hypothesis testing. The concept of a sampling distribution may well be the key concept in understanding how inferential procedures work. In this paper, we design a hypothetical learning trajectory (HLT) for the sampling distribution of the sample mean, and we discuss how the sampling distribution is used in hypothesis testing.
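Such a learning trajectory is easy to support with simulation: drawing repeated samples from a skewed population shows the sample means clustering around the population mean with spread sigma/sqrt(n) and a near-normal shape. A minimal sketch (population and sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(2.0, 100000)       # a skewed population, mu = 2
means = rng.choice(population, (10000, 30)).mean(axis=1)
print(means.mean())                                  # close to the population mean
print(means.std(), population.std() / np.sqrt(30))   # close to sigma/sqrt(n)
```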
More attention when speaking: does it help or does it hurt?
Nozari, Nazbanou; Thompson-Schill, Sharon L
2013-11-01
Paying selective attention to a word in a multi-word utterance results in a decreased probability of error on that word (benefit), but an increased probability of error on the other words (cost). We ask whether excitation of the prefrontal cortex helps or hurts this cost. One hypothesis (the resource hypothesis) predicts a decrease in the cost due to the deployment of more attentional resources, while another (the focus hypothesis) predicts even greater costs due to further fine-tuning of selective attention. Our results are more consistent with the focus hypothesis: prefrontal stimulation caused a reliable increase in the benefit and a marginal increase in the cost of selective attention. To ensure that the effects are due to changes to the prefrontal cortex, we provide two checks: We show that the pattern of results is quite different if, instead, the primary motor cortex is stimulated. We also show that the stimulation-related benefits in the verbal task correlate with the stimulation-related benefits in an N-back task, which is known to tap into a prefrontal function. Our results shed light on how selective attention affects language production, and more generally, on how selective attention affects production of a sequence over time. Copyright © 2013 Elsevier Ltd. All rights reserved.
Paleoindian demography and the extraterrestrial impact hypothesis
NASA Astrophysics Data System (ADS)
Buchanan, Briggs; Collard, Mark; Edinborough, Kevan
2008-08-01
Recently it has been suggested that one or more large extraterrestrial (ET) objects struck northern North America 12,900 ± 100 calendar years before present (calBP) [Firestone RB, et al. (2007) Proc Natl Acad Sci USA 104: 16016-16021]. This impact is claimed to have triggered the Younger Dryas major cooling event and resulted in the extinction of the North American megafauna. The impact is also claimed to have caused major cultural changes and population decline among the Paleoindians. Here, we report a study in which ≈1,500 radiocarbon dates from archaeological sites in Canada and the United States were used to test the hypothesis that the ET impact resulted in population decline among the Paleoindians. Following recent studies [e.g., Gamble C, Davies W, Pettitt P, Hazelwood L, Richards M (2005) Camb Archaeol J 15:193-223], the summed probability distribution of the calibrated dates was used to identify probable changes in human population size between 15,000 and 9,000 calBP. Subsequently, potential biases were evaluated by modeling and spatial analysis of the dated occupations. The results of the analyses were not consistent with the predictions of the extraterrestrial impact hypothesis. No evidence of a population decline among the Paleoindians at 12,900 ± 100 calBP was found. Thus, minimally, the study suggests the extraterrestrial impact hypothesis should be amended.
Paleoindian demography and the extraterrestrial impact hypothesis.
Buchanan, Briggs; Collard, Mark; Edinborough, Kevan
2008-08-19
Recently it has been suggested that one or more large extraterrestrial (ET) objects struck northern North America 12,900 +/- 100 calendar years before present (calBP) [Firestone RB, et al. (2007) Proc Natl Acad Sci USA 104: 16016-16021]. This impact is claimed to have triggered the Younger Dryas major cooling event and resulted in the extinction of the North American megafauna. The impact is also claimed to have caused major cultural changes and population decline among the Paleoindians. Here, we report a study in which approximately 1,500 radiocarbon dates from archaeological sites in Canada and the United States were used to test the hypothesis that the ET impact resulted in population decline among the Paleoindians. Following recent studies [e.g., Gamble C, Davies W, Pettitt P, Hazelwood L, Richards M (2005) Camb Archaeol J 15:193-223], the summed probability distribution of the calibrated dates was used to identify probable changes in human population size between 15,000 and 9,000 calBP. Subsequently, potential biases were evaluated by modeling and spatial analysis of the dated occupations. The results of the analyses were not consistent with the predictions of the extraterrestrial impact hypothesis. No evidence of a population decline among the Paleoindians at 12,900 +/- 100 calBP was found. Thus, minimally, the study suggests the extraterrestrial impact hypothesis should be amended.
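The summed probability distribution used in both versions of this study is conceptually simple: each calibrated date contributes a probability density over calendar years, and the densities are summed. The toy sketch below approximates each calibrated date by a Gaussian; real analyses calibrate against a curve such as IntCal first, and the dates shown are hypothetical.

```python
import numpy as np

dates = np.array([12600.0, 12800.0, 12900.0, 13100.0])  # hypothetical calBP means
errors = np.array([80.0, 60.0, 100.0, 90.0])            # hypothetical 1-sigma errors
t = np.arange(9000, 15001)
spd = sum(np.exp(-0.5 * ((t - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))
          for m, s in zip(dates, errors))
# peaks in `spd` are read as probable highs in past population size
```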
Grez, A A; González, R H
1995-09-01
The resource concentration hypothesis (Root 1973) predicts that specialist herbivorous insects should be more abundant in large patches of host plants, because the insects are more likely to find and stay longer in those patches. Between August 1989 and January 1990 we experimentally tested Root's hypothesis by analyzing the numerical response of four species of herbivorous insects associated with patches of 4, 16, 64 and 225 cabbage plants, Brassica oleracea var. capitata. In addition, we studied the colonization of patches by adults of Plutella xylostella (L.) (Lepidoptera: Plutellidae), and the migration of their larvae in patches of different sizes. Densities of the herbivorous insects did not differ significantly with patch size. Adults of P. xylostella colonized all patch types equally. Larvae did not migrate between patches, and their disappearance rate did not differ between patches. The resource concentration hypothesis is organism-dependent, being a function of adult and juvenile herbivore dispersal behavior in relation to the spatial scale of patchiness.
Hyperspectral techniques in analysis of oral dosage forms.
Hamilton, Sara J; Lowell, Amanda E; Lodder, Robert A
2002-10-01
Pharmaceutical oral dosage forms are used in this paper to test the sensitivity and spatial resolution of hyperspectral imaging instruments. The first experiment tested the hypothesis that a near-infrared (IR) tunable diode-based remote sensing system is capable of monitoring degradation of hard gelatin capsules at a relatively long distance (0.5 km). Spectra from the capsules were used to differentiate among capsules exposed to an atmosphere containing 150 ppb formaldehyde for 0, 2, 4, and 8 h. Robust median-based principal component regression with Bayesian inference was employed for outlier detection. The second experiment tested the hypothesis that near-IR imaging spectrometry of tablets permits the identification and composition of multiple individual tablets to be determined simultaneously. A near-IR camera was used to collect thousands of spectra simultaneously from a field of blister-packaged tablets. The number of tablets that a typical near-IR camera can currently analyze simultaneously was estimated to be approximately 1300. The bootstrap error-adjusted single-sample technique chemometric-imaging algorithm was used to draw probability-density contour plots that revealed tablet composition. The single-capsule analysis provides an indication of how far apart the sample and instrumentation can be and still maintain adequate signal-to-noise ratio (S/N), while the multiple-tablet imaging experiment gives an indication of how many samples can be analyzed simultaneously while maintaining an adequate S/N and pixel coverage on each sample.
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
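The core identity behind such evidence estimators is simple: with an importance (or bridge) proposal q, the marginal likelihood is the mean of p_tilde(theta)/q(theta) over draws from q. The sketch below verifies this on a toy one-dimensional problem with a fixed Student-t proposal; GMIS itself fits a Gaussian mixture to posterior samples from DREAM, which this sketch does not attempt.

```python
import numpy as np
from scipy import stats

def evidence_is(log_post_unnorm, proposal, n=50000, seed=0):
    # Importance-sampling estimate of Z = integral p_tilde(theta) d theta:
    # Z ~= mean(p_tilde(theta)/q(theta)) for theta ~ q, computed in log space.
    rng = np.random.default_rng(seed)
    theta = proposal.rvs(size=n, random_state=rng)
    logw = log_post_unnorm(theta) - proposal.logpdf(theta)
    return np.exp(logw.max()) * np.mean(np.exp(logw - logw.max()))

q = stats.t(df=5, scale=1.5)                     # heavy-tailed proposal
z_hat = evidence_is(lambda x: -0.5 * x**2, q)    # unnormalized N(0,1) density
print(z_hat, np.sqrt(2.0 * np.pi))               # the two should nearly agree
```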
Problems with the Younger Dryas Boundary (YDB) Impact Hypothesis
NASA Astrophysics Data System (ADS)
Boslough, M.
2009-12-01
One breakthrough of 20th-century Earth science was the recognition of impacts as an important geologic process. The most obvious result is a crater. There are more than 170 confirmed terrestrial impact structures, with a non-uniform spatial distribution suggesting more to be found. Many have been erased by tectonics and erosion. Deep water impacts do not form craters, and craters in ice sheets disappear when the ice melts. There is growing speculation that such hidden impacts have caused frequent major environmental events of the Holocene, but this is inconsistent with the astronomically-constrained population of Earth-crossing asteroids. Impacts can have consequences much more significant than excavation of a crater. The K/T boundary mass extinction is attributed to the environmental effects of a major impact, and some researchers argue that other extinctions, abrupt climate changes, and even civilization collapses have resulted from impacts. Nuclear winter models suggest that 2-km diameter asteroids exceed a "global catastrophe threshold" by injecting sufficient dust into the stratosphere to cause short-term climate changes, but would not necessarily collapse most natural ecosystems or cause mass extinctions. Globally-catastrophic impacts recur on timescales of about one million years. The 1994 collision of Comet Shoemaker-Levy 9 with Jupiter led us to recognize the significance of terrestrial airbursts caused by objects exploding violently in Earth's atmosphere. We have invoked airbursts to explain rare forms of non-volcanic glasses and melts by using high-resolution computational models to improve our understanding of atmospheric explosions, and have suggested that multiple airbursts from fragmented impactors could be responsible for regional effects. Our models have been cited in support of the widely-publicized YDB impact hypothesis. Proponents claim that a broken comet exploded over North America, with some fragments cratering the Laurentide Ice Sheet. They suggest an abrupt climate change caused by impact-triggered meltwater forcing, along with massive wildfires, resulted in megafaunal extinctions and collapse of the Clovis culture. We argue that the physics of fragmentation, dispersion, and airburst is not consistent with the hypothesis; that observations are no more compatible with impact than with other causes; and that the probability of the scenario is effectively nil. Moreover, millennial-scale climate events are far more frequent than catastrophic impacts, and pose a much greater threat to humanity. Sandia is a multiprogram laboratory operated by Sandia Corp, a Lockheed Martin Company, for the US DOE under Contract DE-AC04-94AL85000. [Figure caption: Probability density for the largest asteroid impact since the Last Glacial Maximum, based on a power-law size distribution. Comets are orders of magnitude less likely; a grazing trajectory or recent fragmentation further reduces the probability.]
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
NASA Astrophysics Data System (ADS)
Mestan, J.; Alvarez Polanco, E. I.
2014-12-01
Ultrasound is a form of mechanical energy with a frequency greater than ≈ 20 kHz (the upper human hearing limit). It is used in many scientific as well as industrial fields. Most modern applications of ultrasound utilize sources which are either piezoelectric or magnetostrictive (Benwell and Bly 1987). A meteorite impact has been considered as an ultrasound source in recent years (Rajlich 2011). Rajlich (2014) proposes the hypothesis that the white planes made of microcavities in Bohemian quartz have their origin in impact-related ultrasonic sounding. The Bohemian Massif has been considered to be one of the largest impact craters in the world (Papagiannis and El-Baz 1988, Papagiannis 1989, Rajlich 2014). Rajlich's hypothesis implies a liquid behavior of quartz during the impact event. If so, there must also exist planes of slightly higher density than their surroundings, together with the planes of microcavities; they should intersect each other without mutual influence (as in the case of the planes made of microcavities). Because the physics of ultrasound during an impact event is a new and largely unexplored field, we choose a simple way to approach it. One can take the sine wave and set three requirements: (1) there exist some surroundings of the points of peak amplitude; (2) these surroundings are of higher density (compression) or lower density (rarefaction) than the mean density of quartz; (3) the difference between the higher/lower and surrounding density is measurable. An experimental study of Bohemian quartz was carried out using QCT bone densitometry at the Radiology Munich. Quartz with a size of ≈ 5 x 8 cm absorbed too much X-ray (RTG) radiation (kV 140, mAs 330), which made imaging of the internal structure impossible. We propose other techniques and appeal to other scientists to face this challenge. If Bohemian quartz has a harmonically distributed density, we consider this to be support for Rajlich's hypothesis. Acknowledgements: We would like to thank Prof. Martin Mack for enabling the irradiation of the quartz sample at the Radiology Munich.
Spatio-volumetric hazard estimation in the Auckland volcanic field
NASA Astrophysics Data System (ADS)
Bebbington, Mark S.
2015-05-01
The idea of a volcanic field `boundary' is prevalent in the literature, but ill-defined at best. We use the elliptically constrained vents in the Auckland Volcanic Field to examine how spatial intensity models can be tested to assess whether they are consistent with such features. A means of modifying the anisotropic Gaussian kernel density estimate to reflect the existence of a `hard' boundary is then suggested, and the result shown to reproduce the observed elliptical distribution. A new idea, that of a spatio-volumetric model, is introduced as being more relevant to hazard in a monogenetic volcanic field than the spatiotemporal hazard model due to the low temporal rates in volcanic fields. Significant dependencies between the locations and erupted volumes of the observed centres are deduced, and expressed in the form of a spatially-varying probability density. In the future, larger volumes are to be expected in the `gaps' between existing centres, with the location of the greatest forecast volume lying in the shipping channel between Rangitoto and Castor Bay. The results argue for tectonic control over location and magmatic control over erupted volume. The spatio-volumetric model is consistent with the hypothesis of a flat elliptical area in the mantle where tensional stresses, related to the local tectonics and geology, allow decompressional melting.
Redundancy and reduction: Speakers manage syntactic information density
Florian Jaeger, T.
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
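The ensemble studied here is straightforward to sample numerically. The following sketch (assuming nothing beyond standard NumPy; the dimensions and sample counts are arbitrary choices, not the paper's) draws pairs of independent induced-measure density matrices by partial-tracing random bipartite pure states and accumulates the eigenvalues of their difference, whose absolute sum gives twice the trace distance.

```python
import numpy as np

def random_density_matrix(n, m, rng):
    """Density matrix obtained by partial-tracing a random bipartite
    pure state in C^n (x) C^m over the m-dimensional factor."""
    psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    psi /= np.linalg.norm(psi)            # normalize the pure state
    return psi @ psi.conj().T             # partial trace over factor 2

rng = np.random.default_rng(0)
n, m, trials = 50, 50, 200
eigs = []
for _ in range(trials):
    delta = random_density_matrix(n, m, rng) - random_density_matrix(n, m, rng)
    eigs.extend(np.linalg.eigvalsh(delta))   # spectrum of the difference
eigs = np.array(eigs)

# Trace distance is half the sum of the absolute eigenvalues
trace_dist = np.abs(eigs).reshape(trials, n).sum(axis=1).mean() / 2
print(f"mean trace distance ~ {trace_dist:.3f}")
```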
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
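The article's worked example and appendix code are in R; as a language-neutral illustration of the same idea, the hedged Python sketch below fits several candidate probability density functions to raw habitat-use data by maximum likelihood and selects among them quantitatively with AIC. The synthetic data, the candidate set, and the fixed location parameter are assumptions of this sketch, not the authors' choices.

```python
import numpy as np
from scipy import stats

# Hypothetical depth-use observations (m): stand-ins for the raw HSC data
rng = np.random.default_rng(42)
depths = rng.gamma(shape=4.0, scale=0.25, size=300)

candidates = {"gamma": stats.gamma,
              "lognorm": stats.lognorm,
              "weibull": stats.weibull_min}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0)           # MLE directly from raw data
    loglik = dist.logpdf(depths, *params).sum()
    k = len(params) - 1                         # loc was fixed at 0
    aic[name] = 2 * k - 2 * loglik              # quantitative model selection

best = min(aic, key=aic.get)
print({n: round(a, 1) for n, a in aic.items()}, "-> best:", best)
```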
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2012-01-01
We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method using two 12,000-case Monte Carlo simulations, we found the method achieved a missed detection rate of 0.1%, and a false alarm rate of 2%.
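The decision logic of a Wald sequential probability ratio test is compact enough to sketch. Below is a generic SPRT, not the filter-bank implementation described in the abstract: the inputs stand in for per-update log-likelihood ratios of the two constrained filters' innovations, and the thresholds follow from explicitly chosen false-alarm and missed-detection criteria (the defaults mirror the rates quoted above, but the function itself is hypothetical).

```python
import numpy as np

def wald_sprt(log_lik_ratios, alpha=0.02, beta=0.001):
    """Wald sequential probability ratio test.

    log_lik_ratios: per-update values of log[p(innov | H1) / p(innov | H0)].
    alpha: allowed false-alarm rate; beta: allowed missed-detection rate."""
    upper = np.log((1 - beta) / alpha)   # cross -> accept H1 (risky conjunction)
    lower = np.log(beta / (1 - alpha))   # cross -> accept H0 (safe miss distance)
    s = 0.0
    for k, llr in enumerate(log_lik_ratios):
        s += llr
        if s >= upper:
            return "accept H1", k
        if s <= lower:
            return "accept H0", k
    return "continue sampling", len(log_lik_ratios) - 1
```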
ERIC Educational Resources Information Center
Vinson, R. B.
In this report, the author suggests changes in the treatment of overhead costs by hypothesizing that "the effectiveness of standard costing in planning and controlling overhead costs can be increased through the use of probability theory and associated statistical techniques." To test the hypothesis, the author (1) presents an overview of the…
2014-01-01
Background: Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods: Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results: For a more valid assessment of results from a randomised clinical trial we propose the following five steps: (1) report the confidence intervals and the exact P-values; (2) report the Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a 'null' effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to the number of outcome comparisons; and (5) assess the clinical significance of the trial results. Conclusions: If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
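Step (2) can be illustrated with a normal approximation: the Bayes factor, as defined above, is the likelihood of the observed effect estimate under a zero effect divided by its likelihood under the effect hypothesised in the sample size calculation. The sketch below is a minimal illustration under that normal assumption; the function name and the numbers are invented.

```python
from scipy import stats

def bayes_factor_step2(effect_estimate, se, hypothesized_effect):
    """Likelihood of the observed trial result under a 'null' effect,
    divided by its likelihood under the effect assumed in the
    sample-size calculation (normal approximation)."""
    p_null = stats.norm.pdf(effect_estimate, loc=0.0, scale=se)
    p_alt = stats.norm.pdf(effect_estimate, loc=hypothesized_effect, scale=se)
    return p_null / p_alt

# Observed risk difference 0.08 (SE 0.03) vs. an anticipated effect of 0.10:
print(bayes_factor_step2(0.08, 0.03, 0.10))  # ~0.036, favouring a real effect
```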
Testing fundamental ecological concepts with a Pythium-Prunus pathosystem
USDA-ARS?s Scientific Manuscript database
The study of plant-pathogen interactions has enabled tests of basic ecological concepts on plant community assembly (Janzen-Connell Hypothesis) and plant invasion (Enemy Release Hypothesis). We used a field experiment to (#1) test whether Pythium effects depended on host (seedling) density and/or d...
Shepstone, L; Fordham, R; Lenaghan, E; Harvey, I; Cooper, C; Gittoes, N; Heawood, A; Peters, T J; O'Neill, T; Torgerson, D; Holland, R; Howe, A; Marshall, T; Kanis, J A; McCloskey, E
2012-10-01
SCOOP is a UK seven-centre, pragmatic, randomised controlled trial with 5-year follow-up, including 11,580 women aged 70 to 85 years, to assess the effectiveness and cost-effectiveness of a community-based screening programme to reduce fractures. It utilises the FRAX algorithm and DXA to assess the 10-year probability of fracture. Introduction: Osteoporotic, or low-trauma, fractures present a considerable burden to the National Health Service and have major adverse effects on quality of life, disability and mortality for the individual. Methods: Given the availability of efficacious treatments and a risk assessment tool based upon clinical risk factors and bone mineral density, a case exists to undertake a community-based controlled evaluation of screening for subjects at high risk of fracture, under the hypothesis that such a screening programme would reduce fractures in this population. Results: This study is a UK seven-centre, unblinded, pragmatic, randomised controlled trial with a 5-year follow-up period. A total of 11,580 women, aged 70 to 85 years and not on prescribed bone protective therapy, will be consented to the trial by post via primary care, providing 90% power to detect an 18% decrease in fractures. Conclusions: Participants will be randomised to either a screening arm or control. Those undergoing screening will have a 10-year fracture probability computed from baseline risk factors together with bone mineral density measured by DXA in selected subjects. Individuals above an age-dependent threshold of fracture probability will be recommended for treatment for the duration of the trial. Subjects in the control arm will receive 'usual care'. Participants will be followed up 6 months after randomisation and annually by postal questionnaires, with independent checking of hospital and primary care records. The primary outcome will be the proportion of individuals sustaining fractures in each group. An economic analysis will be carried out to assess the cost-effectiveness of screening. A qualitative evaluation will be conducted to examine the acceptability of the process to participants.
Maestlin's teaching of Copernicus. The evidence of his university textbook and disputations.
NASA Astrophysics Data System (ADS)
Methuen, C.
1996-06-01
Michael Maestlin (1550 - 1631), professor of mathematics at the University of Tübingen from 1584 until his death, is probably best known as the teacher of Johannes Kepler. As such he has merited more attention from historians than most other sixteenth-century German professors of mathematics. While Maestlin's own achievements (for instance, his correct description of earthshine, his observation and identification of the nova of 1572, and his attempt to determine the orbit of the comet of 1577 - 1578) have been noted, Kepler's testimony that he learned the Copernican system from Maestlin has meant that attention has been focused on Maestlin's attitude toward the Copernican hypothesis. Central to the ensuing discussion has been the question of what Maestlin actually taught, and particularly the light shed on it by such of his teaching materials as survive; that question forms the subject of this paper. Copernicus's planetary hypothesis was probably not taught to every student in the University of Tübingen, but Maestlin seems always to have made the new hypothesis available to those students who had the interest and ability to pursue it.
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability of that event being related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances of medical decision making. While 'probabilistic or evidence-based' reasoning may seem at first glance to involve more mathematical formulas, this attitude is more dynamic and less imprisoned by the rigidity of mathematics than the 'deterministic or mathematical attitude'. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and the use of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of a series of tests to refine probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi, Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrödinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in terms of lowering and raising operators. The lowering and raising operators can be obtained from the relationship between the original Hamiltonian and the (super)potential. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Pöschl-Teller potential. Graphs of the wave function and the probability density are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically in integral form or from the probability density graph produced by the program.
A note on the IQ of monozygotic twins raised apart and the order of their birth.
Pencavel, J H
1976-10-01
This note examines James Shields' sample of monozygotic twins raised apart to entertain the hypothesis that there is a significant association between the measured IQ of these twins and the order of their birth. A non-parametric test supports this hypothesis and then a linear probability function is estimated that discriminates the effects on IQ of birth order from the effects of birth weight.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are poorly fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. We then use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
Structure Function Scaling Exponent and Intermittency in the Wake of a Wind Turbine Array
NASA Astrophysics Data System (ADS)
Aseyev, Aleksandr; Ali, Naseem; Cal, Raul
2015-11-01
Hot-wire measurements obtained in a 3 × 3 wind turbine array boundary layer are utilized to analyze high-order structure functions, intermittency effects, and the probability density functions of velocity increments at different scales within the energy cascade. The intermittency exponent is found to be greater in the far-wake region than in the near wake. At hub height, the intermittency exponent is found to be null. ESS scaling exponents of the second-, fourth-, and fifth-order structure functions remain relatively constant as a function of height in the far wake, whereas in the near wake they are strongly affected by the passage of the rotor and thus show a dependence on physical location. The proposed models to which the data are compared generally overpredict the structure functions in the far-wake region. The PDFs in the far-wake region display wider tails than in the near-wake region, and the constant-skewness hypothesis based on local isotropy is verified in the wake. CBET-1034581.
Nature of the Kirkwood gaps in the asteroid belt
NASA Technical Reports Server (NTRS)
Dermott, S. F.; Murray, C. D.
1983-01-01
It is demonstrated that the Kirkwood gaps are not merely regions of low asteroidal number density, but are regions in a-e-sin(I/2) space where libration of some argument is possible. It is argued that neither the statistical nor the cosmogonic hypothesis of gap formation can account for these new observations. It is shown that the present distribution of asteroidal semimajor axes can be used to deduce the present semimajor axis of Jupiter to an accuracy of one part in five thousand. Thus, there has been very little change in the orbital period of Jupiter since the time of formation of the present gaps. This observation eliminates the possibility that the observed gaps were formed by resonance sweeping at the time of the dispersal of the accretion disk. It is concluded that the gaps have been formed by the gravitational action of Jupiter on individual asteroids and that gap formation has probably continued throughout the lifetime of the solar system.
Multi-Target State Extraction for the SMC-PHD Filter
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-01-01
The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large number of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via a clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274
Information entropy and dark energy evolution
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; Luongo, Orlando
Here, the information entropy is investigated in the context of early and late cosmology under the hypothesis that distinct phases of universe evolution are entangled with one another. The approach is based on the entangled state ansatz, representing a coarse-grained definition of primordial dark temperature associated with an effective entangled energy density. The dark temperature definition comes from assuming either the Von Neumann or the linear entropy as the source of cosmological thermodynamics. We interpret the involved information entropies by means of probabilities of forming structures during cosmic evolution. Following this recipe, we propose that the quantum entropy is simply associated with the thermodynamical entropy, and we investigate the consequences of our approach using the adiabatic sound speed. As byproducts, we analyze two phases of universe evolution, the late and early stages. We first recover that dark energy reduces to a pure cosmological constant as the zero-order entanglement contribution, and second that inflation is well described by means of an effective potential. In both cases, we infer numerical limits which are compatible with current observations.
Litterfall mercury deposition in Atlantic forest ecosystem from SE-Brazil.
Teixeira, Daniel C; Montezuma, Rita C; Oliveira, Rogério R; Silva-Filho, Emmanoel V
2012-05-01
Litterfall is believed to be the major flux of Hg to soils in forested landscapes, yet much less is known about this input in tropical environments. The Hg litterfall flux was measured during one year in an Atlantic Forest fragment located within the Rio de Janeiro urban perimeter, in the Southeastern region of Brazil. The results indicated a mean annual Hg concentration of 238 ± 52 ng g⁻¹ and a total annual Hg deposition of 184 ± 8.2 μg m⁻² y⁻¹. The negative correlation observed between rain precipitation and Hg concentrations is probably related to the higher photosynthetic activity observed during summer. The total Hg concentration in leaves from the most abundant species varied from 60 to 215 ng g⁻¹. Hg concentration showed a positive correlation with stomatal and trichome densities. These characteristics support the hypothesis that the Tropical Forest is an efficient mercury sink and that litter plays a key role in Hg dynamics. Copyright © 2011 Elsevier Ltd. All rights reserved.
Riabchenko, A S; Avetisian, T V; Babosha, A V
2009-01-01
Scanning electron microscopy was used to investigate the regularities in the growth direction of infection structures and colonies of the agent of powdery mildew of wheat, Erysiphe graminis f. sp. tritici. Appressoria with normal morphology on wheat leaves grow predominantly along the long axis of the cell, while most anomalous appressoria grow perpendicular to it. Treatment with zeatin changes the ratio of the growth directions of normal appressoria and of the hyphae of the colonies. The dependence of these parameters, and of the surface density of colonies, on the concentration of the phytohormone is monophasic. The hypothesis is suggested that the strategy for selecting the growth direction of infection structures on leaves with an anisotropic surface depends on the most probable position of the receptor cell, and on the action of cytokinins through their participation in the redistribution of nutrients between infected and noninfected cells of the host plant.
Gray, Charles A.
2016-01-01
Management responses to reconcile declining fisheries typically include closed areas and times to fishing. This study evaluated this strategy for a beach clam fishery by testing the hypothesis that changes in the densities and size compositions of clams from before to during harvesting would differ between commercially fished and non-fished beaches. Sampling was spatially stratified across the swash and dry sand habitats on each of two commercially fished and two non-fished beaches, and temporally stratified across three six-week blocks: before, early and late harvesting. Small-scale spatio-temporal variability in the densities and sizes of clams was prevalent across both habitats and the components of variation were generally greatest at the lowest levels examined. Despite this, differences in the densities and sizes of clams among individual beaches were evident, but there were few significant differences across the commercially fished versus non-fished beaches from before to during harvesting. There was no evidence of reduced densities or truncated size compositions of clams on fished compared to non-fished beaches, contrasting reports of some other organisms in protected areas. This was probably due to a combination of factors, including the current levels of commercial harvests, the movements and other local-scale responses of clams to ecological processes acting independently across individual beaches. The results identify the difficulties in detecting fishing-related impacts against inherent levels of variability in clam populations. Nevertheless, continued experimental studies that test alternate management arrangements may help refine and determine the most suitable strategies for the sustainable harvesting of beach clams, ultimately enhancing the management of sandy beaches. PMID:26731102
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments is reviewed, along with some of its problems and the conditions under which it fails. In later sections, the functional form of the maximum entropy method of moments probability distribution is incorporated into Bayesian probability theory. It is shown that Bayesian probability theory solves all of the problems of the maximum entropy method of moments: one gets posterior probabilities for the Lagrange multipliers and, finally, one can put error bars on the resulting estimated density function.
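As a concrete illustration of the method-of-moments step reviewed here, the sketch below numerically solves for the Lagrange multipliers of a maximum entropy density constrained to match a set of power moments, with normalization absorbing the zeroth constraint. The grid limits, starting values, and the use of a generic root finder are assumptions of this sketch, not the paper's Bayesian treatment.

```python
import numpy as np
from scipy.optimize import fsolve

def maxent_density(moments, grid):
    """Maximum-entropy density on `grid` matching the power moments
    mu_k = E[x^k], k = 1..K; the grid must cover the density's support."""
    K = len(moments)
    powers = np.vstack([grid**k for k in range(1, K + 1)])

    def density(lam):
        q = np.exp(-lam @ powers)          # p(x) ~ exp(-sum_k lam_k x^k)
        return q / np.trapz(q, grid)       # normalization = k=0 constraint

    def residuals(lam):
        p = density(lam)
        return [np.trapz(grid**k * p, grid) - mu
                for k, mu in zip(range(1, K + 1), moments)]

    lam = fsolve(residuals, x0=np.zeros(K))  # solve the moment equations
    return density(lam)

# With mu1 = 0 and mu2 = 1 this recovers a standard Gaussian
grid = np.linspace(-6, 6, 2001)
p = maxent_density([0.0, 1.0], grid)
print(np.trapz(grid**2 * p, grid))   # ~1.0, as constrained
```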
Car accidents induced by a bottleneck
NASA Astrophysics Data System (ADS)
Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid
2017-12-01
Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents (Pac) occurring at the entrance of the merging part of two roads (i.e., a junction). The simulation results show that the existence of non-cooperative drivers plays a major role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb raises Pac at low densities, while it improves road safety at high densities. The phase diagram of the system is also constructed.
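The Nagel-Schreckenberg update underlying this study is simple to state in code. The sketch below implements the standard single-lane NS rules on a ring (acceleration, collision-free braking, random slowdown, movement); the junction geometry, non-cooperative drivers, and the paper's accident-counting criterion are not included. A criterion commonly used in this literature counts a potential accident when the gap to the leading car is at most vmax and the leading car stops abruptly, but that detail is an assumption here, not taken from the abstract.

```python
import numpy as np

def ns_step(x, v, L, vmax, p_brake, rng):
    """One parallel update of the Nagel-Schreckenberg cellular automaton
    on a ring of L cells; x are positions in cyclic order, v are speeds."""
    gaps = (np.roll(x, -1) - x - 1) % L               # empty cells ahead
    v = np.minimum(v + 1, vmax)                       # 1. accelerate
    v = np.minimum(v, gaps)                           # 2. brake (no collision)
    slow = (rng.random(v.size) < p_brake) & (v > 0)   # 3. random slowdown
    v[slow] -= 1
    return (x + v) % L, v                             # 4. move

# A ring with 30 cars on 200 cells
rng = np.random.default_rng(0)
L, N = 200, 30
x = np.sort(rng.choice(L, size=N, replace=False))
v = np.zeros(N, dtype=int)
for _ in range(1000):
    x, v = ns_step(x, v, L, vmax=5, p_brake=0.3, rng=rng)
```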
Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination
Sinkkonen, Aki
2005-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Myers, Samuel M.; Modine, Normand A.
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, whose computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
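The "ecological distance" idea can be sketched by running a shortest-path computation over a resistance raster and plugging the result into a half-normal encounter model. Below is a minimal version (4-neighbour moves, edge weights averaging the two cell costs, and hypothetical p0 and sigma); the authors' likelihood-based estimation of the cost surface is not reproduced here.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def ecological_distance(cost, src):
    """Least-cost ('ecological') distances from cell `src` over a
    resistance raster, using 4-neighbour moves."""
    nr, nc = cost.shape
    idx = lambda r, c: r * nc + c
    g = lil_matrix((nr * nc, nr * nc))
    for r in range(nr):
        for c in range(nc):
            for dr, dc in ((0, 1), (1, 0)):      # right and down edges
                r2, c2 = r + dr, c + dc
                if r2 < nr and c2 < nc:
                    w = 0.5 * (cost[r, c] + cost[r2, c2])
                    g[idx(r, c), idx(r2, c2)] = w
    d = dijkstra(g.tocsr(), directed=False, indices=idx(*src))
    return d.reshape(nr, nc)

# Half-normal encounter model on ecological rather than Euclidean distance
cost = np.ones((20, 20)); cost[:, 10] = 25.0     # a resistant barrier
d = ecological_distance(cost, (10, 2))           # activity centre at (10, 2)
p0, sigma = 0.1, 3.0                             # hypothetical parameters
p_encounter = p0 * np.exp(-d**2 / (2 * sigma**2))
```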
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope (the density difference between adjacent cells) and its fluctuations is also computed from the two-cell joint PDF; it also compares very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
ERIC Educational Resources Information Center
Rispens, Judith; Baker, Anne; Duinmeijer, Iris
2015-01-01
Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…
A test of the size-constraint hypothesis for a limit to sexual dimorphism in plants.
Labouche, Anne-Marie; Pannell, John R
2016-07-01
In flowering plants, many dioecious species display a certain degree of sexual dimorphism in non-reproductive traits, but this dimorphism tends to be much less striking than that found in animals. Sexual size dimorphism in plants may be limited because competition for light in crowded environments so strongly penalises small plants. The idea that competition for light constrains the evolution of strong sexual size dimorphism in plants (the size-constraint hypothesis) implies a strong dependency of the expression of sexual size dimorphism on neighbouring density, as a result of the capacity of plants to adjust their reproductive effort and investment in growth in response to their local environment. Here, we tested this hypothesis by experimentally altering the context of competition for light among male-female pairs of the light-demanding dioecious annual plant Mercurialis annua. We found that males were smaller than females across all treatments, but sexual size dimorphism was diminished for pairs grown at higher densities. This result is consistent with the size-constraint hypothesis. We discuss our results in terms of the tension between selection on size acting in opposite directions on males and females, which have different optima under sexual selection, and stabilizing selection for similar sizes in males and females, which have similar optima under viability selection, favouring plasticity in size expression under different density conditions.
[Lifestyle and probability of dementia in the elderly].
León-Ortiz, Pablo; Ruiz-Flores, Manuel Leonardo; Ramírez-Bermúdez, Jesús; Sosa-Ortiz, Ana Luisa
2013-01-01
There is evidence of a relationship between physical and cognitive activity and the development of dementia, although this hypothesis has not been tested in the Mexican population. Our objective was to analyze the association between increased participation in physical and cognitive activities and the probability of having dementia, using a Mexican open-population sample. We carried out a cross-sectional survey of an open Mexican population of urban and rural residents aged 65 years and older; we performed cognitive assessments to identify subjects with dementia, as well as questionnaires to assess the level of participation in physical and cognitive activities. We performed a binary logistic regression analysis to establish the association between participation and the probability of having dementia. We included 2003 subjects, 180 with a diagnosis of dementia. Subjects with dementia were older, had less education and had a higher prevalence of some chronic diseases. Low participation in cognitive activities was associated with a higher probability of developing dementia. Patients with dementia had significantly lower scores on physical activity scales. This study supports the hypothesis of a relationship between low cognitive and physical activity and the presentation of dementia.
Gamma Strength Functions and Level Densities from 300 MeV Proton Scattering at 0°
NASA Astrophysics Data System (ADS)
von Neumann-Cosel, Peter; Bassauer, Sergej; Martin, Dirk
The gamma strength function (GSF) as well as total level densities (LDs) in 208Pb and 96Mo were extracted from high-resolution forward angle inelastic proton scattering data taken at RCNP, Osaka, Japan, and compared to experimental results obtained with the Oslo method in order to test the validity of the Brink-Axel (BA) hypothesis in the energy region of the pygmy dipole resonance. The case of 208Pb is inconclusive because of strong fluctuations of the GSF due to the small level density in a doubly closed-shell nucleus. In 96Mo the data are consistent with the BA hypothesis. The good agreement of LDs provides an independent confirmation of the approach underlying the decomposition of GSF and LDs in Oslo-type experiments.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls, while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size (within some tolerance) is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted as α90/95 PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaw sizes and α90. In general, it is concluded that if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95 PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw-size tolerance about the mean flaw size, and flaw-size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
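The classic point-estimate demonstration logic can be verified in a few lines: detecting all n of n flaws demonstrates 90% POD at 95% confidence precisely when 0.9^n falls below 0.05, which first happens at n = 29. The helper below is a hypothetical utility illustrating that arithmetic, not NASA's qualification procedure.

```python
def min_flaws_for_pod_demo(pod=0.90, confidence=0.95):
    """Smallest n such that detecting n of n flaws rejects POD < `pod`
    at the given confidence, i.e. find n with pod**n <= 1 - confidence."""
    n = 1
    while pod**n > 1 - confidence:
        n += 1
    return n

print(min_flaws_for_pod_demo())   # 29: the classic 29-of-29 rule
print(0.90**29)                   # ~0.047 < 0.05
```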
Estimating the Probability of Traditional Copying, Conditional on Answer-Copying Statistics.
Allen, Jeff; Ghattas, Andrew
2016-06-01
Statistics for detecting copying on multiple-choice tests produce p values measuring the probability of a value at least as large as that observed, under the null hypothesis of no copying. The posterior probability of copying is arguably more relevant than the p value, but cannot be derived from Bayes' theorem unless the population probability of copying and probability distribution of the answer-copying statistic under copying are known. In this article, the authors develop an estimator for the posterior probability of copying that is based on estimable quantities and can be used with any answer-copying statistic. The performance of the estimator is evaluated via simulation, and the authors demonstrate how to apply the formula using actual data. Potential uses, generalizability to other types of cheating, and limitations of the approach are discussed.
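The estimator's Bayes-theorem skeleton is easy to write down. In the hedged sketch below, the population probability of copying and the detection power of the statistic are supplied by hand, which are exactly the quantities the article shows how to estimate; the function name and numbers are illustrative only.

```python
def posterior_copy_probability(p_value, prior, power):
    """Posterior probability of copying via Bayes' theorem.

    prior: population rate of copying; power: P(statistic exceeds the
    observed threshold | copying). Under no copying, P(statistic exceeds
    the threshold) is the p value itself."""
    num = prior * power
    den = num + (1 - prior) * p_value
    return num / den

# e.g. p = 0.001, a 1% base rate of copying, 60% detection power
print(posterior_copy_probability(0.001, 0.01, 0.60))   # ~0.858
```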
The Hypothesis-Driven Physical Examination.
Garibaldi, Brian T; Olson, Andrew P J
2018-05-01
The physical examination remains a vital part of the clinical encounter. However, physical examination skills have declined in recent years, in part because of decreased time at the bedside. Many clinicians question the relevance of physical examinations in the age of technology. A hypothesis-driven approach to teaching and practicing the physical examination emphasizes the performance of maneuvers that can alter the likelihood of disease. Likelihood ratios are diagnostic weights that allow clinicians to estimate the post-test probability of disease. This hypothesis-driven approach to the physical examination increases its value and efficiency, while preserving its cultural role in the patient-physician relationship. Copyright © 2017 Elsevier Inc. All rights reserved.
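The likelihood-ratio arithmetic works on the odds scale: convert the pre-test probability to odds, multiply by the finding's likelihood ratio, and convert back. A minimal sketch with hypothetical numbers:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Update disease probability with a finding's likelihood ratio."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# e.g. 30% pre-test probability, positive exam finding with LR+ = 5
print(post_test_probability(0.30, 5.0))   # ~0.68
```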
Fractional Brownian motion with a reflecting wall
NASA Astrophysics Data System (ADS)
Wada, Alexander H. O.; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior
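One direct way to reproduce this kind of Monte Carlo experiment is to draw fractional Gaussian noise exactly from its covariance, sum it into fractional Brownian motion, and impose the wall. The sketch below uses a Cholesky factorization and implements the reflecting wall by taking absolute values, which is one simple choice; the paper's simulation scheme may differ, and the Hurst exponent and sizes here are arbitrary.

```python
import numpy as np

def reflected_fbm(n_steps, hurst, rng):
    """Fractional Brownian motion on [0, inf) with a reflecting wall at 0.

    Fractional Gaussian noise is drawn exactly from its autocovariance
    (Cholesky factorization; fine for modest n), then cumulatively summed."""
    k = np.arange(n_steps)
    # fGn autocovariance: g(k) = (|k+1|^2H - 2|k|^2H + |k-1|^2H) / 2
    g = 0.5 * (np.abs(k + 1)**(2 * hurst) - 2 * np.abs(k)**(2 * hurst)
               + np.abs(k - 1)**(2 * hurst))
    cov = g[np.abs(k[:, None] - k[None, :])]
    noise = np.linalg.cholesky(cov) @ rng.normal(size=n_steps)
    walk = np.cumsum(noise)
    return np.abs(walk)    # reflection at the wall x = 0

x = reflected_fbm(1000, hurst=0.75, rng=np.random.default_rng(0))
print(x.max(), (x**2).mean())
```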
NASA Astrophysics Data System (ADS)
Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah
2018-01-01
The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro atmospheric pressure plasma jet (COST-μAPPJ) is investigated using a global model. It is found that the choice of γ, particularly for low-mass species with large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.
Effects of heterogeneous traffic with speed limit zone on the car accidents
NASA Astrophysics Data System (ADS)
Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.
2016-06-01
Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of heterogeneous traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ has an important effect in heterogeneous traffic, particularly in the mixed-velocity case. In the deterministic case, an SLZ leads to the appearance of car accidents even at low densities; in this region Pac increases with an increasing fraction of fast vehicles (Ff). In the nondeterministic case, the SLZ reduces the effect of the braking probability Pb at low densities. Furthermore, the impact of multiple SLZs on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without an SLZ (n = 0) is lower than Pac with multiple SLZs (n > 0). However, the existence of multiple SLZs along the road decreases the risk of collision in the congested phase.
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density map for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log-likelihood of a set of structure factors {F_h} using a local log-likelihood function of the form LL(x) = ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) is the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for the electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
Effects of high density on spacing behaviour and reproduction in Akodon azarae: A fencing experiment
NASA Astrophysics Data System (ADS)
Ávila, Belén; Bonatto, Florencia; Priotto, José; Steinmann, Andrea R.
2016-01-01
We studied the short-term spacing behaviour of the Pampean grassland mouse (Akodon azarae) in relation to population density in four 0.25-ha enclosures (two control and two experimental) during the 2011 breeding season. Based on the hypothesis that A. azarae breeding females exhibit spacing behaviour and breeding males show a fusion spatial response, we tested the following predictions: (1) the home range size and degree of intrasexual overlap of females are independent of population density; (2) at high population density, the home range size of males decreases and the degree of intrasexual home range overlap increases. To determine whether female reproductive success decreases at high population density, we analyzed pregnancy rate, litter size and weight, and the period until fecundation under both low and high enclosure population densities. We found that both males and females varied their home range size in relation to population density. Male home ranges were always bigger than those of females, but at high population density the home range sizes of both sexes decreased. Females kept exclusive home ranges independent of density, whereas males decreased home range overlap in high-density breeding populations. Although females produced litters of similar size in both treatments, litter weight, pregnancy rate and the period until fecundation varied with population density. Our results did not support the hypothesis that females of A. azarae exhibit spacing behaviour at high density, nor that males exhibit a fusion spatial response.
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
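The detection curves described here are logistic regressions of detection outcomes on sampling intensity and target density. The sketch below fits such a curve to synthetic survey data (all names and numbers are invented, and the covariates are a simplification of the study's design) and reads off the modelled false-negative probability at a fixed low density.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic survey outcomes: 1 = target detected, 0 = false negative
rng = np.random.default_rng(7)
density = rng.uniform(0.1, 10.0, 400)    # targets per square metre
effort = rng.uniform(1.0, 30.0, 400)     # search time, minutes
true_p = 1.0 / (1.0 + np.exp(-(-3.0 + 0.4 * density + 0.1 * effort)))
detected = (rng.random(400) < true_p).astype(int)

X = np.column_stack([density, effort])
model = LogisticRegression().fit(X, detected)

# Detection curve at a fixed low density (0.5 per m^2) across effort levels;
# 1 - P(detect) is the modelled probability of a false negative.
efforts = np.linspace(1.0, 30.0, 50)
p_detect = model.predict_proba(np.column_stack([np.full(50, 0.5), efforts]))[:, 1]
false_neg = 1.0 - p_detect
print(false_neg[[0, -1]])   # false-negative risk at minimal vs. maximal effort
```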
Spiegelhalter, D J; Freedman, L S
1986-01-01
The 'textbook' approach to determining sample size in a clinical trial has some fundamental weaknesses which we discuss. We describe a new predictive method which takes account of prior clinical opinion about the treatment difference. The method adopts the point of clinical equivalence (determined by interviewing the clinical participants) as the null hypothesis. Decision rules at the end of the study are based on whether the interval estimate of the treatment difference (classical or Bayesian) includes the null hypothesis. The prior distribution is used to predict the probabilities of making the decisions to use one or other treatment or to reserve final judgement. It is recommended that sample size be chosen to control the predicted probability of the last of these decisions. An example is given from a multi-centre trial of superficial bladder cancer.
Extended target recognition in cognitive radar networks.
Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin
2010-01-01
We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar lines of sight (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar under amplitude fluctuations. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, J.; Gardner, B.; Lucherini, M.
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, Juan; Gardner, Beth; Lucherini, Mauro
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
Making inference from wildlife collision data: inferring predator absence from prey strikes
Hosack, Geoffrey R.; Barry, Simon C.
2017-01-01
Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using biologically realistic numerical response functions, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in factors influencing the runway strike incidence for individual airports and to make valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application. PMID:28243534
The small low SNR target tracking using sparse representation information
NASA Astrophysics Data System (ADS)
Yin, Lifan; Zhang, Yiqun; Wang, Shuo; Sun, Chenggang
2017-11-01
Tracking small targets, such as missile warheads, from a remote distance is a difficult task, since such targets are "points" that resemble the sensor's noise points. As a result, traditional tracking algorithms use only the information contained in a point measurement, such as position and intensity, as characteristics to identify targets among noise points. In fact, because of photon diffusion, a small target is not a point in the focal plane array: it occupies an area larger than one sensor cell. If we can take this geometric characteristic into account as a new dimension of information, it will be helpful in distinguishing targets from noise points. In this paper, we use a method named sparse representation (SR) to describe the geometric information of the target intensity and define it as the SR information of the target. Modeling the intensity spread and solving for its SR coefficients, the SR information is represented by establishing its likelihood function. Further, the SR information likelihood is incorporated into the conventional Probability Hypothesis Density (PHD) filter algorithm with point measurements. To illustrate the performance of the algorithm with and without the SR information, the detection capability and estimation error are compared through simulation. Results demonstrate that the proposed method has higher estimation accuracy and a higher probability of detecting targets than the conventional algorithm without the SR information.
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2007-02-02
[Symbol glossary fragment; ellipses mark gaps in the source text.] Pgha: probability of a person being in the glass hazard area. Phit: probability of hit. Phit(f): probability of hit for fatality. Phit(maj): probability of hit for major injury. Phit(min): probability of hit for minor injury. Pi: debris probability densities at the ES. PMaj(pair): individual ... combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences: fatality Phit(f), major injury ...
Electrofishing capture probability of smallmouth bass in streams
Dauwalter, D.C.; Fisher, W.L.
2007-01-01
Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
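The abundance adjustment described here is simple to state in code; a minimal sketch with hypothetical numbers (the paper's model also includes pass number, depth, and other covariates):

```python
import numpy as np

catch = np.array([42, 17, 29])               # fish sampled at three sites
fish_len = np.array([250.0, 180.0, 310.0])   # mean fish length (mm), illustrative
logit_p = -0.8 + 0.004 * fish_len            # hypothetical logistic model for capture
p_hat = 1 / (1 + np.exp(-logit_p))           # predicted capture probability
n_hat = catch / p_hat                        # abundance estimate, N = C / p
print(np.round(n_hat, 1))
```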
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar estimates for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
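The tool itself is an R-based web service; as a minimal Python analogue of its non-parametric branch (names and data here are hypothetical), a kernel density estimate of landslide area could be computed as:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical landslide areas (m^2); heavy-tailed, so work in log10 space.
areas = 10 ** np.random.default_rng(1).normal(3.2, 0.6, 500)
log_a = np.log10(areas)

kde = gaussian_kde(log_a)                 # kernel density estimate
grid = np.linspace(log_a.min(), log_a.max(), 200)
pdf_log = kde(grid)                       # probability density in log10(area)
freq_density = pdf_log * len(areas)       # frequency density (counts per unit)
```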
Denton, Ellen-Ge D; Shaffer, Jonathan A; Alcantara, Carmela; Cadermil, Esteban
2016-02-01
The Ethnic Density hypothesis posits that living around others from similar ethnic backgrounds reduces the risk of adverse mental health outcomes such as depression. Contrary to this hypothesis, previous work has shown that Hispanic ethnic density is cross-sectionally associated with increased depressive symptom severity among patients hospitalized with an acute coronary syndrome (ACS; myocardial infarction or unstable angina pectoris). To date, no study has examined the prospective association of Hispanic ethnic density with long-term depressive symptom severity following an acute medical event. We prospectively assessed the impact of Hispanic ethnic density on depressive symptoms, 1 year following an ACS event, among Hispanic adult patients. We tested the non-linear association between ethnic density and depressive symptoms to account for inconsistent findings on the ethnic density hypothesis. At the time of an index ACS event (i.e., baseline, N = 326) and 1 year later (N = 252), Hispanic patients from the Prescription Usage, Lifestyle, and Stress Evaluation prospective cohort study completed the Beck Depression Inventory as a measure of depressive symptom severity. Hispanic ethnic density was defined by the percentage of Hispanic residents within each patient's census tract using data extracted from the American Community Survey Census (2010-2013). Covariates included baseline demographic factors (age, gender, English fluency, education, nativity status), cardiovascular factors (Charlson comorbidity index, left ventricular ejection fraction, Global Registry of Acute Coronary Events 6-month prognostic risk score), and neighborhood factors (residential density, income, and percentage of households receiving public assistance). In an adjusted multivariable linear regression analysis there was a significant curvilinear association between Hispanic ethnic density and depressive symptom severity at 1 year. As Hispanic ethnic density increased from low to moderate density, there was an increase in depressive symptoms, but depressive symptoms slightly declined in census tracts with the highest density of Hispanics. Furthermore, gender significantly moderated the relation between Hispanic ethnic density and 1-year depressive symptom severity, such that Hispanic ethnic density was significantly associated with increased depressive symptom severity for female Hispanic patients with ACS, but not for male Hispanic patients. Previous research suggests that ethnic density may be protective against depression in Hispanic enclaves; however, our findings suggest a non-linear ethnic density effect and an overall more complex association between ethnic density and depression. These data add to a growing body of literature on the effects of sociodemographic and contextual factors on health.
Denton, Ellen-ge D.; Shaffer, Jonathan A.; Alcantara, Carmela; Cadermil, Esteban
2015-01-01
The Ethnic Density hypothesis posits that living around others from similar ethnic backgrounds reduces the risk of adverse mental health outcomes such as depression. Contrary to this hypothesis, previous work has shown that Hispanic ethnic density is cross-sectionally associated with increased depressive symptom severity among patients hospitalized with an acute coronary syndrome (ACS; myocardial infarction or unstable angina pectoris). To date, no study has examined the prospective association of Hispanic ethnic density with long-term depressive symptom severity following an acute medical event. We prospectively assessed the impact of Hispanic ethnic density on depressive symptoms, 1 year following an ACS event, among Hispanic adult patients. We tested the non-linear association between ethnic density and depressive symptoms to account for inconsistent findings on the ethnic density hypothesis. At the time of an index ACS event (i.e., baseline, N = 326) and 1 year later (N = 252), Hispanic patients from the Prescription Usage, Lifestyle, and Stress Evaluation prospective cohort study completed the Beck Depression Inventory as a measure of depressive symptom severity. Hispanic ethnic density was defined by the percentage of Hispanic residents within each patient's census tract using data extracted from the American Community Survey Census (2010–2013). Covariates included baseline demographic factors (age, gender, English fluency, education, nativity status), cardiovascular factors (Charlson comorbidity index, left ventricular ejection fraction, Global Registry of Acute Coronary Events 6-month prognostic risk score), and neighborhood factors (residential density, income, and percentage of households receiving public assistance). In an adjusted multivariable linear regression analysis there was a significant curvilinear association between Hispanic ethnic density and depressive symptom severity at 1 year. As Hispanic ethnic density increased from low to moderate density, there was an increase in depressive symptoms, but depressive symptoms slightly declined in census tracts with the highest density of Hispanics. Furthermore, gender significantly moderated the relation between Hispanic ethnic density and 1-year depressive symptom severity, such that Hispanic ethnic density was significantly associated with increased depressive symptom severity for female Hispanic patients with ACS, but not for male Hispanic patients. Previous research suggests that ethnic density may be protective against depression in Hispanic enclaves; however, our findings suggest a non-linear ethnic density effect and an overall more complex association between ethnic density and depression. These data add to a growing body of literature on the effects of sociodemographic and contextual factors on health. PMID:26407692
Paleoindian demography and the extraterrestrial impact hypothesis
Buchanan, Briggs; Collard, Mark; Edinborough, Kevan
2008-01-01
Recently it has been suggested that one or more large extraterrestrial (ET) objects struck northern North America 12,900 ± 100 calendar years before present (calBP) [Firestone RB, et al. (2007) Proc Natl Acad Sci USA 104: 16016–16021]. This impact is claimed to have triggered the Younger Dryas major cooling event and resulted in the extinction of the North American megafauna. The impact is also claimed to have caused major cultural changes and population decline among the Paleoindians. Here, we report a study in which ≈1,500 radiocarbon dates from archaeological sites in Canada and the United States were used to test the hypothesis that the ET impact resulted in a population decline among the Paleoindians. Following recent studies [e.g., Gamble C, Davies W, Pettitt P, Hazelwood L, Richards M (2005) Camb Archaeol J 15:193–223], the summed probability distribution of the calibrated dates was used to identify probable changes in human population size between 15,000 and 9,000 calBP. Subsequently, potential biases were evaluated by modeling and spatial analysis of the dated occupations. The results of the analyses were not consistent with the predictions of the extraterrestrial impact hypothesis. No evidence of a population decline among the Paleoindians at 12,900 ± 100 calBP was found. Thus, minimally, the study suggests the extraterrestrial impact hypothesis should be amended. PMID:18697936
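A stripped-down sketch of a summed probability distribution (SPD); real analyses calibrate each 14C date against a calibration curve such as IntCal rather than using the normal densities assumed here:

```python
import numpy as np

rng = np.random.default_rng(0)
dates = rng.uniform(9_000, 15_000, 1_500)   # hypothetical calibrated medians (cal BP)
errors = np.full_like(dates, 80.0)          # hypothetical 1-sigma errors (yr)
grid = np.arange(9_000, 15_001, 10.0)       # 10-yr evaluation grid

# Sum the per-date densities; the SPD serves as a proxy for population size.
spd = sum(np.exp(-0.5 * ((grid - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
          for m, s in zip(dates, errors))
spd /= spd.sum() * 10.0                     # normalize to unit area on the grid
```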
The late Neandertal supraorbital fossils from Vindija Cave, Croatia: a biased sample?
Ahern, James C M; Lee, Sang-Hee; Hawks, John D
2002-09-01
The late Neandertal sample from Vindija (Croatia) has been described as transitional between the earlier Central European Neandertals from Krapina (Croatia) and modern humans. However, the morphological differences indicating this transition may rather be the result of different sex and/or age compositions between the samples. This study tests the hypothesis that the metric differences between the Krapina and Vindija supraorbital samples are due to sampling bias. We focus upon the supraorbital region because past studies have posited this region as particularly indicative of the Vindija sample's transitional nature. Furthermore, the supraorbital region varies significantly with both age and sex. We analyzed four chords and two derived indices of supraorbital torus form as defined by Smith & Ranyard (1980, Am. J. Phys. Anthrop. 93, pp. 589-610). For each variable, we analyzed relative sample bias of the Krapina and Vindija samples using three sampling methods. In order to test the hypothesis that the Vindija sample contains an over-representation of females and/or young while the Krapina sample is normal or also female/young biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a Krapina-based population. In order to test the hypothesis that the Vindija sample is female/young biased while the Krapina sample is male/old biased, we determined the probability of drawing a sample of the same size as and with a mean equal to or less than Vindija's from a generated population whose mean is halfway between Krapina's and Vindija's. Finally, in order to test the hypothesis that the Vindija sample is normal while the Krapina sample contains an over-representation of males and/or old, we determined the probability of drawing a sample of the same size as and with a mean equal to or greater than Krapina's from a Vindija-based population. Unless we assume that the Vindija sample is female/young and the Krapina sample is male/old biased, our results falsify the hypothesis that the metric differences between the Krapina and Vindija samples are due to sample bias.
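The sampling tests have a simple Monte Carlo form; a sketch with hypothetical parameter values (the study draws from populations built on the actual measurements):

```python
import numpy as np

rng = np.random.default_rng(42)
krapina_mean, krapina_sd = 12.0, 2.0   # hypothetical torus chord statistics (mm)
vindija_mean, n_vindija = 10.5, 6      # hypothetical Vindija sample

# P(a Krapina-based sample of Vindija's size has a mean <= Vindija's mean):
sims = rng.normal(krapina_mean, krapina_sd, size=(100_000, n_vindija))
p_value = np.mean(sims.mean(axis=1) <= vindija_mean)
print(p_value)
```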
Integrating resource selection information with spatial capture-recapture
Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.
2013-01-01
Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.
ERIC Educational Resources Information Center
Gray, Shelley; Pittman, Andrea; Weinhold, Juliet
2014-01-01
Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…
ERIC Educational Resources Information Center
van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.
2016-01-01
The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…
Exploring the relationship between population density and maternal health coverage.
Hanlon, Michael; Burstein, Roy; Masters, Samuel H; Zhang, Raymond
2012-11-21
Delivering health services to dense populations is more practical than to dispersed populations, other factors constant. This engenders the hypothesis that population density positively affects coverage rates of health services. This hypothesis has been tested indirectly for some services at a local level, but not at a national level. We use cross-sectional data to conduct cross-country, OLS regressions at the national level to estimate the relationship between population density and maternal health coverage. We separately estimate the effect of two measures of density on three population-level coverage rates (6 tests in total). Our coverage indicators are the fraction of the maternal population completing four antenatal care visits and the utilization rates of both skilled birth attendants and in-facility delivery. The first density metric we use is the percentage of a population living in an urban area. The second metric, which we denote as a density score, is a relative ranking of countries by population density. The score's calculation discounts a nation's uninhabited territory under the assumption that those areas are irrelevant to service delivery. We find significantly positive relationships between our maternal health indicators and density measures. On average, a one-unit increase in our density score is equivalent to a 0.2% increase in coverage rates. Countries with dispersed populations face higher burdens to achieve multinational coverage targets such as the United Nations' Millennium Development Goals.
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ2 distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
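For context, the non-central χ2 pdf is most often written as a Poisson-weighted mixture of central χ2 densities (the textbook infinite series from which finite-sum reductions such as the paper's are derived):

```latex
f_{k,\lambda}(x) \;=\; \sum_{j=0}^{\infty} \frac{e^{-\lambda/2}\,(\lambda/2)^{j}}{j!}\; f_{\chi^{2}_{k+2j}}(x), \qquad x > 0,
```

where k is the number of degrees of freedom and λ the non-centrality parameter.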
Assessing hypotheses about nesting site occupancy dynamics
Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle
2011-01-01
Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.
Goodman and Kruskal's TAU-B Statistics: A Fortran-77 Subroutine.
ERIC Educational Resources Information Center
Berry, Kenneth J.; Mielke, Paul W., Jr.
1986-01-01
An algorithm and associated FORTRAN-77 computer subroutine are described for computing Goodman and Kruskal's tau-b statistic along with the associated nonasymptotic probability value under the null hypothesis tau = 0. (Author)
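For readers without FORTRAN-77 at hand, a minimal sketch of the tau statistic itself (the subroutine's nonasymptotic probability value is a separate computation not shown here):

```python
import numpy as np

def goodman_kruskal_tau(table):
    """Goodman and Kruskal's tau for predicting column categories from rows."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    row, col = t.sum(axis=1), t.sum(axis=0)
    v_y = 1.0 - np.sum((col / n) ** 2)              # Gini variation of columns
    ev = 1.0 - np.sum(t ** 2 / (n * row[:, None]))  # expected variation given rows
    return (v_y - ev) / v_y

print(goodman_kruskal_tau([[30, 10], [5, 25]]))     # ~0.33 for this toy table
```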
Gary A. Ritchie; James Keeley; Barbara J. Bond
2007-01-01
Coastal Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) seedlings, when planted in a reforestation setting, exhibit early height and diameter growth that is inversely proportional to planting density. One hypothesis to explain this observation is that they are able to detect the presence of nearby trees using phytochrome by sensing the ratio of...
Mercader, R J; Siegert, N W; McCullough, D G
2012-02-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
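The core detection arithmetic is worth making explicit; under a toy independence assumption (each sampled tree independently registers as infested with probability p, which rises with larval density), detection probability grows with sample size m as:

```python
def detection_probability(p, m):
    """P(at least one of m sampled trees is detected as infested)."""
    return 1.0 - (1.0 - p) ** m

# At low-density sites p is tiny, so even large samples detect poorly:
print(detection_probability(0.02, 50))    # ~0.64
print(detection_probability(0.002, 50))   # ~0.10
```

This independence model is a simplification of the authors' survey-based estimates, but it shows why low-density populations are so hard to detect.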
On Schrödinger's bridge problem
NASA Astrophysics Data System (ADS)
Friedland, S.
2017-11-01
In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
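A minimal numerical sketch of the classical (matrix) case via Sinkhorn-style alternating scaling; the function name and convergence scheme are illustrative, not the paper's construction:

```python
import numpy as np

def scale_to_stochastic(A, p, q, n_iter=1000):
    """Scale positive A to a column-stochastic B = D1 @ A @ D2 with B @ p = q."""
    # Equivalent problem: find M = diag(u) @ A @ diag(v) with row sums q and
    # column sums p; then B = M / p (column-wise) is the desired channel.
    u = np.ones(A.shape[0])
    v = np.ones(A.shape[1])
    for _ in range(n_iter):
        u = q / (A @ v)
        v = p / (A.T @ u)
    M = u[:, None] * A * v[None, :]
    return M / p[None, :]

rng = np.random.default_rng(0)
A = rng.random((4, 4)) + 0.1                # positive square matrix
p = np.full(4, 0.25)
q = np.array([0.1, 0.2, 0.3, 0.4])
B = scale_to_stochastic(A, p, q)
print(B.sum(axis=0), B @ p)                 # columns sum to 1, and B p = q
```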
Density probability distribution functions of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2008-10-01
In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
Spatial distribution of important geotechnical parameter named compression modulus Es contributes considerably to the understanding of the underlying geological processes and the adequate assessment of the Es mechanics effects for differential settlement of large continuous structure foundation. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration test (CPT) with borehole experiments. To achieve such a task, the Es distribution of stratum of silty clay in region A of China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method integrates rigorously and efficiently multi-precision of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. Spatial prior multivariate probability density function (PDF) and likelihood PDF of the CPT positions were built by borehole experiments and the potential value of the prediction point, then, preceding numerical integration on the CPT probability density curves, the posterior probability density curve of the prediction point would be calculated by the Bayesian reverse interpolation framework. The results were compared between Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences were also discussed between single CPT samplings of normal distribution and simulated probability density curve based on maximum entropy theory. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into interpolation process, whereas more informative estimations are generated by considering CPT Uncertainty for the estimation points. Calculation illustrates the significance of stochastic Es characterization in a stratum, and identifies limitations associated with inadequate geostatistical interpolation techniques. This characterization results will provide a multi-precision information assimilation method of other geotechnical parameters.
NASA Astrophysics Data System (ADS)
Carmichael, J.
2016-12-01
Waveform correlation detectors used in seismic monitoring scan multichannel data to test two competing hypotheses: that the data contain (1) a noisy, amplitude-scaled version of a template waveform, or (2) only noise. In reality, seismic wavefields include signals triggered by non-target sources (background seismicity) and target signals that are only partially correlated with the waveform template. We reformulate the waveform correlation detector hypothesis test to accommodate deterministic uncertainty in template/target waveform similarity and thereby derive a new detector from convex set projections (the "cone detector") for use in explosion monitoring. Our analyses give probability density functions that quantify the detectors' degraded performance with decreasing waveform similarity. We then apply our results to three announced North Korean nuclear tests and use International Monitoring System (IMS) arrays to determine the probability that low magnitude, off-site explosions can be reliably detected with a given waveform template. We demonstrate that cone detectors provide (1) an improved predictive capability over correlation detectors to identify such spatially separated explosive sources, (2) competitive detection rates, and (3) reduced false alarms on background seismicity. [Figure: observed and predicted receiver operating characteristic (ROC) curves versus semi-empirical explosion magnitude for the correlation statistic r(x) (panel a) and the cone statistic s(x) (panel b); shaded regions give predicted performance under noise conditions recorded over 24 hr on 8 October 2006, stair plots give the empirical detection rates, and error bars show the demeaned daily range in observed detection probability.]
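The binary test underlying both detectors reduces to thresholding a normalized cross-correlation; a minimal sketch (template and threshold values are illustrative):

```python
import numpy as np

def correlation_statistic(window, template):
    """Normalized correlation r(x) between a data window and a waveform template."""
    x = window - window.mean()
    t = template - template.mean()
    return float(np.dot(x, t) / (np.linalg.norm(x) * np.linalg.norm(t)))

# Declare a detection when r(x) exceeds a threshold set from the noise model:
# detect = correlation_statistic(window, template) > 0.6
```

The cone detector relaxes this test: instead of requiring the data to lie along the single ray of amplitude-scaled templates, it accepts data within a convex cone of waveforms around the template, which is how it tolerates partial template/target similarity.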
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
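A schematic version of steps (i) and (ii), combining kernel density maps with elicited weights (coordinates, bandwidths, and weights below are placeholders, not the elicited values):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# One (2, N) array of past vent coordinates (km) per volcanological data set.
datasets = [rng.normal(0.0, 1.5, (2, 40)), rng.normal(0.5, 1.0, (2, 25))]
weights = [0.6, 0.4]   # expert-elicited weights (illustrative)

xx, yy = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
grid = np.vstack([xx.ravel(), yy.ravel()])

# Weighted linear combination of Gaussian kernel density maps:
pdf = sum(w * gaussian_kde(d)(grid) for w, d in zip(weights, datasets))
prob_map = (pdf / pdf.sum()).reshape(xx.shape)   # vent-opening probability map
```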
Influence of weather on low larkspur (Delphinium nuttallianum) density
USDA-ARS?s Scientific Manuscript database
Delphinium nuttallianum (low larkspur) causes serious cattle losses on mountain rangelands in western North America. Risk of cattle deaths is related to density of low larkspurs. Our hypothesis was that warmer winter/spring conditions, coupled with below average precipitation, would result in reduc...
Labor efficiency and intensity of land use in rice production: an example from Kalimantan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padoch, C.
1986-09-01
The "Boserup hypothesis" contends that land-intensive systems of agriculture will be adopted only when high population density precludes the use of land-extensive methods. In the Kerayan District of East Kalimantan (Indonesia) the Lun Dayeh practice permanent-field rice cultivation despite very low human densities. An examination of the relative labor efficiencies of shifting and permanent-field agriculture in the Kerayan, as well as of local environmental and historical variables, explains why this "anomalous" situation exists. It is argued that since relative success in production of rice by shifting- and permanent-field irrigated methods depends on many natural and social conditions other than levels of population density, the "environment-free" Boserup hypothesis cannot adequately explain or predict the occurrence of particular forms of rice agriculture.
Fractional Brownian motion with a reflecting wall.
Wada, Alexander H O; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ⟨x²⟩ ∼ t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier, leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
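A compact way to reproduce the setup (exact covariance sampling is cubic in path length, so this sketch suits short paths; the reflection rule x → |x| is one common implementation of the wall, assumed here rather than taken from the paper):

```python
import numpy as np

def reflected_fbm_path(H, n_steps, seed=0):
    """One fractional Brownian motion path reflected at x = 0 (alpha = 2H)."""
    rng = np.random.default_rng(seed)
    k = np.arange(n_steps)
    # Autocovariance of fractional Gaussian noise (the fBm increments):
    gamma = 0.5 * ((k + 1.0) ** (2 * H) - 2.0 * k ** (2 * H)
                   + np.abs(k - 1.0) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    incr = np.linalg.cholesky(cov + 1e-10 * np.eye(n_steps)) @ rng.standard_normal(n_steps)
    x, path = 0.0, []
    for dx in incr:
        x = abs(x + dx)   # reflecting wall at the origin
        path.append(x)
    return np.array(path)
```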
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
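For reference, the modified Rician density commonly used for off-axis AO speckle is

```latex
p_{\mathrm{MR}}(I) \;=\; \frac{1}{I_c}\,
\exp\!\Big(\!-\frac{I + I_s}{I_c}\Big)\,
I_0\!\Big(\frac{2\sqrt{I\,I_s}}{I_c}\Big), \qquad I \ge 0,
```

where I_s is the deterministic (diffraction) intensity, I_c the speckle intensity, and I_0 the modified Bessel function of the first kind; the paper's point is that this form fails on axis, where a gamma-based law is needed instead.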
The Butterflies of Barro Colorado Island, Panama: Local Extinction since the 1930s.
Basset, Yves; Barrios, Héctor; Segar, Simon; Srygley, Robert B; Aiello, Annette; Warren, Andrew D; Delgado, Francisco; Coronado, James; Lezcano, Jorge; Arizala, Stephany; Rivera, Marleny; Perez, Filonila; Bobadilla, Ricardo; Lopez, Yacksecari; Ramirez, José Alejandro
2015-01-01
Few data are available about the regional or local extinction of tropical butterfly species. When confirmed, local extinction was often due to the loss of host-plant species. We used published lists and recent monitoring programs to evaluate changes in butterfly composition on Barro Colorado Island (BCI, Panama) between an old (1923-1943) and a recent (1993-2013) period. Although 601 butterfly species have been recorded from BCI during the 1923-2013 period, we estimate that 390 species are currently breeding on the island, including 34 cryptic species, currently only known by their DNA Barcode Index Number. Twenty-three butterfly species that were considered abundant during the old period could not be collected during the recent period, despite a much higher sampling effort in recent times. We consider these species locally extinct from BCI and they conservatively represent 6% of the estimated local pool of resident species. Extinct species represent distant phylogenetic branches and several families. The butterfly traits most likely to influence the probability of extinction were host growth form, wing size and host specificity, independently of the phylogenetic relationships among butterfly species. On BCI, most likely candidates for extinction were small hesperiids feeding on herbs (35% of extinct species). However, contrary to our working hypothesis, extinction of these species on BCI cannot be attributed to loss of host plants. In most cases these host plants remain extant, but they probably subsist at lower or more fragmented densities. Coupled with low dispersal power, this reduced availability of host plants has probably caused the local extinction of some butterfly species. Many more bird than butterfly species have been lost from BCI recently, confirming that small preserves may be far more effective at conserving invertebrates than vertebrates and, therefore, should not necessarily be neglected from a conservation viewpoint.
Physical Activity and Change in Mammographic Density
Conroy, Shannon M.; Butler, Lesley M.; Harvey, Danielle; Gold, Ellen B.; Sternfeld, Barbara; Oestreicher, Nina; Greendale, Gail A.; Habel, Laurel A.
2010-01-01
One potential mechanism by which physical activity may protect against breast cancer is by decreasing mammographic density. Percent mammographic density, the proportion of dense breast tissue area to total breast area, declines with age and is a strong risk factor for breast cancer. The authors hypothesized that women who were more physically active would have a greater decline in percent mammographic density with age, compared with less physically active women. The authors tested this hypothesis using longitudinal data (1996–2004) from 722 participants in the Study of Women's Health Across the Nation (SWAN), a multiethnic cohort of women who were pre- and early perimenopausal at baseline, with multivariable, repeated-measures linear regression analyses. During an average of 5.6 years, the mean annual decline in percent mammographic density was 1.1% (standard deviation = 0.1). A 1-unit increase in total physical activity score was associated with a weaker annual decline in percent mammographic density by 0.09% (standard error = 0.03; P = 0.01). Physical activity was inversely associated with the change in nondense breast area (P < 0.01) and not associated with the change in dense breast area (P = 0.17). Study results do not support the hypothesis that physical activity reduces breast cancer through a mechanism that includes reduced mammographic density. PMID:20354074
Evolution of Swarming Behavior Is Shaped by How Predators Attack.
Olson, Randal S; Knoester, David B; Adami, Christoph
2016-01-01
Animal grouping behaviors have been widely studied due to their implications for understanding social intelligence, collective cognition, and potential applications in engineering, artificial intelligence, and robotics. An important biological aspect of these studies is discerning which selection pressures favor the evolution of grouping behavior. In the past decade, researchers have begun using evolutionary computation to study the evolutionary effects of these selection pressures in predator-prey models. The selfish herd hypothesis states that concentrated groups arise because prey selfishly attempt to place their conspecifics between themselves and the predator, thus causing an endless cycle of movement toward the center of the group. Using an evolutionary model of a predator-prey system, we show that how predators attack is critical to the evolution of the selfish herd. Following this discovery, we show that density-dependent predation provides an abstraction of Hamilton's original formulation of domains of danger. Finally, we verify that density-dependent predation provides a sufficient selective advantage for prey to evolve the selfish herd in response to predation by coevolving predators. Thus, our work corroborates Hamilton's selfish herd hypothesis in a digital evolutionary model, refines the assumptions of the selfish herd hypothesis, and generalizes the domain of danger concept to density-dependent predation.
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.
2011-01-01
Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne
2011-01-01
Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.
Randomized path optimization for the mitigated counter detection of UAVs
2017-06-01
... using Bayesian filtering. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal ... algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. We ...
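The Gaussian comparison mentioned here has a closed form; a minimal sketch (the filtering scheme that produces the terminal-location density is not shown):

```python
import numpy as np

def kl_gaussian(mu0, var0, mu1, var1):
    """KL divergence D(N(mu0, var0) || N(mu1, var1)) for univariate normals."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

print(kl_gaussian(0.0, 1.0, 0.0, 1.0))   # 0.0: identical densities
print(kl_gaussian(0.0, 1.0, 2.0, 1.0))   # 2.0: mean offset dominates
```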
Korman, Josh; Yard, Mike
2017-01-01
Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.
Wavefronts, actions and caustics determined by the probability density of an Airy beam
NASA Astrophysics Data System (ADS)
Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón
2018-07-01
The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of: caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maxima of the probability density of an Airy beam determines a Hamiltonian system.
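The Berry and Balazs packet referred to at the end has, in units ℏ = m = 1 and up to normalization,

```latex
\psi(x,t) \;=\; \operatorname{Ai}\!\Big(B\big(x - \tfrac{B^{3}t^{2}}{4}\big)\Big)\,
\exp\!\Big(\tfrac{i B^{3} t}{2}\big(x - \tfrac{B^{3}t^{2}}{6}\big)\Big),
```

so the probability density |ψ|² = Ai²(B(x − B³t²/4)) propagates without spreading, and its maxima trace the parabolic caustic x = B³t²/4 shifted by the positions of the Airy function's maxima.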
Piertney, Stuart B; Lambin, Xavier; Maccoll, Andrew D C; Lock, Kerry; Bacon, Philip J; Dallas, John F; Leckie, Fiona; Mougeot, Francois; Racey, Paul A; Redpath, Steve; Moss, Robert
2008-05-01
Populations of red grouse (Lagopus lagopus scoticus) undergo regular multiannual cycles in abundance. The 'kinship hypothesis' posits that such cycles are caused by changes in kin structure among territorial males producing delayed density-dependent changes in aggressiveness, which in turn influence recruitment and regulate density. The kinship hypothesis makes several specific predictions about the levels of kinship, aggressiveness and recruitment through a population cycle: (i) kin structure will build up during the increase phase of a cycle, but break down prior to peak density; (ii) kin structure influences aggressiveness, such that there will be a negative relationship between kinship and aggressiveness over the years; (iii) as aggressiveness regulates recruitment and density, there will be a negative relationship between aggressiveness in one year and both recruitment and density in the next; (iv) as kin structure influences recruitment via an affect on aggressiveness, there will be a positive relationship between kinship in one year and recruitment the next. Here we test these predictions through the course of an 8-year cycle in a natural population of red grouse in northeast Scotland, using microsatellite DNA markers to resolve changing patterns of kin structure, and supra-orbital comb height of grouse as an index of aggressiveness. Both kin structure and aggressiveness were dynamic through the course of the cycle, and changing patterns were entirely consistent with the expectations of the kinship hypothesis. Results are discussed in relation to potential drivers of population regulation and implications of dynamic kin structure for population genetics.
Covariance hypotheses for LANDSAT data
NASA Technical Reports Server (NTRS)
Decell, H. P.; Peters, C.
1983-01-01
Two covariance hypotheses are considered for LANDSAT data acquired by sampling fields, one an autoregressive covariance structure and the other the hypothesis of exchangeability. A minimum entropy approximation of the first structure by the second is derived and shown to have desirable properties for incorporation into a mixture density estimation procedure. Results of a rough test of the exchangeability hypothesis are presented.
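Both hypothesized covariance structures have simple closed forms, so they are easy to state concretely. A sketch with synthetic parameters follows; the moment-matching step at the end is a simpler stand-in for, not a reproduction of, the paper's minimum entropy approximation.

```python
import numpy as np

def ar1_cov(p, rho, sigma2=1.0):
    """Autoregressive covariance: Sigma_ij = sigma2 * rho**|i - j|."""
    idx = np.arange(p)
    return sigma2 * rho ** np.abs(np.subtract.outer(idx, idx))

def exchangeable_cov(p, rho, sigma2=1.0):
    """Exchangeable covariance: common variance, common correlation."""
    return sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))

# Approximate an AR(1) structure by an exchangeable one by matching the
# average off-diagonal correlation (illustrative, not minimum entropy).
p, rho = 4, 0.6
A = ar1_cov(p, rho)
rho_bar = A[~np.eye(p, dtype=bool)].mean()
E = exchangeable_cov(p, rho_bar)
print(np.round(A, 3))
print(np.round(E, 3))
```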
Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.
Guo, Lian; Radisic, Aleksandar; Searson, Peter C
2005-12-22
Nucleation and growth during bulk electrodeposition are studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
A Tutorial in Bayesian Potential Outcomes Mediation Analysis.
Miočević, Milica; Gonzalez, Oscar; Valente, Matthew J; MacKinnon, David P
2018-01-01
Statistical mediation analysis is used to investigate intermediate variables in the relation between independent and dependent variables. Causal interpretation of mediation analyses is challenging because randomization of subjects to levels of the independent variable does not rule out the possibility of unmeasured confounders of the mediator to outcome relation. Furthermore, commonly used frequentist methods for mediation analysis compute the probability of the data given the null hypothesis, which is not the probability of a hypothesis given the data as in Bayesian analysis. Under certain assumptions, applying the potential outcomes framework to mediation analysis allows for the computation of causal effects, and statistical mediation in the Bayesian framework gives indirect effects probabilistic interpretations. This tutorial combines causal inference and Bayesian methods for mediation analysis so the indirect and direct effects have both causal and probabilistic interpretations. Steps in Bayesian causal mediation analysis are shown in the application to an empirical example.
NASA Astrophysics Data System (ADS)
Audenaert, Koenraad M. R.; Mosonyi, Milán
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ1, …, σr. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ1, …, σr), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, C(σ1, …, σr) = min_{j<k} C(σj, σk).
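A small numerical sketch of this quantity for concrete states. It uses the standard expression for the binary quantum Chernoff divergence, C(ρ, σ) = -log min over 0 ≤ s ≤ 1 of Tr[ρ^s σ^(1-s)]; the example density matrices are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mat_pow(rho, s):
    """Fractional power of a positive semidefinite matrix."""
    w, v = np.linalg.eigh(rho)
    return (v * np.clip(w, 0, None)**s) @ v.conj().T

def chernoff_binary(rho, sigma):
    """Binary quantum Chernoff divergence C(rho, sigma)."""
    f = lambda s: np.trace(mat_pow(rho, s) @ mat_pow(sigma, 1 - s)).real
    return -np.log(minimize_scalar(f, bounds=(0, 1), method="bounded").fun)

def chernoff_multi(states):
    """Multi-hypothesis bound: minimum over all pairs of states."""
    return min(chernoff_binary(a, b)
               for i, a in enumerate(states) for b in states[i + 1:])

# Three qubit density matrices (arbitrary example states).
s1 = np.array([[0.9, 0.0], [0.0, 0.1]])
s2 = np.array([[0.5, 0.2], [0.2, 0.5]])
s3 = np.array([[0.3, 0.0], [0.0, 0.7]])
print(chernoff_multi([s1, s2, s3]))
```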
Stewart, Heather; Massoudieh, Arash; Gellis, Allen C.
2015-01-01
A Bayesian chemical mass balance (CMB) approach was used to assess the contribution of potential sources for fluvial samples from Laurel Hill Creek in southwest Pennsylvania. The Bayesian approach provides joint probability density functions of the sources' contributions considering the uncertainties due to source and fluvial sample heterogeneity and measurement error. Both elemental profiles of sources and fluvial samples and 13C and 15N isotopes were used for source apportionment. The sources considered include stream bank erosion, forest, roads and agriculture (pasture and cropland). Agriculture was found to have the largest contribution, followed by stream bank erosion. Also, road erosion was found to have a significant contribution in three of the samples collected during lower-intensity rain events. The source apportionment was performed with and without isotopes. The results were largely consistent; however, the use of isotopes was found to slightly increase the uncertainty in most of the cases. The correlation analysis between the contributions of sources shows strong correlations between stream bank and agriculture, whereas roads and forest seem to be less correlated to other sources. Thus, the method was better able to estimate road and forest contributions independently. The hypothesis that the contributions of sources are not seasonally changing was tested by assuming that all ten fluvial samples had the same source contributions. This hypothesis was rejected, demonstrating a significant seasonal variation in the sources of sediments in the stream.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data.
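A compressed sketch of the Monte Carlo chain from sonar equation to detection probability. Spherical spreading for transmission loss and a logistic detector curve are stand-in assumptions, and all distributions below are illustrative rather than the paper's literature-derived inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

SL = rng.normal(200, 10, n)       # source level, dB re 1 uPa (assumed)
r = rng.uniform(100, 4000, n)     # range to the animal, m (assumed)
TL = 20 * np.log10(r)             # spherical-spreading transmission loss
NL = rng.normal(70, 5, n)         # noise level, dB (assumed)

snr = SL - TL - NL                # passive sonar equation (simplified)

# Detector characterization: probability of detection as a smooth
# logistic function of SNR (an assumed curve, fit from data in practice).
p_det = 1 / (1 + np.exp(-(snr - 20) / 3))

print(p_det.mean())               # average per-click detection probability
```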
Seasonal comparison of aquatic macroinvertebrate assemblages in a flooded coastal freshwater marsh
Kang, Sung-Ryong; King, Sammy L.
2013-01-01
Marsh flooding and drying may be important factors affecting aquatic macroinvertebrate density and distribution in coastal freshwater marshes. Limited availability of water as a result of drying in emergent marsh may decrease density, taxonomic diversity, and taxa richness. The principal objectives of this study are to characterize the seasonal aquatic macroinvertebrate assemblage in a freshwater emergent marsh and compare aquatic macroinvertebrate species composition, density, and taxonomic diversity to that of freshwater marsh ponds. We hypothesize that 1) freshwater emergent marsh has lower seasonal density and taxonomic diversity compared to that of freshwater marsh ponds; and 2) freshwater emergent marsh has lower taxa richness than freshwater marsh ponds. Seasonal aquatic macroinvertebrate density in freshwater emergent marsh ranged from 0 organisms/m² (summer 2009) to 91.1 ± 20.53 organisms/m² (mean ± SE; spring 2009). Density in spring was higher than in all other seasons. Taxonomic diversity did not differ and there were no unique species in the freshwater emergent marsh. Our data only partially support our first hypothesis as aquatic macroinvertebrate density and taxonomic diversity between freshwater emergent marsh and ponds did not differ in spring, fall, and winter but ponds supported higher macroinvertebrate densities than freshwater emergent marsh during summer. However, our data did not support our second hypothesis as taxa richness between freshwater emergent marsh and ponds did not statistically differ.
Implications of Cognitive Load for Hypothesis Generation and Probability Judgment
Sprenger, Amber M.; Dougherty, Michael R.; Atkins, Sharona M.; Franco-Watkins, Ana M.; Thomas, Rick P.; Lange, Nicholas; Abbs, Brandon
2011-01-01
We tested the predictions of HyGene (Thomas et al., 2008) that divided attention at both encoding and judgment should affect the degree to which participants’ probability judgments violate the principle of additivity. In two experiments, we showed that divided attention during judgment leads to an increase in subadditivity, suggesting that the comparison process for probability judgments is capacity limited. Contrary to the predictions of HyGene, a third experiment revealed that divided attention during encoding leads to an increase in subadditivity in later probability judgments made under full attention. The effect of divided attention during encoding on judgment was completely mediated by the number of hypotheses participants generated, indicating that limitations in both encoding and recall can cascade into biases in judgments.
Oak regeneration and overstory density in the Missouri Ozarks
David R. Larsen; Monte A. Metzger
1997-01-01
Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swanson, James
The Dopamine (DA) Hypothesis of ADHD (Wender, 1971; Levy, 1990) suggests that synaptic mechanisms of DA transmission may be disrupted, and specific abnormalities in DA receptors and DA transporters (DAT) have been proposed (see Swanson et al, 1998). Early studies with small samples (e.g., n = 6, Dougherty et al, 1999) used single photon emission tomography (SPECT) and the radioligand (123I Altropane) to test a theory that ADHD may be caused by an overexpression of DAT and reported 'a 70% increase in age-corrected dopamine transporter density in patients with attention deficit hyperactivity disorder compared with healthy controls' and suggested that treatment with stimulant medication decreased DAT density in ADHD patients and corrected an underlying abnormality (Krause et al, 2000). The potential importance of these findings was noted by Swanson (1999): 'If true, this is a major finding and points the way for new investigations of the primary pharmacological treatment for ADHD (with the stimulant drugs - e.g., methylphenidate), for which the dopamine transporter is the primary site of action. The potential importance of this finding demands special scrutiny'. This has been provided over the past decade using Positron Emission Tomography (PET). Brain imaging studies were conducted at Brookhaven National Laboratory (BNL) in a relatively large sample of stimulant-naive adults assessed for DAT (11C cocaine) density and DA receptors (11C raclopride) availability. These studies (Volkow et al, 2007; Volkow et al, 2009) do not confirm the hypothesis of increased DAT density and suggest the opposite (i.e., decreased rather than increased DAT density), and follow-up after treatment (Wang et al, 2010) does not confirm the hypothesis that therapeutic doses of methylphenidate decrease DAT density and suggests the opposite (i.e., increased rather than decreased DAT density). The brain regions implicated by these PET imaging studies also suggest that a motivation deficit may contribute as much as an attention deficit to the manifestation of behaviors that underlie the symptoms of ADHD.
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed for numerous physical systems. In such systems statistical properties [like probability distribution, mean square displacement (MSD), first-passage time] depend on a time span ta between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay ta. Aging weakly affects the shape of the probability density function and the MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe a different behavior of the MSD when ta ≪ t and ta ≫ t.
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a 1/P functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
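The distribution comparison is standard machinery; here is a sketch using a two-sample Kolmogorov-Smirnov test on synthetic draws from a 1/P period density (a 1/P density is uniform in log P). The real analysis would use the EPC and SB catalogues in place of the mock samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def log_uniform(lo, hi, size):
    """Draws with density proportional to 1/P on [lo, hi]."""
    return np.exp(rng.uniform(np.log(lo), np.log(hi), size))

epc_periods = log_uniform(3, 3000, 60)    # mock planet candidates
sb_periods = log_uniform(3, 3000, 120)    # mock spectroscopic binaries

# Test the hypothesis that both samples share one underlying distribution.
stat, p_value = stats.ks_2samp(epc_periods, sb_periods)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
```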
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
Our objective was to understand how often 'breakthroughs', that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured.
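A sketch of the density-estimation step, using scipy's weighted (but fixed-bandwidth) kernel density estimate as a stand-in for the weighted adaptive estimator the authors applied; effect sizes and weights below are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic treatment effects (e.g., log hazard ratios) and weights
# proportional to trial size; purely illustrative numbers.
effects = rng.normal(0.0, 0.25, 300)
weights = rng.uniform(50, 2000, 300)

# Weighted kernel density estimate of the treatment-effect distribution.
kde = gaussian_kde(effects, weights=weights)

# Probability of a "large" beneficial effect, here effect < -0.4,
# obtained by integrating the fitted density.
print(kde.integrate_box_1d(-np.inf, -0.4))
```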
Relative effects of plumage coloration and vegetation density on nest success
Miller, M.W.
1999-01-01
Many passerine species are highly dichromatic with brightly-colored males and cryptically-colored females. Bright plumage in males is commonly thought to arise as a result of sexual selection by females such that males with bright coloration possess high fitness. However, bright plumage potentially could expose males to increased predation risk. Consistent with this idea, males of many highly dichromatic passerine species do not incubate. I tested whether brightly-colored males avoid incubation to reduce the probability of visual predators locating their nest. This hypothesis predicts greater hatching success for clutches incubated by cryptically-colored individuals than by brightly-colored individuals. The Northern Cardinal (Cardinalis cardinalis) is a common dichromatic species that breeds throughout the eastern U.S. I placed two button-quail (Turnix sp.) eggs in each of 203 simulated cardinal nests. Dull brown cardboard, simulating a female cardinal, was placed over about half of all clutches. Bright red cardboard, simulating a male cardinal, was placed over the other clutches. Nest success was highest for well-concealed nests (87%) and lowest for nests in open habitat (54%). Nests containing red cardboard did not have significantly lower success than nests with brown cardboard, nor did I detect a significant color × vegetation-density interaction. My analysis may have had insufficient power to detect an effect of color on nest success; alternatively, brightly-colored males that do not incubate may achieve benefits unrelated to predation risk.
Spatial Metrics of Tumour Vascular Organisation Predict Radiation Efficacy in a Computational Model
Scott, Jacob G.
2016-01-01
Intratumoural heterogeneity is known to contribute to poor therapeutic response. Variations in oxygen tension in particular have been correlated with changes in radiation response in vitro and at the clinical scale with overall survival. Heterogeneity at the microscopic scale in tumour blood vessel architecture has been described, and is one source of the underlying variations in oxygen tension. We seek to determine whether histologic scale measures of the erratic distribution of blood vessels within a tumour can be used to predict differing radiation response. Using a two-dimensional hybrid cellular automaton model of tumour growth, we evaluate the effect of vessel distribution on cell survival outcomes of simulated radiation therapy. Using the standard equations for the oxygen enhancement ratio for cell survival probability under differing oxygen tensions, we calculate average radiation effect over a range of different vessel densities and organisations. We go on to quantify the vessel distribution heterogeneity and measure spatial organization using Ripley’s L function, a measure designed to detect deviations from complete spatial randomness. We find that under differing regimes of vessel density the correlation coefficient between the measure of spatial organization and radiation effect changes sign. This provides not only a useful way to understand the differences seen in radiation effect for tissues based on vessel architecture, but also an alternate explanation for the vessel normalization hypothesis.
Biochemical correlates in an animal model of depression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J.O.
1986-01-01
A valid animal model of depression was used to explore specific adrenergic receptor differences between rats exhibiting aberrant behavior and control groups. Preliminary experiments revealed a distinct upregulation of hippocampal beta-receptors (as compared to other brain regions) in those animals acquiring a response deficit as a result of exposure to inescapable footshock. Concurrent studies using standard receptor binding techniques showed no large changes in alpha-adrenergic, serotonergic, or dopaminergic receptor densities. This led to the hypothesis that the hippocampal beta-receptor in response-deficient animals could be correlated with the behavioral changes seen after exposure to the aversive stimulus. Normalization of the behavior through the administration of antidepressants could be expected to reverse the biochemical changes if these are related to the mechanism of action of antidepressant drugs. This study makes three important points: (1) there is a relevant biochemical change in the hippocampus of response-deficient rats which occurs in parallel to a well-defined behavior, (2) the biochemical and behavioral changes are normalized by antidepressant treatments exhibiting both serotonergic and adrenergic mechanisms of action, and (3) the mode of action of antidepressants in this model is probably a combination of serotonergic and adrenergic influences modulating the hippocampal beta-receptor. These results are discussed in relation to anatomical and biochemical aspects of antidepressant action.
Sage, R.W.; Porter, W.F.; Underwood, H.B.
2003-01-01
Herbivory, lighting regimes, and site conditions are among the most important determinants of forest regeneration success, but these are affected by a host of other factors such as weather, predation, human exploitation, pathogens, wind and fire. We draw together > 50 years of research on the Huntington Wildlife Forest in the central Adirondack Mountains of New York to explore regeneration of northern hardwoods. A series of studies, each of which focused on a single factor, failed to identify the cause of regeneration failure. However, integration of these studies led to broader understanding of the process of forest stand development and identified at least three interacting factors: lighting regime, competing vegetation and selective browsing by white-tailed deer (Odocoileus virginianus). The diverse 100-200 year-old hardwood stands present today probably reflect regeneration during periods of low deer density (< 2.0 deer/km²) and significant forest disturbance. If this hypothesis is correct, forest managers can mimic these 'natural windows of opportunity' through manipulation of a few sensitive variables in the system. Further, these manipulations can be conducted on a relatively small geographic scale. Control of deer densities on a scale of 500 ha and understory American beech (Fagus grandifolia) on a scale of < 100 ha in conjunction with an even-aged regeneration system consistently resulted in successful establishment of desirable hardwood regeneration.
On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1995-01-01
For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
Propensity, Probability, and Quantum Theory
NASA Astrophysics Data System (ADS)
Ballentine, Leslie E.
2016-08-01
Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes' theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
Learning Problem-Solving Rules as Search Through a Hypothesis Space.
Lee, Hee Seung; Betts, Shawn; Anderson, John R
2016-07-01
Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem property such as computational difficulty of the rules biased the search process and so affected learning. Experiment 2 examined the impact of examples as instructional tools and found that their effectiveness was determined by whether they uniquely pointed to the correct rule. Experiment 3 compared verbal directions with examples and found that both could guide search. The final experiment tried to improve learning by using more explicit verbal directions or by adding scaffolding to the example. While both manipulations improved learning, learning still took the form of a search through a hypothesis space of possible rules. We describe a model that embodies two assumptions: (1) the instruction can bias the rules participants hypothesize rather than directly be encoded into a rule; (2) participants do not have memory for past wrong hypotheses and are likely to retry them. These assumptions are realized in a Markov model that fits all the data by estimating two sets of probabilities. First, the learning condition induced one set of Start probabilities of trying various rules. Second, should this first hypothesis prove wrong, the learning condition induced a second set of Choice probabilities of considering various rules. These findings broaden our understanding of effective instruction and provide implications for instructional design.
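The two-probability Markov model is compact enough to simulate directly. A sketch with hypothetical rule names and probabilities; the model's key assumption, no memory for past wrong hypotheses, shows up as the memoryless re-draw after each error.

```python
import numpy as np

rng = np.random.default_rng(3)
rules = ["rule_A", "rule_B", "rule_C"]   # hypothetical solution rules
correct = "rule_A"

start = np.array([0.5, 0.3, 0.2])    # Start: first hypothesis tried
choice = np.array([0.4, 0.4, 0.2])   # Choice: re-drawn after an error

def trials_to_learn():
    """Hypotheses tried until the correct rule is found."""
    h = rng.choice(rules, p=start)
    n = 1
    while h != correct:
        h = rng.choice(rules, p=choice)   # may retry a failed rule
        n += 1
    return n

print(np.mean([trials_to_learn() for _ in range(10_000)]))
```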
PASTIS: Bayesian extrasolar planet validation - I. General framework, models, and performance
NASA Astrophysics Data System (ADS)
Díaz, R. F.; Almenara, J. M.; Santerne, A.; Moutou, C.; Lethuillier, A.; Deleuil, M.
2014-06-01
A large fraction of the smallest transiting planet candidates discovered by the Kepler and CoRoT space missions cannot be confirmed by a dynamical measurement of the mass using currently available observing facilities. To establish their planetary nature, the concept of planet validation has been advanced. This technique compares the probability of the planetary hypothesis against that of all reasonably conceivable alternative false positive (FP) hypotheses. The candidate is considered as validated if the posterior probability of the planetary hypothesis is sufficiently larger than the sum of the probabilities of all FP scenarios. In this paper, we present PASTIS, the Planet Analysis and Small Transit Investigation Software, a tool designed to perform a rigorous model comparison of the hypotheses involved in the problem of planet validation, and to fully exploit the information available in the candidate light curves. PASTIS self-consistently models the transit light curves and follow-up observations. Its object-oriented structure offers a large flexibility for defining the scenarios to be compared. The performance is explored using artificial transit light curves of planets and FPs with a realistic error distribution obtained from a Kepler light curve. We find that data support the correct hypothesis strongly only when the signal is high enough (transit signal-to-noise ratio above 50 for the planet case) and remain inconclusive otherwise. PLAnetary Transits and Oscillations of stars (PLATO) shall provide transits with high enough signal-to-noise ratio, but to establish the true nature of the vast majority of Kepler and CoRoT transit candidates additional data or strong reliance on hypothesis priors is needed.
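The validation criterion itself reduces to a posterior odds computation once each scenario's marginal likelihood is in hand. A toy sketch follows; PASTIS computes these evidences by modelling the light curve and follow-up data under each scenario, and every number below is hypothetical.

```python
# Hypothetical marginal likelihoods (evidences) and scenario priors.
evidence = {"planet": 2.0e-3, "eclipsing_binary": 4.0e-4,
            "background_EB": 1.0e-4, "triple_system": 5.0e-5}
prior = {"planet": 0.5, "eclipsing_binary": 0.3,
         "background_EB": 0.15, "triple_system": 0.05}

unnorm = {h: prior[h] * evidence[h] for h in evidence}
Z = sum(unnorm.values())
posterior = {h: p / Z for h, p in unnorm.items()}

# Validated only if the planet posterior dwarfs the summed false-positive
# probabilities (the 0.99 threshold here is illustrative).
print(round(posterior["planet"], 3), posterior["planet"] > 0.99)
```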
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Astrophysics Data System (ADS)
Kastner, S. O.; Bhatia, A. K.
1980-08-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
Laboratory-Tutorial Activities for Teaching Probability
ERIC Educational Resources Information Center
Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.
2006-01-01
We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…
Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA
Yarra, Allyson N.; Magoulick, Daniel D.
2018-01-01
Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance is strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n ≥ N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A⁻¹B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
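The ensemble is straightforward to sample, which gives a quick empirical check on the support of the eigenvalue density. A sketch follows; Haar sampling via QR of a complex Ginibre matrix with a phase correction is a standard construction.

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_unitary(m):
    """Haar-distributed unitary from QR of a complex Ginibre matrix."""
    z = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))    # fix column phases

# Eigenvalues of the top N x N sub-block of a Haar unitary from U(N + n).
N, n, samples = 3, 5, 2000
eigs = np.concatenate([
    np.linalg.eigvals(haar_unitary(N + n)[:N, :N]) for _ in range(samples)
])
print(np.abs(eigs).max())   # close to but not exceeding 1: the unit disk
```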
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
Committor of elementary reactions on multistate systems
NASA Astrophysics Data System (ADS)
Király, Péter; Kiss, Dóra Judit; Tóth, Gergely
2018-04-01
In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but the feasible data evaluation needs the projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize multiple elementary reaction committor functions or probability densities of reactive trajectories on a single plot that helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well on the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
Murn, Campbell; Holloway, Graham J
2016-10-01
Species occurring at low density can be difficult to detect and if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture, Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033) and based on this the mean number of visits required to determine within 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that low detection probabilities of some species means that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
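The reported visit count follows from the averaged detection probability with a one-line calculation, assuming independent visits as occupancy models typically do:

```python
import math

p = 0.207       # averaged per-visit detection probability from the model
target = 0.95   # desired confidence that non-detection implies absence

# Smallest n with 1 - (1 - p)**n >= target.
n = math.ceil(math.log(1 - target) / math.log(1 - p))
print(n)        # 13, matching the mean visit count reported above
```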
Exploring the relationship between population density and maternal health coverage
2012-01-01
Background Delivering health services to dense populations is more practical than to dispersed populations, other factors constant. This engenders the hypothesis that population density positively affects coverage rates of health services. This hypothesis has been tested indirectly for some services at a local level, but not at a national level. Methods We use cross-sectional data to conduct cross-country, OLS regressions at the national level to estimate the relationship between population density and maternal health coverage. We separately estimate the effect of two measures of density on three population-level coverage rates (6 tests in total). Our coverage indicators are the fraction of the maternal population completing four antenatal care visits and the utilization rates of both skilled birth attendants and in-facility delivery. The first density metric we use is the percentage of a population living in an urban area. The second metric, which we denote as a density score, is a relative ranking of countries by population density. The score’s calculation discounts a nation’s uninhabited territory under the assumption that those areas are irrelevant to service delivery. Results We find significantly positive relationships between our maternal health indicators and density measures. On average, a one-unit increase in our density score is equivalent to a 0.2% increase in coverage rates. Conclusions Countries with dispersed populations face higher burdens to achieve multinational coverage targets such as the United Nations’ Millennium Development Goals.
NASA Astrophysics Data System (ADS)
Ernst, Gerhard; Hüttemann, Andreas
2010-01-01
List of contributors; 1. Introduction Gerhard Ernst and Andreas Hüttemann; Part I. The Arrows of Time: 2. Does a low-entropy constraint prevent us from influencing the past? Mathias Frisch; 3. The past hypothesis meets gravity Craig Callender; 4. Quantum gravity and the arrow of time Claus Kiefer; Part II. Probability and Chance: 5. The natural-range conception of probability Jacob Rosenthal; 6. Probability in Boltzmannian statistical mechanics Roman Frigg; 7. Humean mechanics versus a metaphysics of powers Michael Esfeld; Part III. Reduction: 8. The crystallisation of Clausius's phenomenological thermodynamics C. Ulises Moulines; 9. Reduction and renormalization Robert W. Batterman; 10. Irreversibility in stochastic dynamics Jos Uffink; Index.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
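For a single voxel, the fusion step can be sketched with two Gaussian conditionals; the abstract does not give the functional forms, so the Gaussian choice and all numbers below are assumptions.

```python
import math

# p(HU | T1, T2 intensities) and p(HU | atlas location), both modelled
# here as Gaussians over electron density expressed in HU (illustrative).
mu_int, var_int = 250.0, 90.0**2
mu_loc, var_loc = 400.0, 150.0**2

# Product of Gaussians: precision-weighted posterior mean and variance.
w_int, w_loc = 1 / var_int, 1 / var_loc
mu_post = (w_int * mu_int + w_loc * mu_loc) / (w_int + w_loc)
var_post = 1 / (w_int + w_loc)

# The voxel is assigned the posterior mean, as in the abstract.
print(round(mu_post, 1), round(math.sqrt(var_post), 1))
```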
Can we estimate molluscan abundance and biomass on the continental shelf?
NASA Astrophysics Data System (ADS)
Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.
2017-11-01
Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
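The core phenomenon, rare dense aggregations making sparse random surveys biased low, can be reproduced with a few lines of simulation. In the sketch below, the negative binomial field and all parameters are illustrative, and sampling is with replacement for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Patchy abundance field: most cells near-empty, a few dense patches.
cells = rng.negative_binomial(n=0.3, p=0.01, size=100_000)
true_mean = cells.mean()

def survey_index(n_samples):
    """One random survey: mean over sampled cells, scaled by truth."""
    return rng.choice(cells, size=n_samples).mean() / true_mean

for n in (4, 8, 15, 30):
    idx = np.array([survey_index(n) for _ in range(2000)])
    avail = np.mean((idx > 1.25) | (idx < 0.75))   # "availability event"
    low = np.mean(idx < 0.75)                      # biased-low surveys
    print(f"n={n:2d}  P(event)={avail:.2f}  P(biased low)={low:.2f}")
```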
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.
NASA Astrophysics Data System (ADS)
Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.
2018-04-01
We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern proportional in the frequency domain to [1 + (iωτ0)^α]^(-β) with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2) we show that for 0 < α < 1 the response functions f_{α,β}(t/τ0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities g_{α,β}(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
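The frequency-domain pattern and the reparameterization are easy to evaluate numerically; here is a sketch with arbitrary parameter values (the exact time-domain expressions in the paper involve generalized hypergeometric sums and are not reproduced here).

```python
import numpy as np

def hn_freq(omega, tau0, alpha, beta):
    """Havriliak-Negami pattern: (1 + (i*omega*tau0)**alpha)**(-beta)."""
    return (1 + (1j * omega * tau0)**alpha)**(-beta)

# Reparameterization from the paper: beta = (2-q)/(q-1),
# tau0 = (q-1)**(1/alpha), with 1 < q < 2.
alpha, q = 0.5, 1.2
beta = (2 - q) / (q - 1)
tau0 = (q - 1)**(1 / alpha)

for w in np.logspace(-2, 2, 5):
    val = hn_freq(w, tau0, alpha, beta)
    print(f"omega={w:8.2f}  Re={val.real:+.4f}  Im={val.imag:+.4f}")
```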
Lindsen, Job P; de Jong, Ritske
2010-10-01
Lien, Ruthruff, Remington, & Johnston (2005) reported residual switch cost differences between stimulus-response (S-R) pairs and proposed the partial-mapping preparation (PMP) hypothesis, which states that advance preparation will typically be limited to a subset of S-R pairs because of structural capacity limitations, to account for these differences. Alternatively, the failure-to-engage (FTE) hypothesis does not allow for differences in probability of advance preparation between S-R pairs within a set; it accounts for residual switch cost differences by assuming that benefits of advance preparation may differ between S-R pairs. Three experiments were designed to test between these hypotheses. No capacity limitations of the type assumed by the PMP hypothesis were found for many participants in Experiment 1. In Experiments 2 and 3, no evidence was found for the dependency of residual switch cost differences between S-R pairs on response-stimulus interval that is predicted by the PMP hypothesis. Mixture-model analysis of reaction time distributions in Experiment 3 provided strong support for the FTE hypothesis over the PMP hypothesis. Simulation studies with a computational implementation of the FTE hypothesis showed that it is able to account in great detail for the results of the present study. Together, these results provide strong evidence against the PMP hypothesis and support the FTE hypothesis that advance preparation probabilistically fails or succeeds at the level of the task set.
The ranking probability approach and its usage in design and analysis of large-scale studies.
Kuo, Chia-Ling; Zaykin, Dmitri
2013-01-01
In experiments with many statistical tests there is need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal α-level such as 0.05 is adjusted by the number of tests, m, i.e., as 0.05/m. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed α-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability P(k) is controlled, defined as the probability of making at least k correct rejections while rejecting hypotheses with the k smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., P(1)) is equal to the power at the level α/m, to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced for convenience by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when m is very large and k is small. Precision is largely restored when three values with the respective abundances are used instead of a single typical effect size value.
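Under the reconstruction above, the ranking probability P(k) can be estimated directly by simulation; all parameters below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

m, signals, shift = 10_000, 50, 4.0   # tests, true signals, mean shift
reps, k = 500, 5

hits = 0
for _ in range(reps):
    z = rng.standard_normal(m)
    z[:signals] += shift                  # true signals get a mean shift
    p = 2 * stats.norm.sf(np.abs(z))      # two-sided P-values
    top_k = np.argsort(p)[:k]             # indices of k smallest P-values
    hits += np.sum(top_k < signals) >= k  # all k rejections correct?

# Estimated P(k): probability the k smallest P-values are all true signals.
print(hits / reps)
```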
Retraction of cold drawn polyethylene: the influence of lamellar thickeness and density
NASA Technical Reports Server (NTRS)
Falender, J. R.; Hansen, D.
1971-01-01
The role of crystal morphology in the retraction of oriented, linear polyethylene was studied utilizing samples crystallized under conditions controlled to vary, separately, lamellar crystal thickness and density. Samples were oriented in a simple shear deformation to a strain of 4.0 prior to measuring retraction tendency in creep and relaxation type tests. Characterizations of specimens were made using wide and small angle X-ray techniques. The specific morphological variations were chosen to test the hypothesis that a long range elastic restoring force can originate in conjunction with deformation of lamellar crystals and the consequent increase in lamellar crystal surface area and surface free energy. The results support this hypothesis.
Retraction of cold-drawn polyethylene - Influence of lamellar thickness and density.
NASA Technical Reports Server (NTRS)
Falender, J. R.; Hansen, D.
1972-01-01
The role of crystal morphology in the retraction of oriented linear polyethylene was studied utilizing samples crystallized under conditions controlled to vary, separately, lamellar crystal thickness and density. Samples were oriented in a simple shear deformation to a strain of 4.0 prior to measuring retraction tendency in creep- and relaxation-type tests. Characterizations of specimens were made using wide- and small-angle x-ray techniques. The specific morphological variations were chosen to test the hypothesis that a long-range elastic restoring force can originate in conjunction with deformation of lamellar crystals and the consequent increase in lamellar crystal surface area and surface free energy. The results support this hypothesis.
Balmori, Alfonso; Hallberg, Orjan
2007-01-01
During recent decades, there has been a marked decline of the house sparrow (Passer domesticus) population in the United Kingdom and in several western European countries. The aims of this study were to determine whether the population is also declining in Spain and to evaluate the hypothesis that electromagnetic radiation (microwaves) from phone antennae is correlated with the decline in the sparrow population. Between October 2002 and May 2006, point transect sampling was performed at 30 points during 40 visits to Valladolid, Spain. At each point, we carried out counts of sparrows and measured the mean electric field strength (radiofrequencies and microwaves: 1 MHz-3 GHz range). Significant declines (P = 0.0037) were observed in the mean bird density over time, and significantly low bird density was observed in areas with high electric field strength. The logarithmic regression of the mean bird density vs. field strength groups (considering field strength in 0.1 V/m increments) was R = -0.87 (P = 0.0001). The results of this article support the hypothesis that electromagnetic signals are associated with the observed decline in the sparrow population. We conclude that electromagnetic pollution may be responsible, either by itself or in combination with other factors, for the observed decline of the species in European cities during recent years. The apparently strong dependence between bird density and field strength found in this work could be used to design a more controlled study to test the hypothesis.
NASA Astrophysics Data System (ADS)
Wellons, Sarah; Torrey, Paul
2017-06-01
Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
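A rough sketch, under stand-in assumptions, of the weighting step described above: candidate galaxies at z_f are weighted by an assumed Gaussian probability density in log cumulative number density, and a progenitor property is estimated as a weighted average. The mock catalogue, the Gaussian form, and the drift/sigma parameters are all hypothetical; the paper fits these distributions to simulations.

```python
import numpy as np

rng = np.random.default_rng(1)
# mock catalogue at z_f: log cumulative number density and stellar mass
log_n = rng.uniform(-5.0, -2.0, 5000)
log_mstar = 11.0 - 1.2 * (log_n + 3.5) + rng.normal(0.0, 0.2, 5000)

# population selected at z_0 at log_n0; assumed evolution of the tracked
# population in number density space: median drift and spread by z_f
log_n0, drift, sigma = -3.5, 0.15, 0.35
weights = np.exp(-0.5 * ((log_n - (log_n0 + drift)) / sigma) ** 2)
weights /= weights.sum()

# weighted statistic of a physical property over all candidate progenitors
mean_mass = np.sum(weights * log_mstar)
print(f"weighted mean progenitor log M* = {mean_mass:.2f}")
```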
Radiative transition of hydrogen-like ions in quantum plasma
NASA Astrophysics Data System (ADS)
Hu, Hongwei; Chen, Zhanbin; Chen, Wencong
2016-12-01
At fusion plasma electron temperature and number density regimes of 1 × 10^3-1 × 10^7 K and 1 × 10^28-1 × 10^31 m^-3, respectively, the excited states and radiative transition of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable for describing the fusion plasma than the Debye screening model. The relativistic correction to bound-state energies of the low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude within the same number density regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
Community organization moderates the effect of alcohol outlet density on violence.
Pridemore, William Alex; Grubesic, Tony H
2012-12-01
There is growing evidence from multiple disciplines that alcohol outlet density is associated with community levels of assault. Based on the theoretical and empirical literatures on social organization and crime, we tested the hypothesis that the association between alcohol outlet density and neighbourhood violence rates is moderated by social organization. Using geocoded police data on assaults, geocoded data on the location of alcohol outlets, and controlling for several structural factors thought to be associated with violence rates, we tested this hypothesis employing negative binomial regression with our sample of 298 block groups in Cincinnati. Our results revealed direct effects of alcohol outlet density and social organization on assault density, and these effects held for different outlet types (i.e., off-premise, bars, restaurants) and levels of harm (i.e., simple and aggravated assaults). More importantly, we found that the strength of the outlet-assault association was significantly weaker in more socially organized communities. Subsequent analyses by level of organization revealed no effects of alcohol outlet density on aggravated assaults in organized block groups, but significant effects in disorganized block groups. We found no association between social (dis)organization and outlet density. These results clarify the community-level relationship between alcohol outlets and violence and have important implications for municipal-level alcohol policies. © London School of Economics and Political Science 2012.
Emergency medical services and congestion : urban sprawl and pre-hospital emergency care time.
DOT National Transportation Integrated Search
2009-01-01
This research measured the association between urban sprawl and emergency medical service (EMS) response time. The purpose was to test the hypothesis that features of the built environment increase the probability of delayed ambulance arrival. Using ...
New methods of testing nonlinear hypothesis using iterative NLLS estimator
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method. However, in the present research paper, a modified Wald test statistic due to Engle, Robert [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using the iterative NLLS estimator based on nonlinear studentized residuals has also been proposed. In this research article an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained the methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
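As a generic illustration of Wald-type testing of a nonlinear hypothesis after an NLLS fit (not the specific modified statistic proposed in the paper), the following sketch fits a nonlinear model with scipy and tests a nonlinear restriction via the delta method; the model, data, and restriction H0: a*b = 1 are illustrative assumptions.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
x = np.linspace(0.1, 2.0, 80)
y = 2.0 * np.exp(0.5 * x) + rng.normal(0.0, 0.1, x.size)

model = lambda x, a, b: a * np.exp(b * x)
theta, cov = optimize.curve_fit(model, x, y, p0=(1.0, 1.0))  # NLLS fit

g = theta[0] * theta[1] - 1.0            # restriction g(theta) = a*b - 1
G = np.array([theta[1], theta[0]])       # gradient of g w.r.t. (a, b)
W = g**2 / (G @ cov @ G)                 # Wald statistic, chi2(1) under H0
print(f"W = {W:.2f}, p-value = {stats.chi2.sf(W, df=1):.4f}")
```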
Epidemics in interconnected small-world networks.
Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong
2015-01-01
Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
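A minimal simulation sketch of the setup in this abstract: an SIS process on two Watts-Strogatz small-world networks joined by one-to-one interlinks. Layer sizes, rates, and the interconnection pattern are illustrative assumptions.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(3)
n, k, p_rewire = 500, 6, 0.1          # nodes per layer, degree, rewiring prob
beta, mu, steps = 0.08, 0.2, 400      # infection rate, recovery rate, steps

g1 = nx.watts_strogatz_graph(n, k, p_rewire, seed=1)
g2 = nx.watts_strogatz_graph(n, k, p_rewire, seed=2)
g = nx.disjoint_union(g1, g2)         # second layer relabelled to n..2n-1
g.add_edges_from((i, n + i) for i in range(n))   # one interlink per node

infected = set(rng.choice(2 * n, 10, replace=False))
for _ in range(steps):
    new = set(infected)
    for u in infected:
        for v in g.neighbors(u):
            if v not in infected and rng.random() < beta:
                new.add(v)                       # infection along an edge
        if rng.random() < mu:
            new.discard(u)                       # recovery back to susceptible
    infected = new
print(f"steady-state infection density ~ {len(infected) / (2 * n):.3f}")
```

Sweeping p_rewire and beta in this sketch reproduces the qualitative questions studied in the paper: how the rewiring probability shifts the epidemic threshold and the steady-state infection density.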
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Horn, J. E.; Harter, T.
2011-06-01
Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, and thus a high density of septic systems, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability of a well pumping partially septic system leachate. A detailed groundwater flow and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield depending on aquifer properties, lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and those being harmful even in low concentrations.
Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Woo, K. T.
1978-01-01
The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
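A small numeric sketch of the idea: when the prior over code-phase cells is Gaussian rather than uniform, searching cells in order of decreasing prior probability reduces the expected number of dwells relative to an end-to-end sweep. The prior width and cell count are assumptions for illustration (and slewing costs are neglected), so the reduction printed here need not match the 41% quoted above.

```python
import numpy as np
from scipy import stats

n_cells = 1000
centers = np.arange(n_cells) - n_cells / 2
prior = stats.norm.pdf(centers, scale=n_cells / 10)   # Gaussian a priori pdf
prior /= prior.sum()

# expected dwells = sum over cells of (prior mass) x (position in the search)
uniform = np.sum(prior * (np.arange(n_cells) + 1))    # sweep in index order
order = np.argsort(prior)[::-1]                       # most probable cell first
optimal = np.sum(prior[order] * (np.arange(n_cells) + 1))
print(f"expected dwells: uniform sweep {uniform:.0f}, prior-ordered {optimal:.0f}")
print(f"reduction ~ {100 * (1 - optimal / uniform):.0f}%")
```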
RADC Multi-Dimensional Signal-Processing Research Program.
1980-09-30
Report sections include problem formulation, methods of accelerating convergence, application to image deblurring, extensions, and convergence of iterative signal restoration. The image is modeled as the output of a spatial linear filter driven by white noise; when the probability density function of the white noise is known, this noise-driven linear-filter model permits development of the joint probability density function, or likelihood function, for the image.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hethcoat, Matthew G.
Natural gas development has rapidly increased within sagebrush (Artemisia spp.) dominated landscapes of the Intermountain West. Prior research in the Upper Green River Basin, Wyoming demonstrated increased nest predation of sagebrush-obligate songbirds with higher densities of natural gas wells. To better understand the mechanisms underlying this pattern, I assessed this commonly used index of oil and gas development intensity (well density) for estimating habitat transformation and predicting nest survival for songbirds breeding in energy fields during 2008-2009 and 2011-2012. We calculated landscape metrics (habitat loss, amount of edge, patch shape complexity, and mean patch size) to identify the aspect of landscape transformation most captured by well density. Well density was most positively associated with the amount of habitat loss within 1 square kilometer. Daily nest survival was relatively invariant with respect to well density for all three species. In contrast, nest survival rates of all three species consistently decreased with increased surrounding habitat loss due to energy development. Thus, although well density and habitat loss were strongly correlated, at times they provided contrasting estimates of nest survival probability. Additionally, we tested the hypothesis that surrounding habitat loss influenced local nest predation rates via increased predator activity. During 2011-2012, we surveyed predators and monitored songbird nests at twelve sites in western Wyoming. Nine species, representing four mammalian and three avian families, were video-recorded depredating eggs and nestlings. Approximately 75% of depredation events were caused by rodents. While chipmunk (Tamias minimus) detections were negatively associated with increased habitat loss, mice (Peromyscus maniculatus and Reithrodontomys megalotis) and ground squirrels (Ictidomys tridecemlineatus and Urocitellus armatus) increased with greater surrounding habitat loss. Consistent with our predictions, nest survival significantly declined at study locations with greater predator activity for Brewer's Sparrows (Spizella breweri) and Sagebrush Sparrows (Artemisiospiza nevadensis). Our work is among the few studies which have identified mechanisms underlying increased nest predation rates, linking predation patterns, nest predators, and human-induced habitat alteration. Our results demonstrate the importance of simultaneous study of habitat change, predators, and prey in understanding the mechanisms by which evolved predator-prey relationships can be affected by human-induced rapid environmental change.
Does interference competition with wolves limit the distribution and abundance of coyotes?
Berger, Kim Murray; Gese, Eric M
2007-11-01
Interference competition with wolves Canis lupus is hypothesized to limit the distribution and abundance of coyotes Canis latrans, and the extirpation of wolves is often invoked to explain the expansion in coyote range throughout much of North America. We used spatial, seasonal and temporal heterogeneity in wolf distribution and abundance to test the hypothesis that interference competition with wolves limits the distribution and abundance of coyotes. From August 2001 to August 2004, we gathered data on cause-specific mortality and survival rates of coyotes captured at wolf-free and wolf-abundant sites in Grand Teton National Park (GTNP), Wyoming, USA, to determine whether mortality due to wolves is sufficient to reduce coyote densities. We examined whether spatial segregation limits the local distribution of coyotes by evaluating home-range overlap between resident coyotes and wolves, and by contrasting dispersal rates of transient coyotes captured in wolf-free and wolf-abundant areas. Finally, we analysed data on population densities of both species at three study areas across the Greater Yellowstone Ecosystem (GYE) to determine whether an inverse relationship exists between coyote and wolf densities. Although coyotes were the numerically dominant predator, across the GYE, densities varied spatially and temporally in accordance with wolf abundance. Mean coyote densities were 33% lower at wolf-abundant sites in GTNP, and densities declined 39% in Yellowstone National Park following wolf reintroduction. A strong negative relationship between coyote and wolf densities (β = -3.988, P < 0.005, r² = 0.54, n = 16), both within and across study sites, supports the hypothesis that competition with wolves limits coyote populations. Overall mortality of coyotes resulting from wolf predation was low, but wolves were responsible for 56% of transient coyote deaths (n = 5). In addition, dispersal rates of transient coyotes captured at wolf-abundant sites were 117% higher than for transients captured in wolf-free areas. Our results support the hypothesis that coyote abundance is limited by competition with wolves, and suggest that differential effects on survival and dispersal rates of transient coyotes are important mechanisms by which wolves reduce coyote densities.
Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Leahy, D. A.
2017-03-01
Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields a SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg, with a 1σ dispersion by a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...
Predictions of malaria vector distribution in Belize based on multispectral satellite data.
Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J
1996-03-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Predictions of malaria vector distribution in Belize based on multispectral satellite data
NASA Technical Reports Server (NTRS)
Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.
1996-01-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation
Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.
1998-01-01
We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. Age-related increase in natal philopatry was positively related to higher breeding probability of older females. Declines in natal philopatry with year of banding corresponded negatively to a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo
2018-01-01
Accurate on-line prognosis of fatigue crack propagation is of great meaning for prognostics and health management (PHM) technologies to ensure structural integrity, which is a challenging task because of uncertainties which arise from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool to deal with prognostic problems those are affected by uncertainties. However, most studies adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and the active guided wave based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed to be the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on the fatigue test of attachment lugs which are a kind of important joint components in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
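A generic sketch of a particle filter whose importance density is a mixture of the transition density and a measurement-centred density, which is the flavour of proposal this abstract describes; the scalar random-walk state model, noise levels, and mixture weight alpha are illustrative stand-ins for the paper's crack-growth model and guided-wave measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_p, alpha = 500, 0.5          # particles; weight of the measurement component
drift, q, r = 0.1, 0.05, 0.08  # state drift, process std, measurement std

def step(particles, weights, y):
    pred = particles + drift                       # transition means
    use_meas = rng.random(n_p) < alpha
    prop = np.where(use_meas,
                    rng.normal(y, r, n_p),         # measurement component
                    rng.normal(pred, q))           # transition component
    # importance weights: target p(y|x) p(x|x_prev) over the mixture proposal
    lik = stats.norm.pdf(y, prop, r)
    trans = stats.norm.pdf(prop, pred, q)
    mix = alpha * stats.norm.pdf(prop, y, r) + (1 - alpha) * trans
    w = weights * lik * trans / mix
    w /= w.sum()
    idx = rng.choice(n_p, n_p, p=w)                # multinomial resampling
    return prop[idx], np.full(n_p, 1.0 / n_p)

particles = rng.normal(1.0, 0.1, n_p)
weights = np.full(n_p, 1.0 / n_p)
truth = 1.0
for t in range(25):
    truth += drift + rng.normal(0, q)
    y = truth + rng.normal(0, r)
    particles, weights = step(particles, weights, y)
print(f"truth = {truth:.3f}, filter estimate = {particles.mean():.3f}")
```

Placing part of the proposal mass near the measurement keeps particles in regions of high likelihood, which is the degeneracy remedy the abstract motivates.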
An experimental test of whether habitat corridors affect pollen transfer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, Patricia A.; Levey, Douglas J.
Abstract. Negative effects of habitat fragmentation are thought to be diminished when habitat patches are joined by a corridor. A key assumption is that corridors facilitate exchange rates of organisms between otherwise isolated patches. If the organisms are pollinators, corridors may be important for maintaining genetically viable populations of the plants that they pollinate. We tested the hypothesis that corridors increase the movement of insect pollinators into patches of habitat and thereby increase pollen transfer for two species of plants, one pollinated by butterflies (Lantana camara) and the other by bees and wasps (Rudbeckia hirta). We worked in an experimental landscape consisting of 40 ≥1-ha patches of early-successional habitat in a matrix of forest. Within each of eight experimental units, two patches were connected by a corridor (150 × 25 m), and three were not. Patch shape varied to control for the area added by the presence of a corridor. Differences in patch shape also allowed us to test alternative hypotheses of how corridors might function. The Traditional Corridor Hypothesis posits that corridors increase immigration and emigration by functioning as movement conduits between patches. The Drift Fence Hypothesis posits that corridors function by "capturing" organisms dispersing through the matrix, redirecting them into associated habitat patches. Using fluorescent powder to track pollen, we found that pollen transfer by butterflies between patches connected by a corridor was significantly higher than between unconnected patches (all values mean ± 1 SE: 59% ± 9.2% vs. 25% ± 5.2% of flowers receiving pollen). Likewise, pollen transfer by bees and wasps was significantly higher between connected patches than between unconnected patches (30% ± 4.2% vs. 14.5% ± 2.2%). These results support the Traditional Corridor Hypothesis. There was little support, however, for the Drift Fence Hypothesis. To generalize our results to a larger scale, we measured the probability of pollen transfer by butterflies as a function of distance along a 2000 × 75 m corridor. Pollen transfer probability declined exponentially with distance and successfully predicted pollen transfer probability on the scale of our previous experiment. These results suggest that corridors facilitate pollen transfer in fragmented landscapes.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
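A minimal sketch of the first estimator discussed: a Gaussian kernel density estimate whose scaling factor (bandwidth) is chosen automatically from the sample. Here the choice is made by maximizing the leave-one-out log-likelihood over a grid, one common automatic rule; the data and the bandwidth grid are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 200)   # sample whose density we want to estimate

def loo_loglik(x, h):
    # pairwise Gaussian kernel matrix with the self-contribution removed
    d = (x[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * d**2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)
    f = k.sum(axis=1) / (x.size - 1)   # leave-one-out density at each point
    return np.log(f).sum()

grid = np.linspace(0.05, 1.5, 60)
h_best = grid[np.argmax([loo_loglik(x, h) for h in grid])]
print(f"selected bandwidth h = {h_best:.2f}")

def kde(t, x, h):
    # the resulting kernel density estimate evaluated at points t
    d = (t[:, None] - x[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))
```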
Stochastic transport models for mixing in variable-density turbulence
NASA Astrophysics Data System (ADS)
Bakosi, J.; Ristorcelli, J. R.
2011-11-01
In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal and the Monte Carlo method is used to solve the stochastic equations allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second concerns the construction of the likelihood function, which in general is predefined using a known pdf but is constructed here in a new and different manner, using the considered system itself.
Genetic variation in natural honeybee populations, Apis mellifera capensis
NASA Astrophysics Data System (ADS)
Hepburn, Randall; Neumann, Peter; Radloff, Sarah E.
2004-09-01
Genetic variation in honeybee, Apis mellifera, populations can be considerably influenced by breeding and commercial introductions, especially in areas with abundant beekeeping. However, in southern Africa apiculture is based on the capture of wild swarms, and queen rearing is virtually absent. Moreover, the introduction of European subspecies constantly failed in the Cape region. We therefore hypothesize a low human impact on genetic variation in populations of Cape honeybees, Apis mellifera capensis. A novel solution to studying genetic variation in honeybee populations based on thelytokous worker reproduction is applied to test this hypothesis. Environmental effects on metrical morphological characters of the phenotype are separated to obtain a genetic residual component. The genetic residuals are then re-calculated as coefficients of genetic variation. Characters measured included hair length on the abdomen, width and length of wax plate, and three wing angles. The data show for the first time that genetic variation in Cape honeybee populations is independent of beekeeping density and probably reflects naturally occurring processes such as gene flow due to topographic and climatic variation on a microscale.
Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-11-23
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the spooky effect where there exists arbitrary PHD mass shifting in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.
A fast ellipse extended target PHD filter using box-particle implementation
NASA Astrophysics Data System (ADS)
Zhang, Yongquan; Ji, Hongbing; Hu, Qi
2018-01-01
This paper presents a box-particle implementation of the ellipse extended target probability hypothesis density (ET-PHD) filter, called the ellipse extended target box particle PHD (EET-BP-PHD) filter, where the extended targets are described as a Poisson model developed by Gilholm et al. and the term "box" is here equivalent to the term "interval" used in interval analysis. The proposed EET-BP-PHD filter is capable of dynamically tracking multiple ellipse extended targets and estimating the target states and the number of targets, in the presence of clutter measurements, false alarms and missed detections. To derive the PHD recursion of the EET-BP-PHD filter, a suitable measurement likelihood is defined for a given partitioning cell, and the main implementation steps are presented along with the necessary box approximations and manipulations. The limitations and capabilities of the proposed EET-BP-PHD filter are illustrated by simulation examples. The simulation results show that a box-particle implementation of the ET-PHD filter can avoid the high number of particles and reduce computational burden, compared to a particle implementation of that for extended target tracking.
Camera traps reveal an apparent mutualism between a common mesocarnivore and an endangered ungulate
Cove, Michael V.; Maurer, Andrew S.; O'Connell, Allan F.
2017-01-01
Camera traps are commonly used to study mammal ecology and they occasionally capture previously undocumented species interactions. The key deer (Odocoileus virginianus clavium) is an endangered endemic subspecies of the Florida Keys, where it exists with few predators. We obtained a camera trap sequence of 80 photos in which a key deer interacted with two northern raccoons (Procyon lotor). One of the raccoons groomed the deer’s face for ∼1 min. This interaction is peculiar and appears mutualistic because the deer was not concerned and willingly remained still throughout the physical contact. Although mutualistic relationships between deer and birds are common, we are unaware of any previously documented mesocarnivore-deer mutualisms. Key deer have evolved in the absence of mammalian predators and we hypothesize that they exhibit reduced vigilance or concern when encountering other species because of predator naivety. Key deer and raccoons are commonly associated with humans and urbanization and an alternative hypothesis is that the interactions are a consequence of heightened deer density, causing a greater probability of sustained interactions with the common mesocarnivores.
Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun
2013-01-01
Obesity is related to hyperlipidemia and risk of cardiovascular disease. The health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivore men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoprotein B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases of atherogenic lipid/lipoprotein levels and the probability of coronary heart disease.
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
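A small sketch of the observation the method builds on: samples of a sinusoid taken over at least a half cycle have a characteristic arcsine-shaped histogram, so the histogram (an empirical PDF) can itself carry a symbol. The mapping of bits to carrier amplitudes below is an illustrative assumption, not the proposed modulation scheme.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2000, endpoint=False)   # one full carrier cycle
for bit, amp in [(0, 1.0), (1, 0.5)]:
    samples = amp * np.sin(2 * np.pi * t)
    hist, edges = np.histogram(samples, bins=20, density=True)
    # the arcsine PDF 1/(pi*sqrt(amp^2 - x^2)) peaks at the extremes +-amp,
    # so the histogram's support and shape identify the transmitted symbol
    print(f"bit {bit}: support [{edges[0]:.2f}, {edges[-1]:.2f}], "
          f"edge-bin density {hist[0]:.2f}")
```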
A partial differential equation for pseudocontact shift.
Charnock, G T P; Kuprov, Ilya
2014-10-07
It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light are described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
NASA Astrophysics Data System (ADS)
Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping
2015-05-01
It is important to study the effects of pedestrian crossing behaviors on traffic flow in order to address the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that considers the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, whatever the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability, and pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as vehicle density increases; when the vehicle braking probability is low, emergency braking is likely, resulting in large fluctuations of the saturated flow; the saturated flow decreases slightly with increasing pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitation of pedestrians when deciding whether to back up; the maximum flow is sensitive to the pedestrian generation probability, decreasing rapidly as it increases and becoming approximately zero when the probability exceeds 0.5. The simulations show that frequent crossing behavior has an immense influence on vehicle flow; the flow decreases and rapidly enters a serious congestion state as the pedestrian generation probability increases.
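For reference, a minimal implementation of the underlying NaSch update rules (acceleration, gap-constrained braking, random slowdown with probability p, parallel movement) on a circular road; the pedestrian-vehicle conflict rules and crosswalk coupling added by the proposed model are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
L, n_cars, v_max, p_brake = 200, 40, 5, 0.3
pos = np.sort(rng.choice(L, n_cars, replace=False))   # cell index of each car
vel = np.zeros(n_cars, dtype=int)

flow = 0
for _ in range(1000):
    gaps = (np.roll(pos, -1) - pos - 1) % L       # empty cells to car ahead
    vel = np.minimum(vel + 1, v_max)              # rule 1: accelerate
    vel = np.minimum(vel, gaps)                   # rule 2: avoid collision
    slow = rng.random(n_cars) < p_brake
    vel = np.maximum(vel - slow, 0)               # rule 3: random slowdown
    pos = (pos + vel) % L                         # rule 4: move in parallel
    flow += vel.sum()
print(f"mean flow = {flow / 1000:.1f} cells/step at density {n_cars / L:.2f}")
```

Varying n_cars in this sketch traces out the fundamental diagram whose "increasing-saturating-decreasing" shape the abstract reports for the extended model.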
Aguilera, Moisés A; Broitman, Bernardo R; Thiel, Martin
2016-07-01
Coastal urban infrastructures are proliferating across the world, but knowledge about their emergent impacts is still limited. Here, we provide evidence that urban artificial reefs have a high potential to accumulate the diverse forms of litter originating from anthropogenic activities around cities. We test the hypothesis that the structural complexity of urban breakwaters, when compared with adjacent natural rocky intertidal habitats, is a driver of anthropogenic litter accumulation. We determined litter abundances at seven sites (cities) and estimated the structural complexity in both urban breakwaters and adjacent natural habitats from northern to central Chile, spanning a latitudinal gradient of ∼15° (18°S to 33°S). Anthropogenic litter density was significantly higher in coastal breakwaters when compared to natural habitats (∼15.1 items m^-2 on artificial reefs versus 7.4 items m^-2 in natural habitats) at all study sites, a pattern that was temporally persistent. Different litter categories were more abundant on the artificial reefs than in natural habitats, with local human population density and breakwater extension contributing to increase the probabilities of litter occurrence by ∼10%. In addition, structural complexity was about two-fold higher on artificial reefs, with anthropogenic litter density being highest at intermediate levels of structural complexity. Therefore, the spatial structure characteristic of artificial reefs seems to enhance anthropogenic litter accumulation, also leading to higher residence time and degradation potential. Our study highlights the interaction between coastal urban habitat modification by establishment of artificial reefs, and pollution. This emergent phenomenon is an important issue to be considered in future management plans and the engineering of coastal ecosystems. Copyright © 2016 Elsevier Ltd. All rights reserved.
Probability of stress-corrosion fracture under random loading.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
The role of probability arguments in the history of science.
Weinert, Friedel
2010-03-01
The paper examines Wesley Salmon's claim that the primary role of plausibility arguments in the history of science is to impose constraints on the prior probability of hypotheses (in the language of Bayesian confirmation theory). A detailed look at Copernicanism and Darwinism and, more briefly, Rutherford's discovery of the atomic nucleus reveals a further and arguably more important role of plausibility arguments. It resides in the consideration of likelihoods, which state how likely a given hypothesis makes a given piece of evidence. In each case the likelihoods raise the probability of one of the competing hypotheses and diminish the credibility of its rival, and this may happen either on the basis of 'old' or 'new' evidence.
The role of demographic compensation theory in incidental take assessments for endangered species
McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts
2011-01-01
Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions as long as take is the result of unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not reduced appreciably. Demographic compensation requires some density-dependent limits on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rates. We note, however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental allowances because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. With many endangered species there is probably little known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take and implementing follow-up monitoring to assess species response may be valuable in increasing knowledge and improving future decision making.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. For the full Kalman filter the formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
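A 1D numeric sketch of the representation proposed here: a Gaussian forecast pdf for a bounded variable (e.g., sea-ice concentration in [0, 1]) becomes a truncated Gaussian plus delta masses at the bounds, so P(c = 0) and P(c = 1) can be nonzero. The forecast mean and standard deviation are illustrative values.

```python
from scipy import stats

mu, sigma = -0.1, 0.3                 # forecast mean/std (partly out of bounds)
lo, hi = 0.0, 1.0                     # physical bounds of the variable

p_lo = stats.norm.cdf(lo, mu, sigma)  # mass lumped into the delta at the lower bound
p_hi = stats.norm.sf(hi, mu, sigma)   # mass lumped into the delta at the upper bound
p_in = 1.0 - p_lo - p_hi              # mass left on the interior Gaussian part

# mean of the delta + truncated-Gaussian mixture
interior_mean = stats.truncnorm.mean((lo - mu) / sigma, (hi - mu) / sigma,
                                     loc=mu, scale=sigma)
mean = p_lo * lo + p_hi * hi + p_in * interior_mean
print(f"P(c=0) = {p_lo:.3f}, P(c=1) = {p_hi:.4f}, E[c] = {mean:.3f}")
```

Unlike plain pdf truncation, which spreads the out-of-bounds mass over the interior, this representation retains a finite probability of the variable sitting exactly at a bound.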
Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps
Adam Brandt
2015-11-15
This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form c x^k (1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
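A small check of why this density family yields linear MMSE estimates: the pdf c x^k (1-x)^m is a beta density, and with a Beta(a, b) prior on the jump rate and s jumps observed in n steps the posterior mean, i.e. the MMSE estimate, is (a + s)/(a + b + n), affine in the observed count. The prior parameters below are illustrative.

```python
# Beta prior + binomially distributed jump counts: the MMSE rate estimate is
# (a + s) / (a + b + n), a linear function of the observed number of jumps s.
a, b, n = 3.0, 5.0, 20
for s in range(0, n + 1, 5):
    mmse = (a + s) / (a + b + n)
    print(f"s = {s:2d} jumps -> MMSE rate estimate {mmse:.3f}")
# the slope 1/(a+b+n) and intercept a/(a+b+n) do not depend on s: linear.
```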
Information Density and Syntactic Repetition.
Temperley, David; Gildea, Daniel
2015-11-01
In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
Landscape characteristics influence pond occupancy by frogs after accounting for detectability
Mazerolle, M.J.; Desrochers, A.; Rochefort, L.
2005-01-01
Many investigators have hypothesized that landscape attributes such as the amount and proximity of habitat are important for amphibian spatial patterns. This has produced a number of studies focusing on the effects of landscape characteristics on amphibian patterns of occurrence in patches or ponds, most of which conclude that the landscape is important. We identified two concerns associated with these studies: one deals with their applicability to other landscape types, as most have been conducted in agricultural landscapes; the other highlights the need to account for the probability of detection. We tested the hypothesis that landscape characteristics influence spatial patterns of amphibian occurrence at ponds after accounting for the probability of detection in little-studied peatland landscapes undergoing peat mining. We also illustrated the costs of not accounting for the probability of detection by comparing our results to conventional logistic regression analyses. Results indicate that frog occurrence increased with the percent cover of ponds within 100, 250, and 1000 m, as well as the amount of forest cover within 1000 m. However, forest cover at 250 m had a negative influence on frog presence at ponds. Not accounting for the probability of detection resulted in underestimating the influence of most variables on frog occurrence, whereas a few were overestimated. Regardless, we show that conventional logistic regression can lead to different conclusions than analyses accounting for detectability. Our study is consistent with the hypothesis that landscape characteristics are important in determining the spatial patterns of frog occurrence at ponds. We strongly recommend estimating the probability of detection in field surveys, as this will increase the quality and conservation potential of models derived from such data. © 2005 by the Ecological Society of America.
A single-gene explanation for the probability of having idiopathic talipes equinovarus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rebbeck, T.R.; Buetow, K.H.; Dietz, F.R.
1993-11-01
It has been hypothesized that the pathogenesis of idiopathic talipes equinovarus (ITEV, or clubfoot) is explained by genetic regulation of development and growth. The objective of the present study was to determine whether a single Mendelian gene explains the probability of having ITEV in a sample of 143 Caucasian pedigrees from Iowa. These pedigrees were ascertained through probands with ITEV. Complex segregation analyses were undertaken using a regressive logistic model. The results of these analyses strongly rejected the hypotheses that the probability of having ITEV in these pedigrees was explained by a non-Mendelian pattern of transmission with residual sibling correlation, a nontransmitted (environmental) factor with residual sibling correlation, or residual sibling correlation alone. These results were consistent with the hypothesis that the probability of having ITEV was explained by the Mendelian segregation of a single gene with two alleles plus the effects of some unmeasured factor(s) shared among siblings. The segregation of alleles at this single Mendelian gene indicated that the disease allele A was incompletely dominant to the nondisease allele B. The disease allele A, associated with ITEV affection, was estimated to occur in the population of inference with a frequency of 0.007. After adjusting for sex-specific population incidences of ITEV, the conditional probability (penetrance) of ITEV affection given the AA, AB, and BB genotypes was computed to be 1.0, 0.039, and 0.0006, respectively. Individual pedigrees in this sample that most strongly supported the single Mendelian gene hypothesis were identified. These pedigrees are candidates for genetic linkage analyses or DNA association studies. 35 refs., 2 figs., 7 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastner, S.O.; Bhatia, A.K.
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284–500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to "taboo" probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
Affective and cognitive factors influencing sensitivity to probabilistic information.
Tyszka, Tadeusz; Sawicki, Przemyslaw
2011-11-01
In Study 1, different groups of female students were randomly assigned to one of four probabilistic information formats. Five different levels of probability of a genetic disease in an unborn child were presented to participants (within-subject factor). After the presentation of the probability level, participants were requested to indicate the acceptable level of pain they would tolerate to avoid the disease (in their unborn child), their subjective evaluation of the disease risk, and their subjective evaluation of being worried by this risk. The results of Study 1 confirmed the hypothesis that an experience-based probability format decreases the subjective sense of worry about the disease, thus, presumably, weakening the tendency to overrate the probability of rare events. Study 2 showed that for the emotionally laden stimuli, the experience-based probability format resulted in higher sensitivity to probability variations than other formats of probabilistic information. These advantages of the experience-based probability format are interpreted in terms of two systems of information processing: the rational deliberative versus the affective experiential and the principle of stimulus-response compatibility. © 2011 Society for Risk Analysis.
The Hygiene Hypothesis in the Age of the Microbiome.
Ege, Markus J
2017-11-01
The original version of the hygiene hypothesis suggested that infections transmitted early in life by "unhygienic contact" prevented allergies. Examples were endemic fecal-oral infections by viral, bacterial, or protozoic pathogens, such as hepatitis A virus, Helicobacter pylori, or Toxoplasma gondii. Later, this concept also included microorganisms beyond pathogens, such as commensals and symbionts, and the hygiene hypothesis was extended to inflammatory diseases in general. An impressive illustration of the hygiene hypothesis was found in the consistent farm effect on asthma and allergies, which has partly been attributed to immunomodulatory properties of endotoxin as emitted by livestock. Assessment of environmental microorganisms by molecular techniques suggested an additional protective effect of microbial diversity on asthma beyond atopy. Whether microbial diversity stands for a higher probability to encounter protective clusters of microorganisms or whether it is a proxy of a balanced environmental exposure remains elusive. Diversity of the mucosal microbiome of the upper airways probably reflects an undisturbed balance of beneficial microorganisms and pathogens, such as Moraxella catarrhalis, which has been associated with subsequent development of asthma and pneumonia. In addition, specific fermenters of plant fibers, such as the genera Ruminococcus and Bacteroides, have been implicated in asthma protection through production of short-chain fatty acids, volatile substances with the capability to reduce T-helper cell type 2-mediated allergic airway inflammation. Evolutionary thinking may offer a key to understanding noncommunicable inflammatory diseases as delayed adaptation to a world of fast and profound environmental changes. Better adaptation may be fostered by growing insight into the interplay between man and microbiome and an adequate choice of the environmental exposure.
Assessing tiger population dynamics using photographic capture-recapture sampling
Karanth, K.U.; Nichols, J.D.; Kumar, N.S.; Hines, J.E.
2006-01-01
Although wide-ranging, elusive, large carnivore species, such as the tiger, are of scientific and conservation interest, rigorous inferences about their population dynamics are scarce because of methodological problems of sampling populations at the required spatial and temporal scales. We report the application of a rigorous, noninvasive method for assessing tiger population dynamics to test model-based predictions about population viability. We obtained photographic capture histories for 74 individual tigers during a nine-year study involving 5725 trap-nights of effort. These data were modeled under a likelihood-based, "robust design" capture-recapture analytic framework. We explicitly modeled and estimated ecological parameters such as time-specific abundance, density, survival, recruitment, temporary emigration, and transience, using models that incorporated effects of factors such as individual heterogeneity, trap-response, and time on probabilities of photo-capturing tigers. The model estimated a random temporary emigration parameter of γ″ = γ′ = 0.10 ± 0.069 (values are estimated mean ± SE). When scaled to an annual basis, tiger survival rates were estimated at S = 0.77 ± 0.051, and the estimated probability that a newly caught animal was a transient was τ = 0.18 ± 0.11. During the period when the sampled area was of constant size, the estimated population size Nt varied from 17 ± 1.7 to 31 ± 2.1 tigers, with a geometric mean rate of annual population change estimated as λ = 1.03 ± 0.020, representing a 3% annual increase. The estimated recruitment of new animals, Bt, varied from 0 ± 3.0 to 14 ± 2.9 tigers. Population density estimates, D, ranged from 7.33 ± 0.8 tigers/100 km2 to 21.73 ± 1.7 tigers/100 km2 during the study. Thus, despite substantial annual losses and temporal variation in recruitment, the tiger density remained at relatively high levels in Nagarahole. Our results are consistent with the hypothesis that protected wild tiger populations can remain healthy despite heavy mortalities because of their inherently high reproductive potential. The ability to model the entire photographic capture history data set and incorporate reduced-parameter models led to estimates of mean annual population change that were sufficiently precise to be useful. This efficient, noninvasive sampling approach can be used to rigorously investigate the population dynamics of tigers and other elusive, rare, wide-ranging animal species in which individuals can be identified from photographs or other means.
Assessing tiger population dynamics using photographic capture-recapture sampling.
Karanth, K Ullas; Nichols, James D; Kumar, N Samba; Hines, James E
2006-11-01
Although wide-ranging, elusive, large carnivore species, such as the tiger, are of scientific and conservation interest, rigorous inferences about their population dynamics are scarce because of methodological problems of sampling populations at the required spatial and temporal scales. We report the application of a rigorous, noninvasive method for assessing tiger population dynamics to test model-based predictions about population viability. We obtained photographic capture histories for 74 individual tigers during a nine-year study involving 5725 trap-nights of effort. These data were modeled under a likelihood-based, "robust design" capture-recapture analytic framework. We explicitly modeled and estimated ecological parameters such as time-specific abundance, density, survival, recruitment, temporary emigration, and transience, using models that incorporated effects of factors such as individual heterogeneity, trap-response, and time on probabilities of photo-capturing tigers. The model estimated a random temporary emigration parameter of gamma" = gamma' = 0.10 +/- 0.069 (values are estimated mean +/- SE). When scaled to an annual basis, tiger survival rates were estimated at S = 0.77 +/- 0.051, and the estimated probability that a newly caught animal was a transient was tau = 0.18 +/- 0.11. During the period when the sampled area was of constant size, the estimated population size N(t) varied from 17 +/- 1.7 to 31 +/- 2.1 tigers, with a geometric mean rate of annual population change estimated as lambda = 1.03 +/- 0.020, representing a 3% annual increase. The estimated recruitment of new animals, B(t), varied from 0 +/- 3.0 to 14 +/- 2.9 tigers. Population density estimates, D, ranged from 7.33 +/- 0.8 tigers/100 km2 to 21.73 +/- 1.7 tigers/100 km2 during the study. Thus, despite substantial annual losses and temporal variation in recruitment, the tiger density remained at relatively high levels in Nagarahole. Our results are consistent with the hypothesis that protected wild tiger populations can remain healthy despite heavy mortalities because of their inherently high reproductive potential. The ability to model the entire photographic capture history data set and incorporate reduced-parameter models led to estimates of mean annual population change that were sufficiently precise to be useful. This efficient, noninvasive sampling approach can be used to rigorously investigate the population dynamics of tigers and other elusive, rare, wide-ranging animal species in which individuals can be identified from photographs or other means.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
Research Design and Statistics for Applied Linguistics.
ERIC Educational Resources Information Center
Hatch, Evelyn; Farhady, Hossein
An introduction to the conventions of research design and statistical analysis is presented for graduate students of applied linguistics. The chapters cover such concepts as the definition of research, variables, research designs, research report formats, sorting and displaying data, probability and hypothesis testing, comparing means,…
The risks and returns of stock investment in a financial market
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2013-03-01
The risks and returns of stock investment are discussed by numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Through analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal risk of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite holds for weak EDS; (ii) increasing the initial position reduces the risk of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with those in other studies, and good agreement is found.
NASA Astrophysics Data System (ADS)
Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.
2018-07-01
The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low-temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
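The two-class simplification mentioned above rests on a standard linear-algebra fact: two symmetric positive-definite matrices can be diagonalized by one and the same transform. A hedged sketch of such a construction (illustrative code, not the author's implementation):

```python
import numpy as np

def simultaneous_diagonalizer(S1, S2):
    """Return T with T @ S1 @ T.T = I and T @ S2 @ T.T diagonal (S1, S2 SPD)."""
    w, U = np.linalg.eigh(S1)        # S1 = U diag(w) U.T
    T1 = np.diag(w ** -0.5) @ U.T    # whitens S1: T1 @ S1 @ T1.T = I
    d, V = np.linalg.eigh(T1 @ S2 @ T1.T)
    return V.T @ T1, d

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
S1, S2 = A @ A.T + 4 * np.eye(4), B @ B.T + np.eye(4)
T, d = simultaneous_diagonalizer(S1, S2)
print(np.allclose(T @ S1 @ T.T, np.eye(4)), np.allclose(T @ S2 @ T.T, np.diag(d)))
```

After such a transform the components are uncorrelated under both class covariances, which is what reduces the two-class computation.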
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance-modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time-fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities were also shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationarity assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationarity assumption were developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yumin; Lum, Kai-Yew; Wang Qingguo
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems with faults is presented, using output probability density estimation. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction; this yields a function of time only, which can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems with faults is presented, using output probability density estimation. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction; this yields a function of time only, which can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
A comparative study of nonparametric methods for pattern recognition
NASA Technical Reports Server (NTRS)
Hahn, S. F.; Nelson, G. D.
1972-01-01
The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and the K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even when they are not. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment were always unimodal.
Enhanced tau neutrino appearance through invisible decay
NASA Astrophysics Data System (ADS)
Pagliaroli, Giulia; Di Marco, Natalia; Mannarelli, Massimo
2016-06-01
The decay of neutrino mass eigenstates leads to a change of the conversion and survival probability of neutrino flavor eigenstates. Exploiting the recent results released by the long-baseline OPERA experiment, we perform a statistical investigation of the neutrino invisible decay hypothesis in the νμ→ντ appearance channel. We find that neutrino decay provides an enhancement of the expected tau appearance signal with respect to the standard oscillation scenario for the long-baseline OPERA experiment. The increase of the νμ→ντ conversion probability by the decay of one of the mass eigenstates is due to a reduction of the "destructive interference" among the different massive neutrino components. Although the data show only a very mild preference for invisible decays over the oscillation-only hypothesis, we provide a limit for the neutrino decay lifetime in this channel of τ3/m3 ≳ 1.3 × 10⁻¹³ s/eV at the 90% confidence level.
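For orientation, a schematic two-flavor form of the decay-modified appearance probability, under the common assumption that only the heaviest mass state ν₃ decays invisibly (a textbook-style sketch in my notation, not the exact expression fitted by the authors):

$$ P_{\nu_\mu \to \nu_\tau}(L,E) \simeq \frac{\sin^2 2\theta_{23}}{4}\left[\,1 + e^{-\Gamma L} - 2\,e^{-\Gamma L/2}\cos\!\left(\frac{\Delta m^2_{32}\,L}{2E}\right)\right], \qquad \Gamma = \frac{m_3}{\tau_3\,E}. $$

Rewriting the bracket as $(1-e^{-\Gamma L/2})^2 + 2\,e^{-\Gamma L/2}\big(1-\cos(\Delta m^2_{32}L/2E)\big)$ makes the mechanism visible: decay adds a non-oscillatory positive term, the reduced destructive interference described above.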
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, in clinical trials, it is important to minimize the expected sample size when the null hypothesis is not rejected. In post-market safety surveillance that is not the case: especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more indicated for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
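One common family that realizes both shapes is the power-type (Kim-DeMets) spending function alpha(t) = alpha * t^rho, concave for rho < 1 and convex for rho > 1; the sketch below (an illustration of the convex/concave distinction, not the paper's specific proposal) prints the per-look error increments for each shape.

```python
import numpy as np

def alpha_spent(t, alpha=0.05, rho=1.0):
    """Cumulative Type I error spent by information fraction t in [0, 1]."""
    return alpha * np.asarray(t, dtype=float) ** rho

looks = np.linspace(0.2, 1.0, 5)        # five equally spaced looks
for rho, shape in [(0.5, "concave"), (3.0, "convex")]:
    inc = np.diff(alpha_spent(looks, rho=rho), prepend=0.0)
    print(f"rho={rho} ({shape}): per-look increments = {np.round(inc, 4)}")
```

With rho < 1 most of the error budget is spent at the early looks, which is what favors a small expected sample size when the null hypothesis is rejected.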
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
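Schematically, and in my notation rather than necessarily the paper's: for a flow variable $\phi(\mathbf{x},t)$ with sample-space variable $\psi$, the FG-PDF and the density-weighted APDF read

$$ f(\psi;\mathbf{x},t)=\delta\big(\psi-\phi(\mathbf{x},t)\big),\qquad \tilde P(\psi;\mathbf{x},t)=\frac{\langle \rho\, f(\psi;\mathbf{x},t)\rangle}{\langle \rho\rangle},\qquad \int \psi\,\tilde P\,d\psi=\frac{\langle \rho\,\phi\rangle}{\langle \rho\rangle}, $$

where $\langle\cdot\rangle$ denotes the ensemble average; the last identity is the sense in which the APDF exactly reproduces density-weighted (Favre-type) ensemble means.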
Wagner, Tyler; Jefferson T. Deweber,; Jason Detar,; Kristine, David; John A. Sweka,
2014-01-01
Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
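For readers unfamiliar with removal sampling, a toy sketch of the constant-capture-probability likelihood underlying three-pass data (a conditional MLE with made-up catches, not the authors' hierarchical Bayesian model):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(p, catches):
    """Multinomial likelihood of the catch split across passes, given capture
    probability p per pass; pass j removes a fraction p*(1-p)**j of the stock."""
    catches = np.asarray(catches, dtype=float)
    pj = p * (1.0 - p) ** np.arange(len(catches))
    return -(catches * np.log(pj / pj.sum())).sum()

catches = [120, 48, 19]                      # hypothetical three-pass catches
res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6),
                      args=(catches,), method="bounded")
p_hat = res.x
N_hat = sum(catches) / (1 - (1 - p_hat) ** len(catches))
print(f"capture probability ~ {p_hat:.2f}, abundance ~ {N_hat:.0f}")
```

The strong correlation the authors report between first-pass CPUE and three-pass density estimates is what allows skipping the later passes under some monitoring objectives.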
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
Relationships among North American songbird trends, habitat fragmentation, and landscape occupancy
Therese M. Donovan; Curtis H. Flather
2002-01-01
Fragmentation of breeding habitat has been hypothesized as a cause of population declines in forest-nesting migratory birds. Negative correlations between the degree of fragmentation and bird density or fecundity at local or regional scales support the fragmentation hypothesis. Yet, in spite of reduced fecundity and densities in fragmented systems, many forest-nesting...
Association Between Increased Vascular Density and Loss of Protective RAS in Early-stage NPDR
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Raghunandan, Sneha; Vyas, Ruchi J.; Vu, Amanda C.; Bryant, Douglas; Yaqian, Duan; Knecht, Brenda E.; Grant, Maria B.; Chalam, K. V.; Parsons-Wingerter, Patricia
2016-01-01
Our hypothesis predicts that retinal blood vessels increase in density during early-stage progression to moderate nonproliferative diabetic retinopathy (NPDR). The renin-angiotensin system (RAS) is implicated in the pathogenesis of DR and in the function of circulating angiogenic cells (CACs), a critical bone marrow-derived population that is instrumental in vascular repair.
Maps and models of density and stiffness within individual Douglas-fir trees
Christine L. Todoroki; Eini C. Lowell; Dennis P. Dykstra; David G. Briggs
2012-01-01
Spatial maps of density and stiffness patterns within individual trees were developed using two methods: (1) measured wood properties of veneer sheets; and (2) mixed effects models, to test the hypothesis that within-tree patterns could be predicted from easily measurable tree variables (height, taper, breast-height diameter, and acoustic velocity). Sample trees...
Dynamical role of predators in population cycles of a forest insect: an experimental test.
P. Turchin; A.D. Taylor; J.D. Reeve
1999-01-01
Population cycles occur frequently in forest insects. Time-series analysis of fluctuations in one such insect, the southern pine beetle (Dendroctonus frontalis), suggests that beetle dynamics are dominated by an ecological process acting in a delayed density-dependent manner. The hypothesis that delayed density-dependence in this insect results from its interaction with...
Fruits and vegetables displace, but do not decrease, total energy in school lunches.
Bontrager Yoder, Andrea B; Schoeller, Dale A
2014-08-01
The high overweight and obesity prevalence among US children is a well-established public health concern. Diet is known to play a causal role in obesity. Increasing fruit and vegetable (FV) consumption to recommended levels is proposed to help reduce obesity, because their bulk and low energy density are believed to reduce energy-dense food consumption (volume displacement hypothesis). This study tests this hypothesis at the lunch meal among upper-elementary students participating in a Farm to School (F2S) program. Digital photographs of students' school lunch trays were visually analyzed to identify the food items and amounts that were present and consumed before and after the meal. Using the USDA Nutrient Database, total and FV-only energy were calculated for each tray. Analysis of total and non-FV energy intake was performed according to (1) levels of FV energy intake, (2) FV energy density, and (3) previous years of Farm to School programming. Higher intake of FV energy displaced non-FV energy, but total energy did not decrease across FV energy intake groups. High-FV-energy-density trays showed lower non-FV energy intake than low-FV-energy-density trays (470±179 vs. 534±219 kcal; p<0.0001). Trays from schools with more previous years of F2S programming showed decreased total and non-FV energy intake from school lunches (p for trend <0.0001 for both). Increased FV consumption reduces non-FV energy intake, but does not reduce total energy intake. Therefore, this study does not support the volume displacement hypothesis and suggests calorie displacement instead.
Two dimensions of trust in physicians in OECD-countries.
Saarinen, Arttu Olavi; Räsänen, Pekka; Kouvo, Antti
2016-01-01
The purpose of this paper is to analyse citizens' trust in physicians in 22 OECD countries. The authors measure trust in physicians using items on generalised and particularised trust. Individual-level data are received from the ISSP Research Group (2011). The authors also utilise macro variables drawn from different data banks. Data were analysed using descriptive statistics and xtlogit regression models. The main micro-level hypothesis is that low self-reported health is strongly associated with lower trust in physicians. The second micro-level hypothesis is that frequent meetings with physicians result in higher trust. The third micro-level hypothesis assumes that males, and older and better educated respondents, express higher trust compared to others. The first macro-level hypothesis is that lower income inequality leads to higher trust in physicians. The second macro-level hypothesis is that greater physician density leads to higher trust in physicians. The authors found that the influence of individual and macro-level characteristics varies between trust types. Results indicate that both trust types are clearly associated with individual-level determinants. However, only general trust in physicians has weak associations with macro-level indicators (mainly physician density) and therefore on institutional cross-country differences. It seems that particularised trust in a physician's skills is more restricted to the individuals' health and their own experiences meeting doctors, whereas general trust likely reflects attitudes towards the prevalent profession in the country. The findings hold significance for healthcare systems research and for research concerning social trust generally.
Hozo, Iztok; Schell, Michael J; Djulbegovic, Benjamin
2008-07-01
The absolute truth in research is unobtainable, as no evidence or research hypothesis is ever 100% conclusive. Therefore, all data and inferences can in principle be considered as "inconclusive." Scientific inference and decision-making need to take into account errors, which are unavoidable in the research enterprise. The errors can occur at the level of conclusions that aim to discern the truthfulness of research hypothesis based on the accuracy of research evidence and hypothesis, and decisions, the goal of which is to enable optimal decision-making under present and specific circumstances. To optimize the chance of both correct conclusions and correct decisions, the synthesis of all major statistical approaches to clinical research is needed. The integration of these approaches (frequentist, Bayesian, and decision-analytic) can be accomplished through formal risk:benefit (R:B) analysis. This chapter illustrates the rational choice of a research hypothesis using R:B analysis based on decision-theoretic expected utility theory framework and the concept of "acceptable regret" to calculate the threshold probability of the "truth" above which the benefit of accepting a research hypothesis outweighs its risks.
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes rule, in order to improve computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC leads to fewer iterations and lower computation cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm can achieve high registration accuracy on all experimental datasets.
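The sampling strategy is easy to illustrate on a toy problem. The sketch below (hypothetical 2D line fitting, not the paper's point-cloud registration, and with a deliberately crude stand-in for the Bayes update) shows the BaySAC idea: deterministically draw the currently most probable points as the hypothesis set, and down-weight the points of hypothesis sets that explain little of the data.

```python
import numpy as np

def baysac_line(points, n=2, iters=100, tol=0.05, p0=0.5):
    """Toy BaySAC for y = a*x + b; points is an (N, 2) array."""
    prob = np.full(len(points), p0)          # per-point inlier probabilities
    best_model, best_count = None, -1
    for _ in range(iters):
        idx = np.argsort(prob)[-n:]          # most probable points = hypothesis set
        (x1, y1), (x2, y2) = points[idx]
        if abs(x2 - x1) < 1e-12:             # degenerate pair: penalize, retry
            prob[idx] *= 0.5
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.abs(points[:, 1] - (a * points[:, 0] + b)) < tol
        if inliers.sum() > best_count:
            best_model, best_count = (a, b), int(inliers.sum())
        # Crude probability update: hypothesis points of a model that explains
        # few points become less likely to be drawn again.
        prob[idx] *= inliers.sum() / len(points)
    return best_model, best_count

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100)
pts = np.c_[x, 2.0 * x + 0.5 + rng.normal(0, 0.01, 100)]
pts[:30] = rng.uniform(0, 3, (30, 2))        # contaminate with 30% outliers
print(baysac_line(pts))
```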
Latitudinal variation in reproductive strategies by the migratory Louisiana Waterthrush
Mattsson, B.J.; Latta, S.C.; Cooper, R.J.; Mulvihill, R.S.
2011-01-01
We evaluated hypotheses that seek to explain breeding strategies of the Louisiana Waterthrush (Parkesia motacilla) that vary across a latitudinal gradient. On the basis of data from 418 nests of color-banded individuals in southwestern Pennsylvania and 700 km south in the Georgia Piedmont, we found that clutch size in replacement nests and probability of renesting were significantly greater in Pennsylvania (clutch size 4.4; renesting probability 0.66) than in Georgia (clutch size 3.8; renesting probability 0.54). Contrasts of the remaining measures of breeding were not statistically significant, and, in particular, mean daily nest survival in the two study areas was nearly identical (0.974 in Pennsylvania; 0.975 in Georgia). An individual-based model of fecundity (i.e., number of fledged young per adult female) predicted that approximately half of the females in both Pennsylvania and Georgia fledge at least one young, and mean values for fecundity in Pennsylvania and Georgia were 2.28 and 1.91, respectively. On the basis of greater support for the food-limitation hypothesis than for the season-length hypothesis, the trade-off between breeding in a region with more food but making a longer migration may be greater for waterthrushes breeding farther north than for those breeding farther south. © The Cooper Ornithological Society 2011.
Mediating role of activity level in the depressive realism effect.
Blanco, Fernando; Matute, Helena; A Vadillo, Miguel
2012-01-01
Several classic studies have concluded that the accuracy of identifying uncontrollable situations depends heavily on depressive mood. Nondepressed participants tend to exhibit an optimistic illusion of control, whereas depressed participants tend to better detect a lack of control. Recently, we suggested that the different activity levels (measured as the probability of responding during a contingency learning task) exhibited by depressed and nondepressed individuals is partly responsible for this effect. The two studies presented in this paper provide further support for this mediational hypothesis, in which mood is the distal cause of the illusion of control operating through activity level, the proximal cause. In Study 1, the probability of responding, P(R), was found to be a mediator variable between the depressive symptoms and the judgments of control. In Study 2, we intervened directly on the mediator variable: The P(R) for both depressed and nondepressed participants was manipulated through instructions. Our results confirm that P(R) manipulation produced differences in the participants' perceptions of uncontrollability. Importantly, the intervention on the mediator variable cancelled the effect of the distal cause; the participants' judgments of control were no longer mood dependent when the P(R) was manipulated. This result supports the hypothesis that the so-called depressive realism effect is actually mediated by the probability of responding.
Lasso, E; Dalling, J W; Bermingham, E
2011-01-01
Fifty years ago, Baker and Fedorov proposed that the high species diversity of tropical forests could arise from the combined effects of inbreeding and genetic drift leading to population differentiation and eventually to sympatric speciation. Decades of research, however, have failed to support the Baker–Fedorov hypothesis (BFH), and it has now been discarded in favor of a paradigm where most trees are self-incompatible or strongly outcrossing, and where long-distance pollen dispersal prevents population drift. Here, we propose that several hyper-diverse genera of tropical herbs and shrubs, including Piper (>1,000 species), may provide an exception. Species in this genus often have aggregated, high-density populations with self-compatible breeding systems; characteristics which the BFH would predict lead to high local genetic differentiation. We test this prediction for five Piper species on Barro Colorado Island, Panama, using Amplified Fragment Length Polymorphism (AFLP) markers. All species showed strong genetic structure at both fine and large spatial scales. Over short distances (200–750 m) populations showed significant genetic differentiation (Fst 0.11–0.46, P < 0.05), with values of spatial genetic structure that exceed those reported for other tropical tree species (Sp = 0.03–0.136). This genetic structure probably results from the combined effects of limited seed and pollen dispersal, clonal spread, and selfing. These processes are likely to have facilitated the diversification of populations in response to local natural selection or genetic drift and may explain the remarkable diversity of this rich genus. PMID:22393518
Estimating nest detection probabilities for white-winged dove nest transects in Tamaulipas, Mexico
Nichols, J.D.; Tomlinson, R.E.; Waggerman, G.
1986-01-01
Nest transects in nesting colonies provide one source of information on White-winged Dove (Zenaida asiatica asiatica) population status and reproduction. Nests are counted along transects using standardized field methods each year in Texas and northeastern Mexico by personnel associated with Mexico's Office of Flora and Fauna, the Texas Parks and Wildlife Department, and the U.S. Fish and Wildlife Service. Nest counts on transects are combined with information on the size of nesting colonies to estimate total numbers of nests in sampled colonies. Historically, these estimates have been based on the actual nest counts on transects and thus have required the assumption that all nests lying within transect boundaries are detected (seen) with a probability of one. Our objectives were to test the hypothesis that nest detection probability is one and, if rejected, to estimate this probability.
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Bremer, J. E.; Harter, T.
2012-08-01
Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).
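As a back-of-envelope complement (assuming an idealized Poisson field of drainfields and a rectangular source area; the study itself uses detailed flow and transport modeling), the overlap probability rises steeply with septic system density:

```python
import numpy as np

def overlap_probability(density_per_km2, width_m, length_m):
    """P(at least one drainfield center falls in the well's source area),
    for drainfields scattered as a spatial Poisson process."""
    source_area_km2 = width_m * length_m / 1e6
    return 1.0 - np.exp(-density_per_km2 * source_area_km2)

for d in (10, 50, 200):      # hypothetical drainfield densities per km^2
    print(d, round(overlap_probability(d, width_m=20, length_m=500), 3))
```

Even these crude numbers show why lot-size (density) limits derived from regional mass balances can understate the risk to an individual well.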
NASA Astrophysics Data System (ADS)
Stephanik, Brian Michael
This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.
Analytical approach to an integrate-and-fire model with spike-triggered adaptation
NASA Astrophysics Data System (ADS)
Schwalger, Tilo; Lindner, Benjamin
2015-12-01
The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target
Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji
2009-01-01
In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN), in which the capability of each sensor is relatively limited, in order to construct large-scale WSNs at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. As a result, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
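The approximate expression itself is not reproduced in the abstract; as a hedged stand-in, the classical stochastic-geometry relationship for sensors deployed as a spatial Poisson process links density to the probability that at least k sensors can monitor the target:

```python
import numpy as np
from scipy.stats import poisson

def tracking_probability(density, radius, k):
    """P(at least k sensors cover the target) under Poisson deployment:
    the number of covering sensors is Poisson with mean density*pi*radius^2."""
    return poisson.sf(k - 1, density * np.pi * radius ** 2)

for density in (0.5, 1.0, 2.0):          # sensors per unit area (hypothetical)
    print(density, round(tracking_probability(density, radius=1.0, k=3), 3))
```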
Cancer immunotherapy by immunosuppression.
Prehn, Richmond T; Prehn, Liisa M
2010-12-15
We have previously suggested that the stimulatory effect of a weak immune reaction on tumor growth may be necessary for the growth of incipient tumors. In the present paper, we enlarge upon and extend that idea by collecting evidence in the literature bearing upon the new hypothesis that a growing cancer, whether in man or mouse, is throughout its lifespan, probably growing and progressing because of continued immune stimulation by a weak immune reaction. We also suggest that prolonged immunosuppression might interfere with progression and thus be an aid to therapy. While most of the considerable evidence that supports the hypothesis comes from observations of experimental mouse tumors, there is suggestive evidence that human tumors may behave in much the same way, and as far as we can ascertain, there is no present evidence that necessarily refutes the hypothesis.
Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.
Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor
2015-11-01
Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration exceeding regulatory thresholds. However, these maps are obtained with only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables, which do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
Cam, E.; Monnat, J.-Y.
2000-01-01
1. Many studies have provided evidence that first-time breeders have a lower survival, a lower probability of success, or of breeding, in the following year. Hypotheses based on reproductive costs have often been proposed to explain this. However, because of the intrinsic relationship between age and experience, the apparent inferiority of first-time breeders at the population level may result from selection, and experience may not influence performance within each individual. In this paper we address the question of phenotypic correlations between fitness components. This addresses differences in individual quality, a prerequisite for a selection process to occur. We also test the hypothesis of an influence of experience on these components while taking age and reproductive success into account: two factors likely to play a key role in a selection process. 2. Using data from a long-term study on the kittiwake, we found that first-time breeders have a lower probability of success, a lower survival and a lower probability of breeding in the next year than experienced breeders. However, neither experienced nor inexperienced breeders have a lower survival or a lower probability of breeding in the following year than birds that skipped a breeding opportunity. This suggests heterogeneity in quality among individuals. 3. Failed birds have a lower survival and a lower probability of breeding in the following year regardless of experience. This can be interpreted in the light of the selection hypothesis. The inferiority of inexperienced breeders may be linked to a higher proportion of lower-quality individuals in younger age classes. When age and breeding success are controlled for, there is no evidence of an influence of experience on survival or future breeding probability. 4. Using data from individuals whose reproductive life lasted the same number of years, we investigated the influence of experience on reproductive performance within individuals. There is no strong evidence that a process operating within individuals explains the improvement in performance observed at the population level.
NASA Astrophysics Data System (ADS)
Kastner, Joel H.; Myers, P. C.
1994-02-01
One hypothesis for the elevated abundance of Al-26 present during the formation of the solar system is that an asymptotic giant branch (AGB) star expired within the molecular cloud (MC) containing the protosolar nebula. To test this hypothesis for star-forming clouds at the present epoch, we compared nearly complete lists of rapidly mass-losing AGB stars and MCs in the solar neighborhood and identified those stars which are most likely to encounter a nearby cloud. Roughly 10 stars satisfy our selection criteria. We estimated probabilities of encounter for these stars from the position of each star relative to cloud CO emission and the likely star-cloud distance along the line of sight. Typical encounter probabilities are approximately 1%. The number of potential encounters and the probability for each star-cloud pair to result in an encounter suggests that within 1 kpc of the Sun, there is an approximately 1% chance that a given cloud will be visited by a mass-losing AGB star over the next million years. The estimate is dominated by the possibility of encounters involving the stars IRC +60041 and S Cep. Over an MC lifetime, the probability for AGB encounter may be as high as approximately 70%. We discuss the implications of these results for theories of Al-26 enrichment of processed and unprocessed meteoritic inclusions. If the Al-26 in either type of inclusion arose from AGB-MC interaction, the low probability estimated here seems to require that AGB-MC encounters trigger multiple star formation and/or that the production rate of AGB stars was higher during the epoch of solar system formation than at present. Various lines of evidence suggest only the more massive (5-8 solar mass) AGB stars can produce significant Al-26 enrichment of star-forming clouds.
Sato, Tatsuhiko; Manabe, Kentaro; Hamada, Nobuyuki
2014-01-01
The risk of internal exposure to 137Cs, 134Cs, and 131I is of great public concern after the accident at the Fukushima-Daiichi nuclear power plant. The relative biological effectiveness (RBE, defined herein as the effectiveness of internal exposure relative to external exposure to γ-rays) is occasionally believed to be much greater than unity, owing to insufficient discussion of the differences in their microdosimetric profiles. We therefore performed a Monte Carlo particle transport simulation in ideally aligned cell systems to calculate the probability densities of absorbed doses on subcellular and intranuclear scales for internal exposures to electrons emitted from 137Cs, 134Cs, and 131I, as well as for the external exposure to 662 keV photons. The RBE due to the inhomogeneous radioactive isotope (RI) distribution in subcellular structures and the high ionization density around the particle trajectories was then derived from the calculated microdosimetric probability density. The RBE for the bystander effect was also estimated from the probability density, considering its non-linear dose response. The RBE due to the high ionization density and that for the bystander effect were very close to 1, because the microdosimetric probability densities were nearly identical between the internal exposures and the external exposure from the 662 keV photons. On the other hand, the RBE due to the RI inhomogeneity largely depended on the intranuclear RI concentration and cell size, but the maximum possible RBE was only 1.04 even under conservative assumptions. Thus, it can be concluded from the microdosimetric viewpoint that the risk from internal exposures to 137Cs, 134Cs, and 131I should be nearly equivalent to that of external exposure to γ-rays at the same absorbed dose level, as suggested in the current recommendations of the International Commission on Radiological Protection. PMID:24919099
Estimating The Probability Of Achieving Shortleaf Pine Regeneration At Variable Specified Levels
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2002-01-01
A model was developed that can be used to estimate the probability of achieving regeneration at a variety of specified stem density levels. The model was fitted to shortleaf pine (Pinus echinata Mill.) regeneration data, and can be used to estimate the probability of achieving desired levels of regeneration between 300 and 700 stems per acre 9-10...
Properties of Traffic Risk Coefficient
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu
2009-10-01
We use the model that takes into account the traffic interruption probability (Physica A 387 (2008) 6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability reduces the traffic risk coefficient and that the reduction depends on the density, which indicates that this model can improve traffic security.
Exact probability distribution function for the volatility of cumulative production
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Klümper, Andreas
2018-04-01
In this paper we study the volatility and its probability distribution function for the cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in Lafond et al. (2017), which addressed the effects of normally distributed noise in the production process. Due to its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on forecasting of the production process.
Neural implementation of operations used in quantum cognition.
Busemeyer, Jerome R; Fakhari, Pegah; Kvam, Peter
2017-11-01
Quantum probability theory has been successfully applied outside of physics to account for numerous findings from psychology regarding human judgement and decision making behavior. However, the researchers who have made these applications do not rely on the hypothesis that the brain is some type of quantum computer. This raises the question of how could the brain implement quantum algorithms other than quantum physical operations. This article outlines one way that a neural based system could perform the computations required by applications of quantum probability to human behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.
Probability mass first flush evaluation for combined sewer discharges.
Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong
2010-01-01
The Korean government has put a lot of effort into constructing sanitation facilities for controlling non-point source pollution. The first flush phenomenon is a prime example of such pollution. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass when considering both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the gauged storms, using probability density functions of rainfall volumes over the last two years, found all of the gauged storms to be valid representative precipitation events. Both the observed MFFn and the probability MFFn in BE-1 denoted similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.
NASA Astrophysics Data System (ADS)
Sasaki, K.; Kikuchi, S.
2014-10-01
In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2l0(2 - α)/(vα), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
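A minimal sketch of the inversion implied by Chantry's formula (the chamber geometry, extrapolated lifetime, and temperature below are illustrative placeholders, not values from the paper): since τ0 = 2l0(2 − α)/(vα), the sticking probability follows as α = 4l0/(vτ0 + 2l0).

```python
import numpy as np

def sticking_probability(tau0, l0, v):
    """Invert tau0 = 2*l0*(2 - alpha)/(v*alpha) for the sticking probability."""
    return 4.0 * l0 / (v * tau0 + 2.0 * l0)

k_B, amu = 1.380649e-23, 1.66053906660e-27

def mean_speed(mass_amu, T=300.0):
    """Mean thermal speed of an atom of the given mass at temperature T (K)."""
    m = mass_amu * amu
    return np.sqrt(8.0 * k_B * T / (np.pi * m))

l0 = 0.02      # assumed volume-to-surface ratio of the chamber (m)
tau0 = 2.0e-4  # assumed extrapolated lifetime at zero pressure (s)
for name, m in [("Cu", 63.5), ("Sn", 118.7), ("Zn", 65.4)]:
    print(name, round(sticking_probability(tau0, l0, mean_speed(m)), 3))
```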
In situ study of heavy ion irradiation response of immiscible Cu/Fe multilayers
Chen, Youxing; Li, Nan; Bufford, Daniel Charles; ...
2016-04-09
By providing active defect sinks that capture and annihilate radiation-induced defect clusters, immiscible metallic multilayers with incoherent interfaces can effectively reduce defect density in ion-irradiated metals. Although it is anticipated that defect density within the layers should vary as a function of distance to the layer interface, there is, to date, little in situ TEM evidence to validate this hypothesis. In our study, monolithic Cu films and Cu/Fe multilayers with individual layer thickness, h, of 100 and 5 nm were subjected to in situ Cu ion irradiation at room temperature to nominally 1 displacement-per-atom inside a transmission electron microscope. Rapid formation and propagation of defect clusters were observed in monolithic Cu, whereas fewer defects with smaller dimensions were generated in Cu/Fe multilayers with smaller h. Moreover, in situ video shows that the cumulative defect density in Cu/Fe 100 nm multilayers indeed varies as a function of distance to the layer interfaces, supporting a long-postulated hypothesis.
Population density influences dispersal in female white-tailed deer
Lutz, Clayton L.; Diefenbach, Duane R.; Rosenberry, Christopher S.
2015-01-01
Dispersal behavior in white-tailed deer (Odocoileus virginianus) predominantly occurs in 1-year-old males; however, females of the same age also disperse. The timing of female dispersal during fawning season and low dispersal rates suggest that competition for mates and reduced inbreeding are not ultimate causes of female dispersal, as suggested for males. We proposed that female dispersal is the result of competition for space when pregnant females seek to isolate themselves before and after parturition. To test this hypothesis, we conducted a meta-analysis of female dispersal rates from 12 populations of white-tailed deer and predicted that dispersal rate and distance would be positively related to deer density. We found a positive relationship between dispersal rate and deer per forested km2 and between dispersal distance and deer per forested km2. These results are consistent with the hypothesis that female dispersal is density-dependent and caused by the exclusion of subordinate 1-year-olds as adult females seek isolation before and after parturition.
Statistical analysis of dislocations and dislocation boundaries from EBSD data.
Moussa, C; Bernacki, M; Besnard, R; Bozzolo, N
2017-08-01
Electron BackScatter Diffraction (EBSD) is often used for semi-quantitative analysis of dislocations in metals. In general, disorientation is used to assess Geometrically Necessary Dislocation (GND) densities. In the present paper, we demonstrate that the use of disorientation can lead to inaccurate results. For example, using the disorientation leads to different GND densities in recrystallized grains, which cannot be physically justified. The use of disorientation gradients allows accounting for measurement noise and leads to more accurate results. The disorientation gradient is then used to analyze dislocation boundaries, following the same principle previously applied to TEM data. In previous papers, dislocation boundaries were defined as Geometrically Necessary Boundaries (GNBs) and Incidental Dislocation Boundaries (IDBs). It has been demonstrated in the past, through transmission electron microscopy data, that the probability density distribution of the disorientation of IDBs and GNBs can be described by a linear combination of two Rayleigh functions. Such a function can also describe the probability density of the disorientation gradient obtained from EBSD data, as reported in this paper. This opens the route for determining IDB and GNB probability density distribution functions separately from EBSD data, with increased statistical relevance as compared to TEM data. The method is applied to deformed tantalum, where grains exhibit dislocation boundaries, as observed using electron channeling contrast imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
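A hedged sketch of the fitting step described here, using synthetic data in place of measured disorientation gradients (scales, weights, and sample sizes are invented): the probability density is modeled as a linear combination of two Rayleigh functions and fitted by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rayleigh

# Hypothetical disorientation-gradient data: an IDB-like and a GNB-like
# population drawn from two Rayleigh distributions (illustrative scales).
data = np.concatenate([rayleigh.rvs(scale=0.3, size=4000, random_state=2),
                       rayleigh.rvs(scale=1.2, size=1000, random_state=3)])

def neg_log_likelihood(params):
    """Negative log-likelihood of a two-component Rayleigh mixture."""
    w, s1, s2 = params
    pdf = w * rayleigh.pdf(data, scale=s1) + (1 - w) * rayleigh.pdf(data, scale=s2)
    return -np.sum(np.log(pdf + 1e-300))

fit = minimize(neg_log_likelihood, x0=[0.5, 0.2, 1.0],
               bounds=[(0.01, 0.99), (1e-3, None), (1e-3, None)])
print(fit.x)  # recovered mixture weight and the two Rayleigh scales
```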
Term Projects on Interstellar Comets
ERIC Educational Resources Information Center
Mack, John E.
1975-01-01
Presents two calculations of the probability of detection of an interstellar comet, under the hypothesis that such comets would escape from comet clouds similar to that believed to surround the sun. Proposes three problems, each of which would be a reasonable term project for a motivated undergraduate. (Author/MLH)
Using Astrology to Teach Research Methods to Introductory Psychology Students.
ERIC Educational Resources Information Center
Ward, Roger A.; Grasha, Anthony F.
1986-01-01
Provides a classroom demonstration designed to test an astrological hypothesis and help teach introductory psychology students about research design and data interpretation. Illustrates differences between science and nonscience, the role of theory in developing and testing hypotheses, making comparisons among groups, probability and statistical…
Melanin may promote photooxidation of linoleic acid
NASA Astrophysics Data System (ADS)
Glickman, Randolph D.; Lam, Kwok-Wai
1995-05-01
We have previously shown that laser-exposed melanin granules isolated from the retinal pigment epithelium (RPE) are capable of oxidizing ascorbic acid. We are now characterizing the reactions of light-activated melanin with other cellular components such as linoleic acid, a polyunsaturated fatty acid. Commercial linoleic acid, and melanin granules isolated from bovine RPE cells, are mixed and exposed to the broad band output of a 150 W Xenon arc lamp or the CW output of an Argon laser. Native linoleic acid is separated from its hydroperoxides by HPLC, and the relative amounts of each are detected by UV absorbance at 210 and 232 nm, respectively. Exposure of the linoleic acid alone to the xenon arc source results in production of linoleic hydroperoxides (LHP) in an intensity-dependent reaction that doubles in extent over the temperature range of 0° to 80°C. Addition of melanin granules at a density of 10^8 granules/ml reduces the production of LHP, probably because of light absorption and self-screening by the melanin. At or below a density of 10^7 granules/ml, however, the light-driven production of LHP is enhanced, especially during exposure to the blue-green output of the Argon laser. Physiological antioxidants (Vit. C, E) protect the linoleic acid from photo-oxidation in the presence or absence of melanin. These observations support the hypothesis that light-activated melanin can react with some cellular components and thereby contribute to photochemical damage, especially if endogenous antioxidants are depleted.
Lei, Youming; Zheng, Fan
2016-12-01
Stochastic chaos induced by diffusion processes, with identical spectral density but different probability density functions (PDFs), is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data in advance is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as networks of prototypes of the data; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
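KDESOINN itself is not reproduced here; as a point of reference, the following sketch applies the plain kernel density estimation that KDESOINN extends to a noisy synthetic stream (all values are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# A noisy bimodal stream: two clusters plus heavy uniform background noise.
stream = np.concatenate([rng.normal(-2.0, 0.5, 5000),
                         rng.normal(3.0, 1.0, 5000),
                         rng.uniform(-6, 8, 500)])

kde = gaussian_kde(stream)          # bandwidth chosen by Scott's rule
grid = np.linspace(-6, 8, 200)
density = kde(grid)
print(grid[np.argmax(density)])     # location of the strongest mode, near -2
```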
Self-Supervised Dynamical Systems
NASA Technical Reports Server (NTRS)
Zak, Michail
2003-01-01
Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and mental aspects of a monad is implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.
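A minimal 1-D sketch of this coupling, under assumed dynamics (the force law, target density, and all constants are illustrative, not the paper's equations): an ensemble of Langevin particles feels a fictitious information-based force built from its own estimated density (its "self-image"), steering the ensemble density toward a prescribed attractor.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

# Assumed force law: k * d/dx [log p_target(x) - log p_hat(x)], where p_hat is
# the ensemble's own kernel density estimate. With small Langevin noise, the
# ensemble density drifts toward the prescribed attractor p_target.
n, dt, steps, k, D = 500, 0.02, 300, 1.0, 0.01
x = rng.normal(0.0, 3.0, n)               # broad initial uncertainty
target = norm(loc=2.0, scale=0.7)         # prescribed attractor density
eps = 1e-3                                # step for the numerical log-gradient

for _ in range(steps):
    p_hat = gaussian_kde(x)               # current self-image of the ensemble
    log_ratio = lambda y: target.logpdf(y) - np.log(p_hat(y) + 1e-300)
    force = k * (log_ratio(x + eps) - log_ratio(x - eps)) / (2 * eps)
    x = x + force * dt + np.sqrt(2 * D * dt) * rng.normal(size=n)

print(x.mean(), x.std())                  # approaches mean ~2.0, spread ~0.7
```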
NASA Astrophysics Data System (ADS)
Meeßen, Christian; Scheck-Wenderoth, Magdalena; Sippel, Judith; Strecker, Manfred
2017-04-01
Thin- and thick-skinned deformation styles in the foreland of the central Andes are the result of ongoing crustal shortening since the early Neogene. The mechanisms proposed for these different styles range from variations in the subduction angle of the Nazca plate and lithospheric thickening to variations in temperature and strength of the crystalline crust. The latter hypothesis posits a cold and strong lithosphere in the foreland of the Altiplano Plateau, facilitating thin-skinned shortening. In contrast, the foreland of the Puna plateau is proposed to be characterized by a warm lithosphere and strong upper crust, resulting in thick-skinned deformation. Whilst this hypothesis has been confirmed in numerical thermomechanical experiments, there is no evidence for this mechanism from data-integrative modelling. We test this hypothesis by means of three-dimensional data-integrative gravity, thermal and rheological modelling. To this end, we constructed a lithospheric-scale density model of the foreland of northern Argentina and southern Bolivia using gravity forward modelling and inversion techniques. Into this density model we implemented sediment isopachs, data from receiver functions and densities from shear-wave velocities of the upper mantle. The model was verified using the observed Bouguer gravity anomaly. By assigning thermal and rheological properties to the modelled units, we are able to quantify the strength of the lithosphere and test the predictions of the thermomechanical models.
A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves
NASA Astrophysics Data System (ADS)
Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang
2018-03-01
The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.
Basin-wide variations in Amazon forest structure and function are mediated by both soils and climate
NASA Astrophysics Data System (ADS)
Quesada, C. A.; Phillips, O. L.; Schwarz, M.; Czimczik, C. I.; Baker, T. R.; Patiño, S.; Fyllas, N. M.; Hodnett, M. G.; Herrera, R.; Almeida, S.; Alvarez Dávila, E.; Arneth, A.; Arroyo, L.; Chao, K. J.; Dezzeo, N.; Erwin, T.; di Fiore, A.; Higuchi, N.; Honorio Coronado, E.; Jimenez, E. M.; Killeen, T.; Lezama, A. T.; Lloyd, G.; López-González, G.; Luizão, F. J.; Malhi, Y.; Monteagudo, A.; Neill, D. A.; Núñez Vargas, P.; Paiva, R.; Peacock, J.; Peñuela, M. C.; Peña Cruz, A.; Pitman, N.; Priante Filho, N.; Prieto, A.; Ramírez, H.; Rudas, A.; Salomão, R.; Santos, A. J. B.; Schmerler, J.; Silva, N.; Silveira, M.; Vásquez, R.; Vieira, I.; Terborgh, J.; Lloyd, J.
2012-06-01
Forest structure and dynamics vary across the Amazon Basin in an east-west gradient coincident with variations in soil fertility and geology. This has resulted in the hypothesis that soil fertility may play an important role in explaining Basin-wide variations in forest biomass, growth and stem turnover rates. Soil samples were collected in a total of 59 different forest plots across the Amazon Basin and analysed for exchangeable cations, carbon, nitrogen and pH, with several phosphorus fractions of likely different plant availability also quantified. Physical properties were additionally examined and an index of soil physical quality developed. Bivariate relationships of soil and climatic properties with above-ground wood productivity, stand-level tree turnover rates, above-ground wood biomass and wood density were first examined, with multivariate regression models then applied. Both forms of analysis were undertaken with and without considerations regarding the underlying spatial structure of the dataset. Despite the presence of autocorrelated spatial structures complicating many analyses, forest structure and dynamics were found to be strongly and quantitatively related to edaphic as well as climatic conditions. Basin-wide differences in stand-level turnover rates are mostly influenced by soil physical properties, with variations in rates of coarse wood production mostly related to soil phosphorus status. Total soil P was a better predictor of wood production rates than any of the fractionated organic- or inorganic-P pools. This suggests that it is not only the immediately available P forms, but probably the entire soil phosphorus pool, that is interacting with forest growth on longer timescales. A role for soil potassium in modulating Amazon forest dynamics through its effects on stand-level wood density was also detected. Taking this into account, otherwise enigmatic variations in stand-level biomass across the Basin were then accounted for through the interacting effects of soil physical and chemical properties with climate. A hypothesis of self-maintaining forest dynamic feedback mechanisms initiated by edaphic conditions is proposed. It is further suggested that this is a major factor determining endogenous disturbance levels, species composition, and forest productivity across the Amazon Basin.
Quantum mechanical probability current as electromagnetic 4-current from topological EM fields
NASA Astrophysics Data System (ADS)
van der Mark, Martin B.
2015-09-01
Starting from a complex 4-potential A = αdβ we show that the 4-current density in electromagnetism and the probability current density in relativistic quantum mechanics are of identical form. With the Dirac-Clifford algebra Cl1,3 as mathematical basis, the given 4-potential allows topological solutions of the fields, quite similar to Bateman's construction, but with a double field solution that was overlooked previously. A more general nullvector condition is found and wave-functions of charged and neutral particles appear as topological configurations of the electromagnetic fields.
First-passage problems: A probabilistic dynamic analysis for degraded structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1990-01-01
Structures subjected to random excitations, with uncertain system parameters degraded by surrounding environments (a random time history), are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
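A hedged Monte Carlo sketch of a first-passage computation of this general kind (the Ornstein-Uhlenbeck response process, fixed barrier, and parameters are illustrative stand-ins, not the paper's degraded-structure model):

```python
import numpy as np

rng = np.random.default_rng(0)

# First-passage Monte Carlo for an OU response process dX = -a*X dt + s dW
# starting at 0, with a fixed barrier b: the histogram of recorded crossing
# times approximates the probability density of the first-passage time.
a, s, b = 1.0, 1.0, 1.5
dt, t_max, n_paths = 0.01, 20.0, 10_000
n_steps = int(t_max / dt)

x = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)        # paths that have not yet crossed
first_passage = np.full(n_paths, np.inf)

for i in range(1, n_steps + 1):
    x[alive] += -a * x[alive] * dt + s * np.sqrt(dt) * rng.normal(size=alive.sum())
    crossed = alive & (x >= b)
    first_passage[crossed] = i * dt
    alive &= ~crossed

hit = np.isfinite(first_passage)
pdf, edges = np.histogram(first_passage[hit], bins=50, range=(0, t_max),
                          density=True)     # estimated first-passage density
print(f"fraction crossed by t={t_max}: {hit.mean():.3f}")
```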
Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field
Ashtiani, Payam; Denison, Adelaide
2015-01-01
Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance x sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
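For reference, spatially explicit capture-recapture models of this kind typically let detection probability decay with the distance d between an animal's activity center and a sampling location, often via a half-normal form p(d) = p0·exp(−d²/(2σ²)). A minimal sketch with invented parameter values:

```python
import numpy as np

# Half-normal detection function commonly used in spatial capture-recapture.
# p0 (baseline detection) and sigma (spatial scale, km) are illustrative.
p0, sigma = 0.3, 3.0

def detection_prob(d_km):
    return p0 * np.exp(-d_km**2 / (2.0 * sigma**2))

for d in [0.0, 2.0, 5.0, 10.0]:
    print(f"d = {d:4.1f} km -> p = {detection_prob(d):.3f}")
```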
Parents who influence their children to become scientists: effects of gender and parental education.
Sonnert, Gerhard
2009-12-01
In this paper we report on testing the 'role-model' and 'opportunity-structure' hypotheses about the parents whom scientists mentioned as career influencers. According to the role-model hypothesis, the gender match between scientist and influencer is paramount (for example, women scientists would disproportionately often mention their mothers as career influencers). According to the opportunity-structure hypothesis, the parent's educational level predicts his/her probability of being mentioned as a career influencer (that is, parents with higher educational levels would be more likely to be named). The examination of a sample of American scientists who had received prestigious postdoctoral fellowships resulted in rejecting the role-model hypothesis and corroborating the opportunity-structure hypothesis. There were a few additional findings. First, women scientists were more likely than men scientists to mention parental influencers. Second, fathers were more likely than mothers to be mentioned as influencers. Third, an interaction was found between the scientist's gender and parental education when predicting a parent's nomination as influencer.
Applications of Bayesian Statistics to Problems in Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Meegan, Charles A.
1997-01-01
This presentation will describe two applications of Bayesian statistics to Gamma-Ray Bursts (GRBs). The first attempts to quantify the evidence for a cosmological versus galactic origin of GRBs using only the observations of the dipole and quadrupole moments of the angular distribution of bursts. The cosmological hypothesis predicts isotropy, while the galactic hypothesis is assumed to produce a uniform probability distribution over positive values for these moments. The observed isotropic distribution indicates that the Bayes factor for the cosmological hypothesis over the galactic hypothesis is about 300. Another application of Bayesian statistics is in the estimation of chance associations of optical counterparts with galaxies. The Bayesian approach is preferred to frequentist techniques here because the Bayesian approach easily accounts for galaxy mass distributions and because one can incorporate three disjoint hypotheses: (1) bursts come from galactic centers, (2) bursts come from galaxies in proportion to luminosity, and (3) bursts do not come from external galaxies. This technique was used in the analysis of the optical counterpart to GRB970228.
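An illustrative Bayes-factor calculation in the spirit of the first application, with entirely made-up numbers (the observed moment statistic, measurement error, and prior range are assumptions, not values from the presentation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# H_cosmo: the moment statistic is 0 (isotropy), observed with Gaussian error.
# H_gal: the moment is uniformly distributed over (0, theta_max].
x_obs, sigma, theta_max = 0.01, 0.02, 0.3

evidence_cosmo = norm.pdf(x_obs, loc=0.0, scale=sigma)
evidence_gal, _ = quad(lambda t: norm.pdf(x_obs, loc=t, scale=sigma) / theta_max,
                       0.0, theta_max)
print("Bayes factor (cosmological/galactic):", evidence_cosmo / evidence_gal)
```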
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
Davis, Hayley; Ritchie, Euan G; Avitabile, Sarah; Doherty, Tim; Nimmo, Dale G
2018-04-01
Fire shapes the composition and functioning of ecosystems globally. In many regions, fire is actively managed to create diverse patch mosaics of fire-ages under the assumption that a diversity of post-fire-age classes will provide a greater variety of habitats, thereby enabling species with differing habitat requirements to coexist, and enhancing species diversity (the pyrodiversity begets biodiversity hypothesis). However, studies provide mixed support for this hypothesis. Here, using termite communities in a semi-arid region of southeast Australia, we test four key assumptions of the pyrodiversity begets biodiversity hypothesis: (i) that fire shapes vegetation structure over sufficient time frames to influence species' occurrence, (ii) that animal species are linked to resources that are themselves shaped by fire and that peak at different times since fire, (iii) that species' probability of occurrence or abundance peaks at varying times since fire, and (iv) that providing a diversity of fire-ages increases species diversity at the landscape scale. Termite species and habitat elements were sampled in 100 sites across a range of fire-ages, nested within 20 landscapes chosen to represent a gradient of low to high pyrodiversity. We used regression modelling to explore relationships between termites, habitat and fire. Fire affected two habitat elements (coarse woody debris and the cover of woody vegetation) that were associated with the probability of occurrence of three termite species and overall species richness, thus supporting the first two assumptions of the pyrodiversity hypothesis. However, this did not result in those species or species richness being affected by fire history per se. Consequently, landscapes with a low diversity of fire histories had similar numbers of termite species as landscapes with high pyrodiversity. Our work suggests that encouraging a diversity of fire-ages for enhancing termite species richness in this study region is not necessary.
Davis, Hayley; Ritchie, Euan G.; Avitabile, Sarah; Doherty, Tim
2018-01-01
Fire shapes the composition and functioning of ecosystems globally. In many regions, fire is actively managed to create diverse patch mosaics of fire-ages under the assumption that a diversity of post-fire-age classes will provide a greater variety of habitats, thereby enabling species with differing habitat requirements to coexist, and enhancing species diversity (the pyrodiversity begets biodiversity hypothesis). However, studies provide mixed support for this hypothesis. Here, using termite communities in a semi-arid region of southeast Australia, we test four key assumptions of the pyrodiversity begets biodiversity hypothesis: (i) that fire shapes vegetation structure over sufficient time frames to influence species' occurrence, (ii) that animal species are linked to resources that are themselves shaped by fire and that peak at different times since fire, (iii) that species' probability of occurrence or abundance peaks at varying times since fire, and (iv) that providing a diversity of fire-ages increases species diversity at the landscape scale. Termite species and habitat elements were sampled in 100 sites across a range of fire-ages, nested within 20 landscapes chosen to represent a gradient of low to high pyrodiversity. We used regression modelling to explore relationships between termites, habitat and fire. Fire affected two habitat elements (coarse woody debris and the cover of woody vegetation) that were associated with the probability of occurrence of three termite species and overall species richness, thus supporting the first two assumptions of the pyrodiversity hypothesis. However, this did not result in those species or species richness being affected by fire history per se. Consequently, landscapes with a low diversity of fire histories had similar numbers of termite species as landscapes with high pyrodiversity. Our work suggests that encouraging a diversity of fire-ages for enhancing termite species richness in this study region is not necessary. PMID:29765661
Moon origin - The impact-trigger hypothesis
NASA Technical Reports Server (NTRS)
Hartmann, William K.
1986-01-01
Arguments in favor of the impact-trigger model of lunar origin are presented. Lunar properties favoring this hypothesis include: (1) lunar iron and volatile deficiency; (2) the angular momentum of the earth-moon system; and (3) similar O isotopes, bulk iron contents, and densities of earth's mantle and the moon. It is shown that the intense early bombardment during earth's formation averaged several billion times the present meteoritic mass flux, consistent with a giant impact.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structural response have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
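For orientation, the classical OSR that the paper reduces can be sketched as a sum of cosines whose amplitudes are set by the target power spectral density, with independent random phases; the paper's contribution, replacing those many random variables by random functions of two elementary variables, is not reproduced here. The spectrum below is an illustrative stand-in, not the paper's wind model.

```python
import numpy as np

rng = np.random.default_rng(0)

def S(w):
    """Assumed one-sided power spectral density (illustrative only)."""
    return 1.0 / (1.0 + (2.0 * w) ** (5.0 / 3.0))

N, w_max = 512, 10.0
dw = w_max / N
w = (np.arange(N) + 0.5) * dw
t = np.linspace(0.0, 200.0, 4000)

phases = rng.uniform(0.0, 2.0 * np.pi, N)   # the high-dimensional random part
amps = np.sqrt(2.0 * S(w) * dw)
x = (amps[:, None] * np.cos(np.outer(w, t) + phases[:, None])).sum(axis=0)

# The sample variance should match the discretized integral of S(w).
print(x.var(), (S(w) * dw).sum())
```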
Causal illusions in children when the outcome is frequent
2017-01-01
Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability. Thus, people tend to overestimate the strength of a causal relation when the potential consequence (i.e. the outcome) occurs with a high probability (outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies can be considered an outstanding approach to detect and further explore the psychological processes and mechanisms underlying this bias. However, the outcome density bias has been mainly explored in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. These two scenarios differed only in the probability of the outcome, which could either be high or low. Children judged the relation between the two events to be stronger in the high probability of the outcome setting, revealing that, like adults, they develop causal illusions when the outcome is frequent. PMID:28898294
2010-01-01
Background The search for sickle cell disease (SCD) prognosis biomarkers is a challenge. Identification of these markers can help in establishing further therapy, anticipating later severe clinical complications, and guiding patient follow-up. We attempted to study a possible involvement of levels of high-density lipoprotein cholesterol (HDL-C) in steady-state children with SCD, since this lipid marker has been correlated with anti-inflammatory, anti-oxidative, anti-aggregation, anti-coagulant and pro-fibrinolytic activities, important aspects to be considered in sickle cell disease pathogenesis. Methods We prospectively analyzed biochemical, inflammatory and hematological biomarkers of 152 steady-state infants with SCD and 132 healthy subjects using immunochemistry, immunoassay and an electronic cell counter, respectively. Clinical data were collected from patient medical records. Results Among the 152 infants investigated, high-density lipoprotein cholesterol showed a significant positive association with hemoglobin (P < 0.001), hematocrit (P < 0.001) and total cholesterol (P < 0.001), and a significant negative association with reticulocytes (P = 0.046), leukocytes (P = 0.015), monocytes (P = 0.004) and platelets (P = 0.005), bilirubins [total bilirubin (P < 0.001), direct bilirubin (P < 0.001) and indirect bilirubin (P < 0.001)], iron (P < 0.001), aminotransferases [aspartate aminotransferase (P = 0.004), alanine aminotransferase (P = 0.035)], lactate dehydrogenase (P < 0.001), urea (P = 0.030), alpha 1-antitrypsin (P < 0.001), very low-density lipoprotein cholesterol (P = 0.003), triglycerides (P = 0.005) and hemoglobin S (P = 0.002). Low high-density lipoprotein cholesterol concentration was associated with a history of cardiac abnormalities (P = 0.025), pneumonia (P = 0.033) and blood transfusion use (P = 0.025). Lipids and inflammatory markers were associated with the presence of cholelithiasis. Conclusions We hypothesize that some SCD patients can have a specific dyslipidemic subphenotype characterized by low HDL-C with hypertriglyceridemia and high VLDL-C in association with other biomarkers, including those related to inflammation. This represents an important step toward a more reliable clinical prognosis. Additional studies are warranted to test this hypothesis and the probable mechanisms involved in this complex network of markers and their role in SCD pathogenesis. PMID:20799970
NASA Astrophysics Data System (ADS)
Kogure, Toshihiro; Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Checa, Antonio G.; Sasaki, Takenori; Nagasawa, Hiromichi
2014-07-01
{110} twin density in aragonites constituting various microstructures of molluscan shells has been characterized using X-ray diffraction (XRD) and transmission electron microscopy (TEM), to find the factors that determine the density in the shells. Several aragonite crystals of geological origin were also investigated for comparison. The twin density is strongly dependent on the microstructures and species of the shells. The nacreous structure has a very low twin density regardless of the shell classes. On the other hand, the twin density in the crossed-lamellar (CL) structure varies widely among classes or subclasses, which is mainly related to the crystallographic direction of the constituent aragonite fibers. TEM observation suggests two types of twin structures in aragonite crystals with dense {110} twins: rather regulated polysynthetic twins with parallel twin planes, and unregulated polycyclic ones with two or three directions of the twin planes. The former is probably characteristic of the CL structures of specific subclasses of Gastropoda. The latter type is probably related to the crystal boundaries dominated by (hk0) interfaces in microstructures with preferred orientation of the c-axis, and the twin density is mainly correlated with the crystal size in the microstructures.
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1989-01-01
In 1983 and 1984, the Infrared Astronomical Satellite (IRAS) detected 5,425 stellar objects and measured their infrared spectra. In 1987 a program called AUTOCLASS used Bayesian inference methods to discover the classes present in these data and determine the most probable class of each object, revealing unknown phenomena in astronomy. AUTOCLASS has rekindled the old debate on the suitability of Bayesian methods, which are computationally intensive, interpret probabilities as plausibility measures rather than frequencies, and appear to depend on a subjective assessment of the probability of a hypothesis before the data were collected. Modern statistical methods have, however, recently been shown to also depend on subjective elements. These debates bring into question the whole tradition of scientific objectivity and offer scientists a new way to take responsibility for their findings and conclusions.
QUANTITATIVE EVALUATION OF THE HYPOTHESIS THAT BL LACERTAE OBJECTS ARE QSO REMNANTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borra, E. F.
2014-11-20
We evaluate with numerical simulations the hypothesis that BL Lacertae objects (BLLs) are the remnants of quasi-stellar objects. This hypothesis is based on their highly peculiar redshift evolution. They have a comoving space density that increases with decreasing redshift, contrary to all other active galactic nuclei. We assume that relativistic jets are below detection in young radio-quiet quasars and increase in strength with cosmic time so that they eventually are detected as BLLs. Our numerical simulations fit the observed redshift distributions of BLLs very well. There are strong indications that only the high-synchrotron-peaked BLLs could be QSO remnants.
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
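A minimal sketch of a maximum entropy assignment of the kind described, for discrete outcomes with one mean-value constraint (the energies and the target mean are illustrative): entropy maximization yields p_k proportional to exp(−λE_k), the Maxwell-Boltzmann form mentioned above, with λ fixed by the constraint.

```python
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative outcome "energies"
E_mean_target = 1.2                   # prescribed average as the constraint

def mean_energy(lam):
    """Mean of E under the maxent distribution p_k ~ exp(-lam*E_k)."""
    p = np.exp(-lam * E)
    p /= p.sum()
    return p @ E

# Solve mean_energy(lam) = E_mean_target for the Lagrange multiplier.
lam = brentq(lambda l: mean_energy(l) - E_mean_target, -50.0, 50.0)
p = np.exp(-lam * E); p /= p.sum()
print(lam, p, p @ E)                  # the constraint is reproduced exactly
```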
Educability and Group Differences.
ERIC Educational Resources Information Center
Jensen, Arthur R.
This pivotal analysis of the genetic factor in intelligence and educability argues that those qualities which seem most closely related to educability cannot be accounted for by a traditional environmentalist hypothesis. It is more probable that they have a substantial genetic basis. Educability, as defined in this book, is the ability to learn…
Using the Nobel Laureates in Economics to Teach Quantitative Methods
ERIC Educational Resources Information Center
Becker, William E.; Greene, William H.
2005-01-01
The authors show how the work of Nobel Laureates in economics can enhance student understanding and bring them up to date on topics such as probability, uncertainty and decision theory, hypothesis testing, regression to the mean, instrumental variable techniques, discrete choice modeling, and time-series analysis. (Contains 2 notes.)
Non-Bayesian Inference: Causal Structure Trumps Correlation
ERIC Educational Resources Information Center
Bes, Benedicte; Sloman, Steven; Lucas, Christopher G.; Raufaste, Eric
2012-01-01
The study tests the hypothesis that conditional probability judgments can be influenced by causal links between the target event and the evidence even when the statistical relations among variables are held constant. Three experiments varied the causal structure relating three variables and found that (a) the target event was perceived as more…
ERIC Educational Resources Information Center
Stallings, William M.
In the educational research literature, alpha, the a priori level of significance, and p, the a posteriori probability of obtaining a test statistic of at least a certain value when the null hypothesis is true, are often confused. Explanations for this confusion are offered. Paradoxically, alpha retains a prominent place in textbook discussions of…
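A tiny illustration of the distinction at issue (synthetic data; the test and threshold are generic, not from the article): alpha is fixed before seeing the data, while p is computed from the data afterwards.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05                                   # chosen a priori
sample = rng.normal(loc=0.4, scale=1.0, size=30)
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"p = {p_value:.3f}; reject H0 at alpha={alpha}: {p_value < alpha}")
```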
A Paradigm for the Telephonic Assessment of Suicidal Ideation
ERIC Educational Resources Information Center
Halderman, Brent L.; Eyman, James R.; Kerner, Lisa; Schlacks, Bill
2009-01-01
A three-stage paradigm for telephonically assessing suicidal risk and triaging suicidal callers as practiced in an Employee Assistance Program Call Center was investigated. The first hypothesis was that the use of the procedure would increase the probability that callers would accept the clinician's recommendations, evidenced by fewer police…
Constructing the Exact Significance Level for a Person-Fit Statistic.
ERIC Educational Resources Information Center
Liou, Michelle; Chang, Chih-Hsin
1992-01-01
An extension is proposed for the network algorithm introduced by C.R. Mehta and N.R. Patel to construct exact tail probabilities for testing the general hypothesis that item responses are distributed according to the Rasch model. A simulation study indicates the efficiency of the algorithm. (SLD)
Bayesian Posterior Odds Ratios: Statistical Tools for Collaborative Evaluations
ERIC Educational Resources Information Center
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon
2018-01-01
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
NASA Technical Reports Server (NTRS)
Deal, J. H.
1975-01-01
One approach to simplifying complex nonlinear filtering algorithms is the use of stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
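A minimal sketch of a discrete mass approximation of this general kind (the choice of density, number of strata, and placement rule are illustrative assumptions, not the paper's construction): a standard normal is replaced by K equal-probability point masses at the conditional means of K quantile strata.

```python
import numpy as np
from scipy.stats import norm

K = 8
edges = norm.ppf(np.linspace(0.0, 1.0, K + 1))   # quantile stratum boundaries
# Conditional mean of a standard normal on (a, b]: (phi(a)-phi(b))/(P(b)-P(a));
# with equal-probability strata the denominator is 1/K.
mass_points = (norm.pdf(edges[:-1]) - norm.pdf(edges[1:])) * K
weights = np.full(K, 1.0 / K)

print(mass_points @ weights)        # ~0: the mean is reproduced exactly
print((mass_points**2) @ weights)   # < 1: variance is underestimated; -> 1 as K grows
```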
Skipping of Chinese characters does not rely on word-based processing.
Lin, Nan; Angele, Bernhard; Hua, Huimin; Shen, Wei; Zhou, Junyi; Li, Xingshan
2018-02-01
Previous eye-movement studies have indicated that people tend to skip extremely high-frequency words in sentence reading, such as "the" in English and "的/de" in Chinese. Two alternative hypotheses have been proposed to explain how this frequent skipping happens in Chinese reading: one assumes that skipping happens when the preview has been fully identified at the word level (word-based skipping); the other assumes that skipping happens whenever the preview character is easy to identify, regardless of whether lexical processing has been completed (character-based skipping). Using the gaze-contingent display change paradigm, we examined the two hypotheses by substituting the preview of the third character of a four-character Chinese word with the high-frequency Chinese character "的/de", which should disrupt the ongoing word-level processing. The character-based skipping hypothesis predicts that this manipulation will enhance the skipping probability of the target character (i.e., the third character of the target word), because the character "的/de" has a much higher character frequency than the original character. The word-based skipping hypothesis instead predicts a reduction of the skipping probability of the target character, because the presence of the character "的/de" is lexically infelicitous at the word level. The results supported the character-based skipping hypothesis, indicating that in Chinese reading the decision to skip a character can be made before integrating it into a word.
Spacecraft Collision Avoidance
NASA Astrophysics Data System (ADS)
Bussy-Virat, Charles
The rapid increase of the number of objects in orbit around the Earth poses a serious threat to operational spacecraft and astronauts. In order to effectively avoid collisions, mission operators need to assess the risk of collision between the satellite and any other object whose orbit is likely to approach its trajectory. Several algorithms predict the probability of collision but have limitations that impair the accuracy of the prediction. An important limitation is that uncertainties in the atmospheric density are usually not taken into account in the propagation of the covariance matrix from the current epoch to the closest approach time. The Spacecraft Orbital Characterization Kit (SpOCK) was developed to accurately predict the positions and velocities of spacecraft. The central capability of SpOCK is a high accuracy numerical propagator of spacecraft orbits and computations of ancillary parameters. The numerical integration uses a comprehensive modeling of the dynamics of spacecraft in orbit that includes all the perturbing forces that a spacecraft is subject to in orbit. In particular, the atmospheric density is modeled by thermospheric models to allow for an accurate representation of the atmospheric drag. SpOCK predicts the probability of collision between two orbiting objects taking into account the uncertainties in the atmospheric density. Monte Carlo procedures are used to perturb the initial positions and velocities of the primary and secondary spacecraft from their covariance matrices. Developed in C, SpOCK supports parallelism to quickly assess the risk of collision so it can be used operationally in real time. The upper atmosphere of the Earth is strongly driven by solar activity. In particular, abrupt transitions from slow to fast solar wind cause important disturbances of the atmospheric density, hence of the drag acceleration that spacecraft are subject to. The Probability Distribution Function (PDF) model was developed to predict the solar wind speed five days in advance. In particular, the PDF model is able to predict rapid enhancements in the solar wind speed. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. Ensemble forecasts provide the forecasters with an estimation of the uncertainty in the prediction, which can be used to derive uncertainties in the atmospheric density and in the drag acceleration. The dissertation then demonstrates that uncertainties in the atmospheric density result in large uncertainties in the prediction of the probability of collision. As an example, the effects of a geomagnetic storm on the probability of collision are illustrated. The research aims to provide tools and analyses that help understand and predict the effects of uncertainties in the atmospheric density on the probability of collision. The ultimate motivation is to support mission operators in making the correct decision with regard to a potential collision avoidance maneuver by providing an uncertainty on the prediction of the probability of collision instead of a single value. This approach can help avoid performing unnecessary costly maneuvers, while making sure that the risk of collision is fully evaluated.
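A hedged Monte Carlo sketch of the collision-probability step described above (covariances, nominal miss vector, and hard-body radius are invented, and no orbit propagation is performed): perturb the relative state at closest approach from the combined covariance and count how often the miss distance falls below the combined radius.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200_000
r_combined = 0.010                           # combined hard-body radius (km)
miss_nominal = np.array([0.05, 0.02, 0.0])   # nominal relative position (km)
cov_primary = np.diag([0.03, 0.01, 0.01]) ** 2    # assumed position covariances
cov_secondary = np.diag([0.05, 0.02, 0.02]) ** 2

# The relative-position uncertainty is the sum of the two covariances.
rel = rng.multivariate_normal(miss_nominal, cov_primary + cov_secondary, size=n)
p_collision = (np.linalg.norm(rel, axis=1) < r_combined).mean()
print(f"estimated collision probability: {p_collision:.2e}")
```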
Cartoni, Emilio; Moretta, Tania; Puglisi-Allegra, Stefano; Cabib, Simona; Baldassarre, Gianluca
2015-01-01
Goal-directed behavior is influenced by environmental cues: in particular, cues associated with a reward can bias action choice toward actions directed to that same reward. This effect is studied experimentally as specific Pavlovian-instrumental transfer (specific PIT). We have investigated the hypothesis that cues associated with an outcome elicit specific PIT by raising the estimates of reward probability of actions associated with that same outcome. In other words, cues reduce the uncertainty about the efficacy of instrumental actions. We used a human PIT experimental paradigm to test the effects of two different instrumental contingencies: one group of participants had a 33% chance of being rewarded for each button press, while another had a 100% chance. The group trained with 33% reward probability showed a stronger PIT effect than the 100% group, in line with the hypothesis that Pavlovian cues linked to an outcome work by reducing the uncertainty of receiving it. The 100% group also showed a significant specific PIT effect, highlighting additional factors that could contribute to specific PIT beyond the instrumental training contingency. We hypothesize that the uncertainty about reward delivery due to testing in extinction might be one of these factors. These results add knowledge on how goal-directed behavior is influenced by the presence of environmental cues associated with a reward: such influence depends on the probability we have of reaching the reward, namely, when there is less chance of getting a reward we are more influenced by cues associated with it, and vice versa. PMID:26635645
Brownian Motion with Active Fluctuations
NASA Astrophysics Data System (ADS)
Romanczuk, Pawel; Schimansky-Geier, Lutz
2011-06-01
We study the effect of different types of fluctuations on the motion of self-propelled particles in two spatial dimensions. We distinguish between passive and active fluctuations. Passive fluctuations (e.g., thermal fluctuations) are independent of the orientation of the particle. In contrast, active ones point parallel or perpendicular to the time-dependent orientation of the particle. We derive analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles, which yields an increased probability of low speeds in the presence of active fluctuations in comparison to the case of purely passive fluctuations. As a consequence, we predict sharply peaked Cartesian velocity probability densities at the origin. Finally, we show that such behavior may also occur for non-Gaussian active fluctuations, and we briefly discuss correlations of the fluctuating stochastic forces.
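A minimal Euler-Maruyama sketch of the mechanism described above, under assumed dynamics (linear speed relaxation with noise along the body axis, diffusing heading) and made-up parameter values; it illustrates why active speed fluctuations pile probability onto low Cartesian velocities, and is not the paper's exact model.

import numpy as np

def cartesian_velocities(D_active=0.5, v0=1.0, gamma=1.0, D_phi=0.1,
                         n_steps=200000, dt=0.01, seed=1):
    # Speed v relaxes to v0 and is kicked by active noise along the heading;
    # the heading phi undergoes rotational diffusion.
    rng = np.random.default_rng(seed)
    v, phi = v0, 0.0
    vx = np.empty(n_steps)
    for i in range(n_steps):
        v += gamma * (v0 - v) * dt + np.sqrt(2 * D_active * dt) * rng.standard_normal()
        phi += np.sqrt(2 * D_phi * dt) * rng.standard_normal()
        vx[i] = v * np.cos(phi)
    return vx

# Active speed noise makes low speeds more probable, which shows up as a
# sharp peak of the Cartesian velocity density near the origin.
vx = cartesian_velocities()
hist, edges = np.histogram(vx, bins=np.linspace(-3, 3, 61), density=True)
print(hist[29:31])  # density near vx = 0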
Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; 
Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; 
Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2009-04-17
We report a measurement of the top-quark mass $M_t$ in the dilepton decay channel $t\bar{t} \to b\,\ell'^{+}\nu_{\ell'}\,\bar{b}\,\ell^{-}\bar{\nu}_{\ell}$. Events are selected with a neural network which has been directly optimized for statistical precision in top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading-order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in $2.0\ \mathrm{fb}^{-1}$ of $p\bar{p}$ collisions collected with the CDF II detector, yielding a measurement of $M_t = 171.2 \pm 2.7\,(\mathrm{stat}) \pm 2.9\,(\mathrm{syst})\ \mathrm{GeV}/c^2$.
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or a large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample depends on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We examine the impact on the type I error probabilities of two confidence interval procedures and of procedures using test statistics when the design for the second sample or experiment depends on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate are discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.
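A small Monte Carlo sketch of the phenomenon: the second-stage sample size is chosen from the first-stage result, the pooled data are analyzed with a naive fixed-sample z-test, and the simulated type I error probability exceeds the nominal level. The adaptation rule and sample sizes here are invented for illustration.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
alpha, n1, n_sims = 0.05, 50, 100000
z_crit = norm.ppf(1 - alpha)
rejections = 0
for _ in range(n_sims):
    x1 = rng.standard_normal(n1)            # stage 1 under H0: mean 0, sd 1
    # Data-dependent design: a favorable interim mean triggers a small
    # confirmatory second stage, an unfavorable one a large stage.
    n2 = 25 if x1.mean() > 0 else 200
    x2 = rng.standard_normal(n2)
    x = np.concatenate([x1, x2])            # naive pooled fixed-sample z-test
    if x.mean() * np.sqrt(len(x)) > z_crit:
        rejections += 1
print(rejections / n_sims)  # comes out above the nominal 0.05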
Importance of structural stability to success of mourning dove nests
Coon, R.A.; Nichols, J.D.; Percival, H.F.
1981-01-01
Studies of nest-site selection and nesting habitats often involve a "characterization" of nests and of habitats in which nests are found. Our objective in the present work is to identify nest-site characteristics that are associated with variation in components of Mourning Dove (Zenaida macroura) fitness (e.g. the probability of a nest succeeding), as opposed to simply "characterizing" dove nest sites. If certain nest-site characteristics affect the probability that a nest will succeed, then we suspect that these characteristics will be associated with either concealment (the probability of detection by certain predators) or structural stability (the probability of eggs or entire nests falling to the ground as a result of wind, rain storms, parental activity, etc.). Although other workers agree that structural stability is an important determinant of Mourning Dove nesting success (e.g. McClure 1944: 384; Woolfenden and Rohwer 1969: 59), we are aware of no actual tests of this hypothesis.
Density PDFs of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2012-09-01
The probability distribution functions (PDFs) of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| ≥ 5° are considered separately. Our results provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
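A quick sketch of how such lognormality can be checked: fit a normal distribution to the log of line-of-sight average densities and apply a goodness-of-fit test. The sample below is a synthetic stand-in for real DIG data, and the KS p-value is only approximate when the parameters are estimated from the same data (a Lilliefors-type correction would be needed for an exact test).

import numpy as np
from scipy import stats

# Hypothetical sample of average densities along many lines of sight (cm^-3);
# a lognormal density PDF means log(n) should be approximately normal.
rng = np.random.default_rng(7)
n = rng.lognormal(mean=-1.0, sigma=0.7, size=500)  # stand-in for DIG data

log_n = np.log(n)
mu, sigma = log_n.mean(), log_n.std(ddof=1)

# Kolmogorov-Smirnov test of the log-densities against the fitted normal.
stat, p_value = stats.kstest(log_n, "norm", args=(mu, sigma))
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")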
Cancer immunotherapy by immunosuppression
2010-01-01
We have previously suggested that the stimulatory effect of a weak immune reaction on tumor growth may be necessary for the growth of incipient tumors. In the present paper, we enlarge upon and extend that idea by collecting evidence in the literature bearing upon this new hypothesis that a growing cancer, whether in man or mouse, is throughout its lifespan, probably growing and progressing because of continued immune stimulation by a weak immune reaction. We also suggest that prolonged immunosuppression might interfere with progression and thus be an aid to therapy. While most of the considerable evidence that supports the hypothesis comes from observations of experimental mouse tumors, there is suggestive evidence that human tumors may behave in much the same way, and as far as we can ascertain, there is no present evidence that necessarily refutes the hypothesis. PMID:21159199
Dynamic test input generation for multiple-fault isolation
NASA Technical Reports Server (NTRS)
Schaefer, Phil
1990-01-01
Recent work in causal reasoning has provided practical techniques for multiple-fault diagnosis. These techniques provide a hypothesis/measurement diagnosis cycle: using probabilistic methods, they choose the best measurements to make, then update fault hypotheses in response. For many applications, such as computers and spacecraft, few measurement points may be accessible, or values may change quickly as the system under diagnosis operates. In these cases, a hypothesis/measurement cycle is insufficient. A technique is presented for a hypothesis/test-input/measurement diagnosis cycle. In contrast to generating tests a priori for determining device functionality, it dynamically generates tests in response to current knowledge about fault probabilities. It is shown how the mathematics previously used for measurement specification can be applied to the test-input generation process. An example from an efficient implementation called Multi-Purpose Causal (MPC) is presented.
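A sketch of the probabilistic test-selection idea, not MPC's actual implementation: given a prior over fault hypotheses and assumed observation likelihoods for each candidate test input, choose the input with the greatest expected reduction in hypothesis entropy. All numbers below are hypothetical.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_test(prior, likelihoods):
    # prior: P(fault_i), shape (F,)
    # likelihoods: P(obs_k | fault_i, test_j), shape (T, F, K)
    h0 = entropy(prior)
    gains = []
    for lik in likelihoods:                      # one candidate test at a time
        p_obs = lik.T @ prior                    # P(obs_k | test)
        h_post = 0.0
        for k, p_k in enumerate(p_obs):
            if p_k == 0:
                continue
            posterior = lik[:, k] * prior / p_k  # Bayes update on that outcome
            h_post += p_k * entropy(posterior)
        gains.append(h0 - h_post)                # expected information gain
    return int(np.argmax(gains))

# Toy example: 3 fault hypotheses, 2 candidate test inputs, binary observation.
prior = np.array([0.5, 0.3, 0.2])
likelihoods = np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.2, 0.8]],   # test 0 separates fault 0
    [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]],   # test 1 is uninformative
])
print(best_test(prior, likelihoods))  # -> 0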
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
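As a concrete one-parameter sketch (the standard normal-mean case, with the constants following from maximizing the probability that the Bayes factor exceeds the threshold; stated here from the general definition above rather than quoted from the article): for $X_1,\dots,X_n \sim N(\mu,\sigma^2)$ with known $\sigma$ and a one-sided test of $H_0\colon \mu=\mu_0$ against $H_1\colon \mu=\mu_1$,

\[
  \mathrm{BF}_{10}(\bar{x}) > \gamma
  \;\Longleftrightarrow\;
  \bar{x} > \frac{\mu_0+\mu_1}{2} + \frac{\sigma^{2}\log\gamma}{n(\mu_1-\mu_0)} .
\]

Minimizing the right-hand side over $\mu_1$ maximizes $P(\mathrm{BF}_{10} > \gamma)$ for every value of the true mean simultaneously, giving the UMPBT alternative and rejection region

\[
  \mu_1 = \mu_0 + \sigma\sqrt{\frac{2\log\gamma}{n}},
  \qquad
  z = \frac{\sqrt{n}(\bar{x}-\mu_0)}{\sigma} > \sqrt{2\log\gamma}.
\]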
Hypothesis Testing as an Act of Rationality
NASA Astrophysics Data System (ADS)
Nearing, Grey
2017-04-01
Statistical hypothesis testing is ad hoc in two ways. First, setting probabilistic rejection criteria is, as Neyman (1957) put it, an act of will rather than an act of rationality. Second, physical theories like conservation laws do not inherently admit probabilistic predictions, and so we must use what are called epistemic bridge principles to connect model predictions with the actual methods of hypothesis testing. In practice, these bridge principles are likelihood functions, error functions, or performance metrics. I propose that the reason we are faced with these problems is that we have historically failed to account for a fundamental component of basic logic, namely the portion of logic that explains how epistemic states evolve in the presence of empirical data. This component of Cox's (1946) logic is called information theory (Knuth, 2005), and adding information theory to our hypothetico-deductive account of science yields straightforward solutions to both of the above problems. It also yields a straightforward method for dealing with Popper's (1963) problem of verisimilitude by facilitating a quantitative approach to measuring process isomorphism; in practice, this involves data assimilation. Finally, information theory allows us to reliably bound measures of epistemic uncertainty, thereby avoiding the problem of Bayesian incoherency under misspecified priors (Grünwald, 2006). I therefore propose solutions to four of the fundamental problems inherent in hypothetico-deductive and/or Bayesian hypothesis testing. - Neyman (1957) Inductive Behavior as a Basic Concept of Philosophy of Science. - Cox (1946) Probability, Frequency and Reasonable Expectation. - Knuth (2005) Lattice Duality: The Origin of Probability and Entropy. - Grünwald (2006) Bayesian Inconsistency under Misspecification. - Popper (1963) Conjectures and Refutations: The Growth of Scientific Knowledge.
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about the factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm²), distribution of benthic items, and item density affected the bias and precision of density estimates, the detection probability of items, and the time costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m²). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single optimal sample size, but we provide information for researchers and managers to derive optimal sample sizes depending on their research goals and environmental conditions.
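A sketch of the simulation logic described above, with circles standing in for core samplers and points for benthic items; the plot size, core area, and clumping parameters are invented for illustration.

import numpy as np

def simulate_cores(density, clumped, core_area_cm2=50.0, n_cores=30,
                   plot_m=10.0, seed=3):
    # Estimate detection probability and density from simulated core samples.
    rng = np.random.default_rng(seed)
    n_items = rng.poisson(density * plot_m ** 2)
    if clumped:
        # Clumped pattern: items scattered tightly around a few parent points.
        parents = rng.uniform(0, plot_m, size=(max(n_items // 20, 1), 2))
        centers = parents[rng.integers(len(parents), size=n_items)]
        pts = np.mod(centers + rng.normal(scale=0.2, size=(n_items, 2)), plot_m)
    else:
        pts = rng.uniform(0, plot_m, size=(n_items, 2))

    r = np.sqrt(core_area_cm2 / 1e4 / np.pi)       # core radius in metres
    counts = []
    for _ in range(n_cores):
        c = rng.uniform(r, plot_m - r, size=2)
        counts.append(np.sum(np.linalg.norm(pts - c, axis=1) < r))
    counts = np.array(counts)
    detection = np.mean(counts >= 1)               # P(>= 1 item per core)
    est_density = counts.mean() / (core_area_cm2 / 1e4)
    return detection, est_density

for pattern in (False, True):
    d, est = simulate_cores(density=1000, clumped=pattern)
    print(f"clumped={pattern}: detection={d:.2f}, density est={est:.0f}/m^2")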
2012-01-01
Background: Oviposition-site choice is an essential component of the life history of all mosquito species. According to the oviposition-preference offspring-performance (P-P) hypothesis, if optimizing offspring performance and fitness ensures high overall reproductive fitness for a given species, the female should accurately assess details of the heterogeneous environment and lay her eggs preferentially in sites with conditions more suitable to offspring.
Methods: We empirically tested the P-P hypothesis using the mosquito species Aedes albopictus by artificially manipulating two habitat conditions: diet (measured as mg of food added to a container) and conspecific density (CD; number of pre-existing larvae of the same species). Immature development (larval mortality, development time to pupation and time to emergence) and fitness (measured as wing length) were monitored from first instar through adult emergence using a factorial experimental design over two ascending gradients of diet (2.0, 3.6, 7.2 and 20 mg food/300 ml water) and CD (0, 20, 40 and 80 larvae/300 ml water). Treatments that exerted the most contrasting values of larval performance were recreated in a second experiment consisting of a single-female oviposition site selection assay.
Results: Development time decreased as food concentration increased, except from 7.2 mg to 20.0 mg (Two-Way CR ANOVA post-hoc test, P > 0.1). Development time also decreased as conspecific density increased from zero to 80 larvae (Two-Way CR ANOVA post-hoc test, P < 0.5). Combined, these results support the role of density-dependent competition for resources as a limiting factor for mosquito larval performance. Oviposition assays indicated that female mosquitoes select for larval habitats with conspecifics and that larval density was more important than diet in driving selection for oviposition sites.
Conclusions: This study supports predictions of the P-P hypothesis and provides a mechanistic understanding of the underlying factors driving mosquito oviposition site selection. PMID:23044004
Acceleration of exotic plant invasion in a forested ecosystem by a generalist herbivore.
Eschtruth, Anne K; Battles, John J
2009-04-01
The successful invasion of exotic plants is often attributed to the absence of coevolved enemies in the introduced range (i.e., the enemy release hypothesis). Nevertheless, several components of this hypothesis, including the role of generalist herbivores, remain relatively unexplored. We used repeated censuses of exclosures and paired controls to investigate the role of a generalist herbivore, white-tailed deer (Odocoileus virginianus), in the invasion of 3 exotic plant species (Microstegium vimineum, Alliaria petiolata, and Berberis thunbergii) in eastern hemlock (Tsuga canadensis) forests in New Jersey and Pennsylvania (U.S.A.). This work was conducted in 10 eastern hemlock (T. canadensis) forests that spanned gradients in deer density and in the severity of canopy disturbance caused by an introduced insect pest, the hemlock woolly adelgid (Adelges tsugae). We used maximum likelihood estimation and information theoretics to quantify the strength of evidence for alternative models of the influence of deer density and its interaction with the severity of canopy disturbance on exotic plant abundance. Our results were consistent with the enemy release hypothesis in that exotic plants gained a competitive advantage in the presence of generalist herbivores in the introduced range. The abundance of all 3 exotic plants increased significantly more in the control plots than in the paired exclosures. For all species, the inclusion of canopy disturbance parameters resulted in models with substantially greater support than the deer density only models. Our results suggest that white-tailed deer herbivory can accelerate the invasion of exotic plants and that canopy disturbance can interact with herbivory to magnify the impact. In addition, our results provide compelling evidence of nonlinear relationships between deer density and the impact of herbivory on exotic species abundance. These findings highlight the important role of herbivore density in determining impacts on plant abundance and provide evidence of the operation of multiple mechanisms in exotic plant invasion.
Fruits and Vegetables Displace, But Do Not Decrease, Total Energy in School Lunches
Schoeller, Dale A.
2014-01-01
Background: The high overweight and obesity prevalence among US children is a well-established public health concern. Diet is known to play a causal role in obesity. Increasing fruit and vegetable (FV) consumption to recommended levels is proposed to help reduce obesity, because their bulk and low energy density are believed to reduce energy-dense food consumption (the volume displacement hypothesis). This study tests this hypothesis at the lunch meal among upper-elementary students participating in a Farm to School (F2S) program.
Methods: Digital photographs of students' school lunch trays were visually analyzed to identify the food items and amounts present and consumed before and after the meal. Using the USDA Nutrient Database, total and FV-only energy were calculated for each tray. Analysis of total and non-FV energy intake was performed according to (1) levels of FV energy intake, (2) FV energy density, and (3) previous years of Farm to School programming.
Results: Higher intake of FV energy displaced non-FV energy, but total energy did not decrease across FV energy intake groups. High-FV-energy-density trays showed lower non-FV energy intake than low-FV-energy-density trays (470±179 vs. 534±219 kcal; p<0.0001). Trays from schools with more previous years of F2S programming showed decreased total and non-FV energy intake from school lunches (p for trend <0.0001 for both).
Conclusions: Increased FV consumption reduces non-FV energy intake but does not reduce total energy intake. Therefore, this study does not support the volume displacement hypothesis and suggests calorie displacement instead. PMID:24988122
Lantz, Van; Martínez-Espiñeira, Roberto
2008-04-01
The traditional environmental Kuznets curve (EKC) hypothesis postulates that environmental degradation follows an inverted U-shaped relationship with gross domestic product (GDP) per capita. We tested the EKC hypothesis with bird populations in 5 different habitats as environmental quality indicators. Because birds are considered environmental goods, for them the EKC hypothesis would instead be associated with a U-shaped relationship between bird populations and GDP per capita. In keeping with the literature, we included other variables in the analysis, namely human population density and time-index variables (the latter captured the impact of persistent and exogenous climate and/or policy changes on bird populations over time). Using data from 9 Canadian provinces gathered over 37 years, we used a generalized least-squares regression for each bird habitat type, which accounted for the panel structure of the data, the cross-sectional dependence across provinces in the residuals, heteroskedasticity, and fixed- or random-effect specifications of the models. We found evidence supporting the EKC hypothesis for 3 of the 5 bird population habitat types. In addition, the relationship between human population density and the different bird populations varied, which emphasizes the complex nature of the impact that human populations have on the environment. The relationship between the time-index variable and the different bird populations also varied, indicating that there are other persistent and significant influences on bird populations over time. Overall, our EKC results were consistent with those found for threatened bird species, indicating that economic prosperity does indeed act to benefit some bird populations.
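A sketch of the U-shape test on synthetic panel-like data. The paper uses generalized least squares with panel corrections; this illustration uses plain OLS, and all variable values are invented.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 9 * 37                                   # 9 provinces x 37 years
gdp = rng.uniform(20, 60, size=n)            # GDP per capita (thousands)
pop = rng.uniform(1, 25, size=n)             # human population density
# Synthetic bird index with a built-in U-shape in GDP.
birds = 100 - 3.0 * gdp + 0.04 * gdp ** 2 - 0.5 * pop + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([gdp, gdp ** 2, pop]))
fit = sm.OLS(birds, X).fit()
b1, b2 = fit.params[1], fit.params[2]
# A U-shape (the EKC for an environmental good) requires b1 < 0 and b2 > 0,
# with the turning point -b1 / (2 * b2) inside the observed GDP range.
print(fit.params)
print("turning point:", -b1 / (2 * b2))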
The Butterflies of Barro Colorado Island, Panama: Local Extinction since the 1930s
Basset, Yves; Barrios, Héctor; Segar, Simon; Srygley, Robert B.; Aiello, Annette; Warren, Andrew D.; Delgado, Francisco; Coronado, James; Lezcano, Jorge; Arizala, Stephany; Rivera, Marleny; Perez, Filonila; Bobadilla, Ricardo; Lopez, Yacksecari; Ramirez, José Alejandro
2015-01-01
Few data are available about the regional or local extinction of tropical butterfly species. When confirmed, local extinction was often due to the loss of host-plant species. We used published lists and recent monitoring programs to evaluate changes in butterfly composition on Barro Colorado Island (BCI, Panama) between an old (1923–1943) and a recent (1993–2013) period. Although 601 butterfly species have been recorded from BCI during the 1923–2013 period, we estimate that 390 species are currently breeding on the island, including 34 cryptic species, currently only known by their DNA Barcode Index Number. Twenty-three butterfly species that were considered abundant during the old period could not be collected during the recent period, despite a much higher sampling effort in recent times. We consider these species locally extinct from BCI and they conservatively represent 6% of the estimated local pool of resident species. Extinct species represent distant phylogenetic branches and several families. The butterfly traits most likely to influence the probability of extinction were host growth form, wing size and host specificity, independently of the phylogenetic relationships among butterfly species. On BCI, most likely candidates for extinction were small hesperiids feeding on herbs (35% of extinct species). However, contrary to our working hypothesis, extinction of these species on BCI cannot be attributed to loss of host plants. In most cases these host plants remain extant, but they probably subsist at lower or more fragmented densities. Coupled with low dispersal power, this reduced availability of host plants has probably caused the local extinction of some butterfly species. Many more bird than butterfly species have been lost from BCI recently, confirming that small preserves may be far more effective at conserving invertebrates than vertebrates and, therefore, should not necessarily be neglected from a conservation viewpoint. PMID:26305111
A Learning-Based Approach for IP Geolocation
NASA Astrophysics Data System (ADS)
Eriksson, Brian; Barford, Paul; Sommers, Joel; Nowak, Robert
The ability to pinpoint the geographic location of IP hosts is compelling for applications such as on-line advertising and network attack diagnosis. While prior methods can accurately identify the location of hosts in some regions of the Internet, they produce erroneous results when the delay or topology measurement on which they are based is limited. The hypothesis of our work is that the accuracy of IP geolocation can be improved through the creation of a flexible analytic framework that accommodates different types of geolocation information. In this paper, we describe a new framework for IP geolocation that reduces to a machine-learning classification problem. Our methodology considers a set of lightweight measurements from a set of known monitors to a target, and then classifies the location of that target based on the most probable geographic region given probability densities learned from a training set. For this study, we employ a Naive Bayes framework that has low computational complexity and enables additional environmental information to be easily added to enhance the classification process. To demonstrate the feasibility and accuracy of our approach, we test IP geolocation on over 16,000 routers given ping measurements from 78 monitors with known geographic placement. Our results show that the simple application of our method improves geolocation accuracy for over 96% of the nodes identified in our data set, with on average accuracy 70 miles closer to the true geographic location versus prior constraint-based geolocation. These results highlight the promise of our method and indicate how future expansion of the classifier can lead to further improvements in geolocation accuracy.
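A self-contained sketch of the classification step: a hand-rolled Gaussian Naive Bayes over per-monitor delay features. The monitor count, feature choice, and training data are hypothetical, and the paper's framework also folds in additional environmental information.

import numpy as np

class NaiveBayesGeo:
    # Gaussian Naive Bayes over per-monitor delay measurements.
    def fit(self, delays, regions):
        # delays: (n_targets, n_monitors) RTTs; regions: region id per target.
        self.classes = np.unique(regions)
        self.mu, self.var, self.prior = {}, {}, {}
        for c in self.classes:
            d = delays[regions == c]
            self.mu[c] = d.mean(axis=0)
            self.var[c] = d.var(axis=0) + 1e-6   # guard against zero variance
            self.prior[c] = len(d) / len(delays)
        return self

    def predict(self, delay_vec):
        # Most probable region given independent Gaussian densities per monitor.
        def log_post(c):
            ll = -0.5 * np.sum(np.log(2 * np.pi * self.var[c])
                               + (delay_vec - self.mu[c]) ** 2 / self.var[c])
            return ll + np.log(self.prior[c])
        return max(self.classes, key=log_post)

# Toy usage: 2 regions, 3 monitors, synthetic RTTs (ms).
rng = np.random.default_rng(5)
train = np.vstack([rng.normal([20, 80, 60], 5, (50, 3)),
                   rng.normal([70, 30, 40], 5, (50, 3))])
labels = np.array([0] * 50 + [1] * 50)
model = NaiveBayesGeo().fit(train, labels)
print(model.predict(np.array([22, 75, 58])))  # -> 0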
NASA Astrophysics Data System (ADS)
Benda, L. E.
2009-12-01
Stochastic geomorphology refers to the interaction of the stochastic field of sediment supply with hierarchically branching river networks, where erosion, sediment flux, and sediment storage are described by their probability densities. There are a number of general principles (hypotheses) that stem from this conceptual and numerical framework and that may inform the science of erosion and sedimentation in river basins. Rainstorms and other perturbations, characterized by probability distributions of event frequency and magnitude, stochastically drive sediment influx to channel networks. The frequency-magnitude distribution of sediment supply, which is typically skewed, reflects strong interactions among climate, topography, vegetation, and geotechnical controls that vary between regions; the distribution varies systematically with basin area and the spatial pattern of erosion sources. Probability densities of sediment flux and storage evolve from more to less skewed forms downstream in river networks due to the convolution of the population of sediment sources in a watershed, which should vary with climate, network patterns, topography, spatial scale, and degree of erosion asynchrony. The sediment flux and storage distributions are also transformed downstream due to diffusion, storage, interference, and attrition. In stochastic systems, the characteristically pulsed sediment supply and transport can create translational or stationary-diffusive valley and channel depositional landforms, the geometries of which are governed by sediment flux-network interactions. Episodic releases of sediment to the network can also drive a system memory, reflected in a Hurst effect in sediment yields and thus in sedimentological records. Similarly, discrete events of punctuated erosion on hillslopes can lead to altered surface and subsurface properties of a population of erosion source areas that can echo through time and affect subsequent erosion and sediment flux rates. Spatial patterns of probability densities have implications for the frequency and magnitude of sediment transport and storage, and thus for the formation of alluvial and colluvial landforms throughout watersheds. For instance, the combination and interference of probability densities of sediment flux at confluences creates patterns of riverine heterogeneity, including standing waves of sediment with associated age distributions of deposits that can vary from younger to older depending on network geometry and position. Although the watershed world of probability densities is rarefied and typically confined to research endeavors, it has real-world implications for day-to-day work on hillslopes and in fluvial systems, including measuring erosion and sediment transport, mapping channel morphology and aquatic habitats, interpreting deposit stratigraphy, conducting channel restoration, and applying environmental regulations. A question for the geomorphology community is whether the stochastic framework is useful for advancing our understanding of erosion and sedimentation and whether it should stimulate research to further develop, refine, and test these and other principles. For example, a changing climate should lead to shifts in probability densities of erosion, sediment flux, storage, and associated habitats, and thus provide a useful index of climate change in earth science forecast models.
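A toy illustration of the convolution claim above: if flux at a network node aggregates many skewed, episodic tributary inputs, the aggregate distribution becomes progressively less skewed downstream. The distribution family and parameters are assumptions, not field-calibrated values.

import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
for n_sources in (1, 4, 16, 64):
    # Each source delivers lognormally distributed (highly skewed) inputs;
    # summing over more sources mimics moving down the network.
    flux = rng.lognormal(mean=0.0, sigma=1.2,
                         size=(100000, n_sources)).sum(axis=1)
    print(f"{n_sources:3d} sources: skewness = {stats.skew(flux):.2f}")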
Rare targets are less susceptible to attention capture once detection has begun.
Hon, Nicholas; Ng, Gavin; Chan, Gerald
2016-04-01
Rare or low probability targets are detected more slowly and/or less accurately than higher probability counterparts. Various proposals have implicated perceptual and response-based processes in this deficit. Recent evidence, however, suggests that it is attentional in nature, with low probability targets requiring more attentional resources than high probability ones to detect. This difference in attentional requirements, in turn, suggests the possibility that low and high probability targets may have different susceptibilities to attention capture, which is also known to be resource-dependent. Supporting this hypothesis, we found that, once attentional resources have begun to be engaged by detection processes, low, but not high, probability targets have a reduced susceptibility to capture. Our findings speak to several issues. First, they indicate that the likelihood of attention capture occurring when a given task-relevant stimulus is being processed is dependent, to some extent, on how said stimulus is represented within mental task sets. Second, they provide added support for the idea that the behavioural deficit associated with low probability targets is attention-based. Finally, the current data point to reduced top-down biasing of target templates as a likely mechanism underlying the attentional locus of the deficit in question.
NASA Astrophysics Data System (ADS)
Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.
2005-05-01
Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma-Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents, and fallout events that occurred in the volcanological history of the two volcanic areas, and of the evaluated probability of each type of event, matrices of input parameters for numerical simulation have been constructed. The multi-dimensional input matrices include the main parameters controlling pyroclast transport, deposition, and dispersion, as well as the set of possible eruptive vents used in the simulation program. The probabilistic hazard maps give, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events in one thousand years are typical of most areas within a range of about 10 km of the volcanoes, including Naples. The results provide constraints for the emergency plans in the Neapolitan area.
Coulomb Impurity Potential RbCl Quantum Pseudodot Qubit
NASA Astrophysics Data System (ADS)
Ma, Xin-Jun; Qi, Bin; Xiao, Jing-Lin
2015-08-01
By employing a variational method of the Pekar type, we study the eigenenergies and the corresponding eigenfunctions of the ground and first-excited states of an electron strongly coupled to LO phonons in a RbCl quantum pseudodot (QPD) with a hydrogen-like impurity at the center. This QPD system may be used as a two-level quantum qubit. Expressions for the electron's probability density versus time and the coordinates, and for the oscillating period versus the Coulombic impurity potential and the polaron radius, have been derived. The results indicate ① that the probability density of the electron oscillates in the QPD with a definite period, ② that, due to the presence of the asymmetrical potential in the z direction of the RbCl QPD, the electron probability density shows a double-peak configuration, whereas there is only one peak if the confinement is a two-dimensional symmetric structure in the xy plane of the QPD, and ③ that the oscillation period is a decreasing function of the Coulombic impurity potential, whereas it is an increasing one of the polaron radius.
The atmospheric electric global circuit. [thunderstorm activity
NASA Technical Reports Server (NTRS)
Kasemir, H. W.
1979-01-01
The hypothesis that world thunderstorm activity represents the generator for the atmospheric electric current flow in the earth atmosphere between ground and the ionosphere is based on a close correlation between the magnitude and the diurnal variation of the supply current (thunderstorm generator current) and the load current (fair weather air-earth current density integrated over the earth surface). The advantages of using lightning survey satellites to furnish a base for accepting or rejecting the thunderstorm generator hypothesis are discussed.