Sample records for high probability density

  1. The Effect of Incremental Changes in Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose: Phonotactic probability or neighborhood density has predominantly been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…

  2. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists underfitting and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  3. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists underfitting and overfitting the data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
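The estimator described in records 2 and 3 merges maximum entropy with order statistics; as a far simpler baseline for the same task (estimating a PDF from samples alone), here is a minimal Gaussian kernel density estimate. This is not the authors' method; Silverman's bandwidth rule and all numbers are illustrative.

```python
import numpy as np

def gaussian_kde(samples, grid):
    """Minimal Gaussian kernel density estimate with Silverman's bandwidth."""
    n = len(samples)
    h = 1.06 * np.std(samples) * n ** (-1 / 5)  # Silverman's rule of thumb
    # Sum one Gaussian kernel per sample at each grid point.
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=2000)
grid = np.linspace(-5.0, 5.0, 501)
pdf = gaussian_kde(samples, grid)

# A valid density estimate should be nonnegative and integrate to ~1.
print(round(np.trapz(pdf, grid), 3))
```

Nonparametric methods like this need no distributional assumptions, which is the property the records above push much further (automatic bandwidth-free scoring, heavy tails, discontinuities).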

  4. The effect of incremental changes in phonotactic probability and neighborhood density on word learning by preschool children

    PubMed Central

    Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon

    2013-01-01

    Purpose Phonotactic probability or neighborhood density has predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multi-level modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the midlow quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density. Here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005

  5. Competition between harvester ants and rodents in the cold desert

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.

    1979-09-30

    Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August; whereas, the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.

  6. Evaluating detection probabilities for American marten in the Black Hills, South Dakota

    USGS Publications Warehouse

    Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.

    2007-01-01

    Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (≤1 marten/10.2 km2) densities within eight 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
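The per-survey detection probabilities reported above imply how many repeated surveys are needed before absence can be asserted with confidence. A short sketch of the standard repeat-survey arithmetic (the 95% target confidence is an illustrative assumption, not from the study):

```python
import math

def prob_missed(p, k):
    """Probability a resident marten is never detected in k independent
    surveys, each with per-survey detection probability p."""
    return (1 - p) ** k

def surveys_needed(p, target=0.95):
    """Smallest k such that cumulative detection probability reaches target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# Per-survey detection probabilities reported in the abstract.
p_high, p_low = 0.952, 0.333

print(surveys_needed(p_high))  # surveys needed in high-density quadrats
print(surveys_needed(p_low))   # surveys needed in low-density quadrats
```

This is exactly why the record recommends repeated site-survey data: at p = 0.333 a single survey misses a resident animal two times out of three.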

  7. A wave function for stock market returns

    NASA Astrophysics Data System (ADS)

    Ataullah, Ali; Davidson, Ian; Tippett, Mark

    2009-02-01

    The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but where there is a non-trivial probability of the particle tunneling into the well’s retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the resulting wave function leads to a probability density which exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent - in contrast to many of the probability density functions on which the received theory of finance is based.
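For reference, the textbook infinite square well without the paper's tunneling modification has stationary probability densities |psi_n(x)|^2 = (2/L) sin^2(n*pi*x/L); a short numerical check (well width L = 1 is arbitrary) that each integrates to 1:

```python
import numpy as np

L = 1.0                       # well width (arbitrary units)
x = np.linspace(0.0, L, 2001)

def density(n):
    """Probability density |psi_n|^2 for the standard infinite square well."""
    psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
    return psi ** 2

for n in (1, 2, 3):
    # Each stationary-state density integrates to 1 over the well.
    print(n, round(np.trapz(density(n), x), 3))
```

The paper's contribution is allowing leakage into the retaining walls, which fattens the tails of the return density relative to this textbook case.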

  8. Simple gain probability functions for large reflector antennas of JPL/NASA

    NASA Technical Reports Server (NTRS)

    Jamnejad, V.

    2003-01-01

    Simple models for the patterns, as well as the cumulative gain probability and probability density functions, of the Deep Space Network antennas are developed. These are needed for the study and evaluation of interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone Station of the Deep Space Network.

  9. The Effect of Phonotactic Probability and Neighbourhood Density on Pseudoword Learning in 6- and 7-Year-Old Children

    ERIC Educational Resources Information Center

    van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.

    2016-01-01

    The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…

  10. Car accidents induced by a bottleneck

    NASA Astrophysics Data System (ADS)

    Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid

    2017-12-01

    Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents occurring (Pac) at the entrance of the merging section of two roads (i.e., a junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, whereas it increases road safety at high densities. The phase diagram of the system is also constructed.
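The NS cellular automaton underlying this study can be sketched compactly. The toy below is a plain ring road (no junction or merging, so no accident statistics), with invented parameters; it only illustrates the four NaSch update rules: acceleration, braking to the gap, random slowdown, and movement.

```python
import random

def nasch_step(pos, vel, length, vmax, p_slow, rng):
    """One parallel update of the Nagel-Schreckenberg automaton on a ring.

    pos: car cell indices in cyclic order; vel: their speeds."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % length  # empty cells ahead
        v = min(vel[i] + 1, vmax)        # 1) acceleration
        v = min(v, gap)                  # 2) braking (never overtake)
        if v > 0 and rng.random() < p_slow:
            v -= 1                       # 3) random slowdown
        new_vel.append(v)
    new_pos = [(pos[i] + new_vel[i]) % length for i in range(n)]
    return new_pos, new_vel

rng = random.Random(42)
length, vmax, n_cars = 50, 5, 10
pos = sorted(rng.sample(range(length), n_cars))
vel = [0] * n_cars
for _ in range(100):
    pos, vel = nasch_step(pos, vel, length, vmax, 0.3, rng)

# Braking to the gap guarantees no two cars ever share a cell.
print(len(set(pos)), max(vel) <= vmax)
```

Because rule 2 caps speed at the available gap, the basic model is collision-free; accident probabilities such as Pac are obtained by asking how often the safety conditions would be violated under driver variants like the non-cooperative drivers studied here.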

  11. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Technical Reports Server (NTRS)

    Kastner, S. O.; Bhatia, A. K.

    1980-01-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  12. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    NASA Astrophysics Data System (ADS)

    Kastner, S. O.; Bhatia, A. K.

    1980-08-01

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
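The "taboo" probabilities these two records invoke come from the theory of absorbing Markov chains, where expected visit counts follow from the fundamental matrix N = (I - Q)^-1. A generic toy chain (not the Fe XV level scheme; the matrix entries are made up):

```python
import numpy as np

# Transient-to-transient transition matrix Q for a toy 3-level system.
# Rows sum to < 1; the remainder is the per-step probability of leaving
# the transient set (e.g. radiative decay to the ground state).
Q = np.array([
    [0.0, 0.3, 0.1],
    [0.2, 0.0, 0.2],
    [0.1, 0.4, 0.0],
])

# Fundamental matrix: N[i, j] = expected number of visits to level j
# before absorption, starting from level i.
N = np.linalg.inv(np.eye(3) - Q)

# Sanity check: absorption is certain from every starting level.
leave = 1.0 - Q.sum(axis=1)   # per-step probability of leaving
print(np.allclose(N @ leave, 1.0))
```

The papers' total probabilities t(ij) play an analogous bookkeeping role, connecting level populations and level widths through sums over all indirect paths between levels.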

  13. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower (but not too low) densities for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
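Because P(F) ∝ exp(-AF²) normalizes to a zero-mean Gaussian with variance 1/(2A), the constant A can be recovered from the sample variance of measured forces. A quick numerical check under that assumption (the value A = 2.5 is arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

A_true = 2.5                          # chosen for illustration only
sigma = 1.0 / np.sqrt(2.0 * A_true)   # P(F) ∝ exp(-A F²) is N(0, 1/(2A))
forces = rng.normal(0.0, sigma, size=200_000)

# Recover A from the sample variance: Var(F) = 1/(2A)  =>  A = 1/(2 Var).
A_est = 1.0 / (2.0 * forces.var())
print(round(A_est, 2))
```

This variance relation is also how one would compare a simulated force distribution against the DFT prediction for A, as the paper does against molecular dynamics.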

  14. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions

    PubMed Central

    Storkel, Holly L.; Lee, Jaehoon; Cox, Casey

    2016-01-01

    Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276

  15. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions.

    PubMed

    Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey

    2016-11-01

    Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.

  16. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Horn, J. E.; Harter, T.

    2011-06-01

    Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, and thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability that a well pumps water partially composed of septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield, depending on aquifer properties, lot size, and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and those being harmful even in low concentrations.

  17. Assessing hypotheses about nesting site occupancy dynamics

    USGS Publications Warehouse

    Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle

    2011-01-01

    Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success.
We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.

  18. Domestic wells have high probability of pumping septic tank leachate

    NASA Astrophysics Data System (ADS)

    Bremer, J. E.; Harter, T.

    2012-08-01

    Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).

  19. Predictions of malaria vector distribution in Belize based on multispectral satellite data.

    PubMed

    Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J

    1996-03-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  20. Predictions of malaria vector distribution in Belize based on multispectral satellite data

    NASA Technical Reports Server (NTRS)

    Roberts, D. R.; Paris, J. F.; Manguin, S.; Harbach, R. E.; Woodruff, R.; Rejmankova, E.; Polanco, J.; Wullschleger, B.; Legters, L. J.

    1996-01-01

    Use of multispectral satellite data to predict arthropod-borne disease trouble spots is dependent on clear understandings of environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for presence of An. pseudopunctipennis were positive and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.

  1. Improving effectiveness of systematic conservation planning with density data.

    PubMed

    Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant

    2015-08-01

    Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
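A density-based prioritization like the one evaluated here can be caricatured by a greedy ranking: score each cell by its per-species-normalized predicted density and protect the top fraction. This is a hypothetical stand-in, not the Zonation algorithm, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical predicted densities (individuals/cell) for 3 species on a
# 100-cell landscape.
density = rng.gamma(shape=2.0, scale=1.0, size=(3, 100))

# Greedy surrogate for a density-based ranking: score each cell by its
# summed per-species-normalised density, then protect the top 20%.
score = (density / density.sum(axis=1, keepdims=True)).sum(axis=0)
top = np.argsort(score)[::-1][:20]

# Fraction of each species' total individuals inside the protected cells.
protected = density[:, top].sum(axis=1) / density.sum(axis=1)
print(np.round(protected, 2))
```

The per-species normalization is the key design choice: without it, the ranking is dominated by whichever species happens to be most abundant, which is one reason occurrence- and density-based networks diverge.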

  2. Approved Methods and Algorithms for DoD Risk-Based Explosives Siting

    DTIC Science & Technology

    2007-02-02

    glass. Pgha: probability of a person being in the glass hazard area. Phit: probability of hit. Phit(f): probability of hit for fatality. Phit(maj): probability of hit for major injury. Phit(min): probability of hit for minor injury. Pi: debris probability densities at the ES. PMaj(pair): individual… …combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit(f), major injury…
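The snippet above lists component probabilities that the siting method combines. A multiplicative risk chain of this general shape can be illustrated with made-up numbers (the actual DDESB tables and formulas are not reproduced in this excerpt):

```python
# Hypothetical illustration of a multiplicative risk chain like the one
# risk-based explosives siting uses. All numbers below are invented; the
# real event, exposure, and hit probabilities come from the DoD tables.
p_event = 1e-4     # annual probability of an explosive event
p_exposed = 0.25   # probability a person is in the hazard area (cf. Pgha)
p_hit_f = 0.02     # probability of a hit causing fatality (cf. Phit(f))

p_fatality = p_event * p_exposed * p_hit_f
print(f"{p_fatality:.1e}")
```

Separate chains with Phit(maj) and Phit(min) would give the major- and minor-injury expectations in the same way.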

  3. Use of generalized population ratios to obtain Fe XV line intensities and linewidths at high electron densities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kastner, S.O.; Bhatia, A.K.

    A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.

  4. Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2005-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163

  5. Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination

    PubMed Central

    Sinkkonen, Aki

    2006-01-01

    A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596

  6. Simulation Of Wave Function And Probability Density Of Modified Poschl Teller Potential Derived Using Supersymmetric Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Angraini, Lily Maysari; Suparmi; Variani, Viska Inda

    2010-12-01

    SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced to one-dimensional systems and represented in lowering and raising operators. Lowering and raising operators can be obtained using the relationship between the original Hamiltonian equation and the (super)potential equation. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl-Teller potential. The wave function and probability density graphs are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically in integral form or from the probability density graph produced by the program.
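As a numerical cross-check of the kind of spectrum discussed here (not the paper's SUSY ladder-operator derivation), a finite-difference diagonalization of a Poschl-Teller-type well reproduces the known bound-state energies; the well depth and grid are illustrative assumptions.

```python
import numpy as np

# Finite-difference sketch for V(x) = -lam*(lam+1)/2 * sech(x)^2
# with hbar = m = 1; analytically the bound states are E_n = -(lam - n)^2 / 2.
lam = 2.0
n_pts, x_max = 1200, 10.0
x = np.linspace(-x_max, x_max, n_pts)
dx = x[1] - x[0]

V = -lam * (lam + 1) / 2.0 / np.cosh(x) ** 2

# Hamiltonian: central-difference kinetic term plus diagonal potential.
H = (np.diag(np.full(n_pts, 1.0 / dx**2) + V)
     + np.diag(np.full(n_pts - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(n_pts - 1, -0.5 / dx**2), -1))

energies, states = np.linalg.eigh(H)
psi0 = states[:, 0] / np.sqrt(dx)   # normalized ground-state wave function
density = psi0 ** 2                 # probability density
print(energies[:2])                 # ≈ [-2.0, -0.5]
```

The printed eigenvalues match the analytic levels for lam = 2, and the probability density integrates to one on the grid.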

  7. Information Density and Syntactic Repetition.

    PubMed

    Temperley, David; Gildea, Daniel

    2015-11-01

    In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
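Prediction (a) rests on N-gram lexical probabilities. A minimal sketch of how a bigram (2-gram) probability is estimated from counts follows; the three-sentence corpus is invented toy data, not the study's corpus.

```python
from collections import Counter
import math

# Toy corpus for illustrating bigram lexical probability.
corpus = [
    "the cat and the dog".split(),
    "the dog and the bird".split(),
    "a cat and a hat".split(),
]

unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    unigrams.update(sent)
    bigrams.update(zip(sent, sent[1:]))

def bigram_logprob(w1, w2):
    """log P(w2 | w1) with add-one smoothing over the observed vocabulary."""
    vocab = len(unigrams)
    return math.log((bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab))

# Lower (more negative) values mean lower lexical probability,
# i.e. higher information content in the uniform-information-density sense.
print(bigram_logprob("the", "cat"), bigram_logprob("the", "hat"))
```

An unseen continuation ("the hat") gets a lower smoothed probability than an attested one ("the cat"), which is the kind of contrast the corpus analysis measures at scale.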

  8. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.

    2011-01-01

    Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.

  9. Parasite transmission in social interacting hosts: Monogenean epidemics in guppies

    USGS Publications Warehouse

    Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne

    2011-01-01

    Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.

  10. Causal illusions in children when the outcome is frequent

    PubMed Central

    2017-01-01

    Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability. Thus, people tend to overestimate the strength of a causal relation when the potential consequence (i.e. the outcome) occurs with a high probability (outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies can be considered an outstanding approach to detect and further explore the psychological processes and mechanisms underlying this bias. However, the outcome density bias has been mainly explored in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. These two scenarios differed only in the probability of the outcome, which could either be high or low. Children judged the relation between the two events to be stronger in the high probability of the outcome setting, revealing that, like adults, they develop causal illusions when the outcome is frequent. PMID:28898294

  11. Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words

    PubMed Central

    Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.

    2012-01-01

    Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774

  12. Fractional Brownian motion with a reflecting wall

    NASA Astrophysics Data System (ADS)

    Wada, Alexander H. O.; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ⟨x²⟩ ∼ t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier, leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
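A minimal Monte Carlo sketch of the setup described here can be built from Cholesky-sampled fractional Gaussian noise with reflection at the origin. This is an assumed, simplified implementation (the paper does not specify its generator); the Hurst exponent relates to the exponent above via α = 2H.

```python
import numpy as np

rng = np.random.default_rng(42)

def fgn_cov(H, n):
    """Covariance matrix of unit-step fractional Gaussian noise."""
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def reflected_fbm(H, n_steps, n_walkers):
    """Walkers driven by correlated fGn increments, reflected at x = 0."""
    L = np.linalg.cholesky(fgn_cov(H, n_steps) + 1e-12 * np.eye(n_steps))
    noise = rng.standard_normal((n_walkers, n_steps)) @ L.T
    x = np.zeros(n_walkers)
    for step in range(n_steps):
        x = np.abs(x + noise[:, step])   # reflecting wall at the origin
    return x

# Superdiffusive case alpha = 2H = 1.5: histogram final positions to see
# the pile-up of probability density near the wall.
final = reflected_fbm(H=0.75, n_steps=200, n_walkers=2000)
```

For H = 0.5 the covariance reduces to the identity and the sketch degenerates to ordinary reflected Brownian motion, a useful sanity check.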

  13. Attenuated associations between increasing BMI and unfavorable lipid profiles in Chinese Buddhist vegetarians.

    PubMed

    Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun

    2013-01-01

    Obesity is related to hyperlipidemia and risk of cardiovascular disease. Health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivorous men. Interaction between BMI and vegetarian status was tested in multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressure, total cholesterol, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol to high-density lipoprotein ratio, triglycerides, apolipoproteins B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI but also attenuate the BMI-related increases of atherogenic lipid/lipoprotein levels and the probability of coronary heart disease.

  14. Can we estimate molluscan abundance and biomass on the continental shelf?

    NASA Astrophysics Data System (ADS)

    Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.

    2017-11-01

    Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. 
The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
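The bias and "survey availability event" behavior described above can be illustrated with a small Monte Carlo experiment: sample a patchy abundance field n times per survey and count how often the survey mean falls outside 75-125% of the true mean. The lognormal patchiness and all parameters are invented for illustration, not fitted to the surfclam or oyster data.

```python
import numpy as np

rng = np.random.default_rng(7)

# A patchy "true" abundance field: most patches sparse, a few very dense.
patches = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)
true_mean = patches.mean()

def availability_event_prob(n_samples, n_surveys=20_000):
    """Fraction of simulated surveys whose index is <75% or >125% of truth."""
    idx = rng.integers(0, patches.size, size=(n_surveys, n_samples))
    means = patches[idx].mean(axis=1)
    outside = (means < 0.75 * true_mean) | (means > 1.25 * true_mean)
    return outside.mean()

for n in (2, 8, 32):
    print(n, availability_event_prob(n))
```

As in the study, the event probability falls as sample number rises, and at tiny sample sizes the skewed field makes biased-low indices the typical outcome (the median simulated index sits below the true mean).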

  15. Self-Organization in 2D Traffic Flow Model with Jam-Avoiding Drive

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1995-04-01

    A stochastic cellular automaton (CA) model is presented to investigate traffic jams formed by self-organization in two-dimensional (2D) traffic flow. The CA model is an extended version of the 2D asymmetric exclusion model that takes into account jam-avoiding drive. Each site contains either a car moving up, a car moving right, or is empty. An up car can shift right with probability p_ja if it is blocked ahead by other cars. It is shown that three phases (the low-density phase, the intermediate-density phase and the high-density phase) appear in the traffic flow. The intermediate-density phase is characterized by the rightward shifting of up cars. The jamming transition to the high-density jamming phase occurs at a higher car density than without jam-avoiding drive. The jamming transition point p_2c increases with the shifting probability p_ja. In the deterministic limit of p_ja = 1, it is found that a new jamming transition occurs from the low-density synchronized-shifting phase to the high-density moving phase with increasing car density. In the synchronized-shifting phase, up cars do not move up but shift right in synchrony with the motion of right cars. We show that jam-avoiding drive has an important effect on the dynamical jamming transition.
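The jam-avoiding rule can be sketched as one sublattice update of a BML-style CA. This is a simplified illustration (only up cars move here, and the update order is not the paper's); grid size, density, and p_ja are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# 0 = empty, 1 = up-moving car, 2 = right-moving car, periodic boundaries.
def step(grid, p_ja):
    """One sweep over up cars: move up if free, else shift right w.p. p_ja."""
    n = grid.shape[0]
    new = grid.copy()
    for i in range(n):
        for j in range(n):
            if grid[i, j] != 1:
                continue
            up = ((i - 1) % n, j)
            right = (i, (j + 1) % n)
            if new[up] == 0 and grid[up] == 0:
                new[up], new[i, j] = 1, 0            # move up
            elif new[right] == 0 and grid[right] == 0 and rng.random() < p_ja:
                new[right], new[i, j] = 1, 0         # jam-avoiding right shift
    return new

grid = rng.choice([0, 1, 2], size=(20, 20), p=[0.7, 0.15, 0.15])
after = step(grid, p_ja=0.5)
```

The double check against both `grid` and `new` keeps the sweep exclusion-safe, so car numbers of each species are conserved, which is the minimal correctness requirement for this class of model.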

  16. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  17. Understanding environmental DNA detection probabilities: A case study using a stream-dwelling char Salvelinus fontinalis

    USGS Publications Warehouse

    Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.

    2016-01-01

    Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that, using the protocols in this study, eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
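The density-to-detection relationship described here can be sketched with a saturating per-sample detection curve combined across replicate samples. The functional form, rate constant c, and replicate count are assumptions for illustration, not the parameterization fitted in the study.

```python
import math

def edna_detection_prob(fish_per_km, n_replicates=3, c=0.2):
    """P(detect in at least one of n replicate water samples).

    Assumes a saturating per-sample detection probability
    p = 1 - exp(-c * density), with independent replicates.
    """
    p_single = 1.0 - math.exp(-c * fish_per_km)
    return 1.0 - (1.0 - p_single) ** n_replicates

for d in (1, 5, 30):
    print(d, round(edna_detection_prob(d), 3))
```

Detection probability rises steeply with density and with replicate number, mirroring the qualitative pattern (modest detection at one fish per kilometer, near-certain detection at population-level densities) reported in the abstract.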

  18. Influence of thermal agitation on the electric field induced precessional magnetization reversal with perpendicular easy axis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Hongguang, E-mail: chenghg7932@gmail.com; Deng, Ning

    2013-12-15

    We investigated the influence of thermal agitation on the electric field induced precessional magnetization switching probability with perpendicular easy axis by solving the Fokker-Planck equation numerically with a finite difference method. The calculated results show that thermal agitation during the reversal process crucially influences the switching probability. The achievable switching probability is determined only by the thermal stability factor Δ of the free layer and is independent of the device dimensions, which is important for high-density device applications. An ultra-low error rate down to the order of 10^-9 can be achieved for a device with a thermal stability factor Δ of 40. Low-damping-factor (α) materials should be used for the free layer in high-reliability device applications. These results exhibit the potential of electric field induced precessional magnetization switching with perpendicular easy axis for ultra-low power, high speed and high density magnetic random access memory (MRAM) applications.

  19. Fractional Brownian motion with a reflecting wall.

    PubMed

    Wada, Alexander H O; Vojta, Thomas

    2018-02-01

    Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior 〈x^{2}〉∼t^{α}, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α>1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α<1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.

  20. Novel density-based and hierarchical density-based clustering algorithms for uncertain data.

    PubMed

    Zhang, Xianchao; Liu, Han; Zhang, Xiaotong

    2017-09-01

    Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we first propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN from the following aspects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability, direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN to an improved version (PDBSCANi), by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulty clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. 
Experimental results demonstrate the superiority of our proposed algorithms over the existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
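The core quantity in this family of algorithms is P(dist(A, B) ≤ ε) for two uncertain objects. The sketch below estimates it by sampling, i.e. the FDBSCAN-style baseline that PDBSCAN replaces with an exact computation; the Gaussian uncertainty model, means, spreads, and ε are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_within_eps(mu_a, mu_b, sigma_a, sigma_b, eps, n_draws=100_000):
    """Monte Carlo estimate of P(dist(A, B) <= eps) for two uncertain
    objects with isotropic Gaussian positional uncertainty."""
    a = rng.normal(mu_a, sigma_a, size=(n_draws, len(mu_a)))
    b = rng.normal(mu_b, sigma_b, size=(n_draws, len(mu_b)))
    dists = np.linalg.norm(a - b, axis=1)
    return (dists <= eps).mean()

p = prob_within_eps([0.0, 0.0], [1.0, 0.0], 0.3, 0.3, eps=1.0)
print(p)
```

A density-based algorithm for uncertain data then thresholds this probability (rather than a crisp distance) when deciding whether one object lies in another's ε-neighborhood.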

  1. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  2. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  3. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  4. Density probability distribution functions of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2008-10-01

    In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of the average density at high |b| is twice as wide as that at low |b|. The width of the PDF of the DIG is about 30 per cent smaller than that of the warm HI at the same latitudes. The results reported here provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
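The lognormality test used in studies like this can be sketched on synthetic data: draw "average densities" from a lognormal and check that their logarithms look Gaussian via sample moments. The parameters are invented, not the pulsar-derived values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic lognormal sample standing in for sightline-averaged densities.
densities = rng.lognormal(mean=-1.0, sigma=0.8, size=50_000)

log_n = np.log(densities)
mu_hat, sigma_hat = log_n.mean(), log_n.std()

# For an exactly lognormal sample, the skewness of log(n) should vanish;
# a clearly nonzero skewness would argue against a lognormal density PDF.
skew = np.mean(((log_n - mu_hat) / sigma_hat) ** 3)
print(mu_hat, sigma_hat, skew)
```

On real sightline data the same recipe (log-transform, then compare moments or a fitted Gaussian to the histogram) is what supports the lognormal claim.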

  5. Does probability of occurrence relate to population dynamics?

    USGS Publications Warehouse

    Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H.; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M.; Edwards, Thomas C.; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E.; Zurell, Damaris; Schurr, Frank M.

    2014-01-01

    Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate (r) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K) and population density (N) relate to occurrence probability (Pocc). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, western USA, France and Switzerland). We used published information of shade tolerance as indicators of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with Pocc, while N, and for most regions K, was generally positively correlated with Pocc. 
Thus, in temperate forest trees the regions of highest occurrence probability are those with high densities but slow intrinsic population growth rates. The uncertain relationships between demography and occurrence probability suggest caution when linking species distribution and demographic models.

  6. Statistics of cosmic density profiles from perturbation theory

    NASA Astrophysics Data System (ADS)

    Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine

    2014-11-01

    The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular, the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope—the density difference between adjacent cells—and its fluctuations are also computed from the two-cell joint PDF and also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
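The Legendre-transform step mentioned here can be written in its generic large-deviation form (the paper's tree-order construction further specifies the rate function from the initial moments and collapse dynamics; the symbols below are generic, not the paper's exact notation):

```latex
% Cumulant generating function \varphi(\lambda) as the Legendre transform
% of a rate function \Psi(\rho):
\varphi(\lambda) = \lambda\,\rho^{*} - \Psi(\rho^{*}),
\qquad \text{with } \lambda = \left.\frac{\partial \Psi}{\partial \rho}\right|_{\rho=\rho^{*}} .
```

The one-cell PDF then follows from \varphi by an inverse Laplace transform, which is what is evaluated at finite variance for realistic power spectra.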

  7. Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?

    USGS Publications Warehouse

    Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.

    2005-01-01

    In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. 
Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
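
    The Ricker limit referred to above can be sketched as a simple stochastic simulation of quasi-extinction risk. This is a minimal illustration, not the authors' model or parameterization: the growth rate, noise level, threshold, and horizon below are all hypothetical values chosen for demonstration.

```python
import math
import random

def ricker_step(n, r, k, sigma, rng):
    # Stochastic Ricker update: N(t+1) = N(t) * exp(r*(1 - N(t)/K) + eps),
    # where eps ~ Normal(0, sigma) represents environmental noise.
    return n * math.exp(r * (1.0 - n / k) + rng.gauss(0.0, sigma))

def quasi_extinction_prob(n0=50.0, r=0.5, k=100.0, sigma=0.8,
                          threshold=10.0, horizon=50, reps=2000, seed=1):
    # Fraction of replicate trajectories that dip below the
    # quasi-extinction threshold within the time horizon.
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        n = n0
        for _ in range(horizon):
            n = ricker_step(n, r, k, sigma, rng)
            if n < threshold:
                hits += 1
                break
    return hits / reps

p_noisy = quasi_extinction_prob()                    # with environmental noise
p_deterministic = quasi_extinction_prob(sigma=0.0)   # noise-free limit
```

    With these illustrative parameters the noise-free trajectory never approaches the threshold, so any nonzero quasi-extinction probability here is driven entirely by the variability term, mirroring the abstract's point that processes inflating variability in abundance inflate extinction risk.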

  8. Neighbor-Dependent Ramachandran Probability Distributions of Amino Acids Developed from a Hierarchical Dirichlet Process Model

    PubMed Central

    Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.

    2010-01-01

    Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867

  9. Epidemics in interconnected small-world networks.

    PubMed

    Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong

    2015-01-01

    Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
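
    The setup described above can be sketched in plain Python. The code below is an illustrative single-network simplification, not the authors' interconnected two-network system: it builds a Watts-Strogatz-style small-world network and runs synchronous SIS dynamics with hypothetical rates.

```python
import random

def small_world(n, k, p, rng):
    # Watts-Strogatz-style construction: ring lattice with k//2
    # neighbours on each side, each edge rewired with probability p.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old, new = (i + j) % n, rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(old)
                    adj[old].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

def sis_density(adj, beta, mu, steps=200, seed=0):
    # Synchronous SIS dynamics: each infected node recovers with
    # probability mu per step and infects each susceptible neighbour
    # with probability beta per step; returns final infection density.
    rng = random.Random(seed)
    n = len(adj)
    infected = set(rng.sample(range(n), max(1, n // 20)))  # seed ~5%
    for _ in range(steps):
        nxt = set()
        for v in infected:
            if rng.random() >= mu:          # fails to recover
                nxt.add(v)
            for u in adj[v]:                # infection attempts
                if u not in infected and rng.random() < beta:
                    nxt.add(u)
        infected = nxt
    return len(infected) / n

rng = random.Random(42)
net = small_world(200, 4, 0.1, rng)
endemic = sis_density(net, beta=0.3, mu=0.2)   # supercritical regime
cleared = sis_density(net, beta=0.0, mu=1.0)   # no transmission
```

    Sweeping the rewiring probability `p` and the infection rate `beta` in such a sketch is one way to reproduce qualitatively the threshold and steady-state behaviour the abstract describes.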

  10. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  12. A spatially explicit model for an Allee effect: why wolves recolonize so slowly in Greater Yellowstone.

    PubMed

    Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A

    2006-11-01

    A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model, where dispersers have a reduced probability of finding mates at low densities, and parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability in finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow recolonization rate.
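
    A component Allee effect of the kind hypothesized here can be illustrated with a toy per-capita growth function. The functional form n/(n + theta) for the probability of finding a mate and all parameter values below are illustrative assumptions, not the authors' parameterization for wolves.

```python
def per_capita_growth(n, r=0.8, k=100.0, theta=20.0, m=0.1):
    # Logistic growth discounted by the probability n/(n + theta) that a
    # disperser finds a mate, minus background mortality m. The mate-
    # finding term makes per-capita growth negative at low densities,
    # creating an Allee threshold below which the population declines.
    return r * (n / (n + theta)) * (1.0 - n / k) - m

low = per_capita_growth(1.0)    # below the Allee threshold: declining
mid = per_capita_growth(50.0)   # intermediate density: growing
high = per_capita_growth(99.0)  # near carrying capacity: declining
```

    The sign pattern (negative, positive, negative) is the signature of a strong Allee effect: populations below the threshold shrink even though the same demographic rates yield growth at intermediate densities, which is consistent with the slowed recolonization the abstract reports.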

  13. Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.

    2017-03-01

    Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields an SNR birthrate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg and a 1σ dispersion of a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
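
    A log-normal fit of the kind quoted above can be sketched with a moment fit in log space. The synthetic sample below simply mimics the quoted numbers (peak 0.5 × 10^51 erg, dispersion factor 3) and is not the paper's data; the "peak" recovered here is the maximum of the distribution in log E, i.e. exp(mu).

```python
import math
import random

def fit_lognormal(samples):
    # Moment fit in log space: returns exp(mu), the peak of the
    # distribution of log E, and exp(sigma), the 1-sigma dispersion
    # factor of the fitted log-normal.
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / (n - 1))
    return math.exp(mu), math.exp(sigma)

# Synthetic "explosion energies" (units of 1e51 erg) drawn from a
# log-normal centred on 0.5 with a dispersion factor of 3.
rng = random.Random(3)
energies = [rng.lognormvariate(math.log(0.5), math.log(3.0))
            for _ in range(2000)]
peak, dispersion = fit_lognormal(energies)
```

    With a few thousand samples the fit recovers the generating parameters to within a few percent, which is the sense in which a 50-object sample can still pin down the most-probable energy and the dispersion factor.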

  14. Multi-species genetic connectivity in a terrestrial habitat network.

    PubMed

    Marrotte, Robby R; Bowman, Jeff; Brown, Michael G C; Cordes, Chad; Morris, Kimberley Y; Prentice, Melanie B; Wilson, Paul J

    2017-01-01

    Habitat fragmentation reduces genetic connectivity for multiple species, yet conservation efforts tend to rely heavily on single-species connectivity estimates to inform land-use planning. Such conservation activities may benefit from multi-species connectivity estimates, which provide a simple and practical means to mitigate the effects of habitat fragmentation for a larger number of species. To test the validity of a multi-species connectivity model, we used neutral microsatellite genetic datasets of Canada lynx (Lynx canadensis), American marten (Martes americana), fisher (Pekania pennanti), and southern flying squirrel (Glaucomys volans) to evaluate multi-species genetic connectivity across Ontario, Canada. We used linear models to compare node-based estimates of genetic connectivity for each species to point-based estimates of landscape connectivity (current density) derived from circuit theory. To our knowledge, we are the first to evaluate current density as a measure of genetic connectivity. Our results depended on landscape context: habitat amount was more important than current density in explaining multi-species genetic connectivity in the northern part of our study area, where habitat was abundant and fragmentation was low. In the south, however, where fragmentation was prevalent, genetic connectivity was correlated with current density. Contrary to our expectations, however, locations with a high probability of movement as reflected by high current density were negatively associated with gene flow. Subsequent analyses of circuit theory outputs showed that high current density was also associated with high effective resistance, underscoring that the presence of pinch points is not necessarily indicative of gene flow. Overall, our study appears to provide support for the hypothesis that landscape pattern is important when habitat amount is low. We also conclude that while current density is proportional to the probability of movement per unit area, this does not imply increased gene flow, since high current density tends to result from neighbouring pixels with a high cost of movement (e.g., low habitat amount). In other words, pinch points with high current density appear to constrict gene flow.

  15. On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^-1 functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and a standard deviation of about 0.2, turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
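
    The empirical-CDF approach, and the P^-1 period density, can both be sketched in a few lines. This is an illustration with synthetic data on a hypothetical period range [1, 1000] days, not the paper's samples: a P^-1 density is uniform in log P, so log-uniform sampling generates it, and the empirical CDF of such a sample has its median near the geometric mean of the range.

```python
import random

def sample_p_inverse(pmin, pmax, size, rng):
    # A density proportional to P^-1 on [pmin, pmax] is uniform in
    # log P, so exponentiate a uniform draw between the log endpoints.
    return [pmin * (pmax / pmin) ** rng.random() for _ in range(size)]

def empirical_cdf(samples):
    # Step-function CDF: F(x_(i)) = i/n at the sorted sample points.
    xs = sorted(samples)
    n = len(xs)
    return xs, [(i + 1) / n for i in range(n)]

rng = random.Random(0)
periods = sample_p_inverse(1.0, 1000.0, 5000, rng)
xs, cdf_vals = empirical_cdf(periods)
median_period = xs[len(xs) // 2]   # near sqrt(1 * 1000) ~ 31.6
```

    Comparing two such empirical CDFs (e.g., EPC versus SB periods) with a two-sample statistic is the standard way to quantify the indistinguishability the abstract reports.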

  16. High-Density Signal Interface Electromagnetic Radiation Prediction for Electromagnetic Compatibility Evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halligan, Matthew

    Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain where bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical calculation complexity of finding a radiated power probability density function.
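
    The two-state Markov chain used to model incident data signals can be sketched as follows. The transition probabilities here are hypothetical, and the closed-form stationary probability p01/(p01 + p10) is the standard result for a two-state chain, not a formula taken from this report.

```python
import random

def simulate_bits(p01, p10, length, seed=0):
    # Two-state Markov chain over bits: p01 = P(next=1 | current=0),
    # p10 = P(next=0 | current=1). Returns the simulated bit stream.
    rng = random.Random(seed)
    bit, bits = 0, []
    for _ in range(length):
        if bit == 0:
            bit = 1 if rng.random() < p01 else 0
        else:
            bit = 0 if rng.random() < p10 else 1
        bits.append(bit)
    return bits

def stationary_one_prob(p01, p10):
    # Long-run probability of the 1 state: p01 / (p01 + p10).
    return p01 / (p01 + p10)

bits = simulate_bits(0.2, 0.3, 20000)
empirical = sum(bits) / len(bits)
expected = stationary_one_prob(0.2, 0.3)
```

    Spectra of many such simulated sequences, windowed and superposed pulse by pulse, would give a Monte Carlo view of the signal-spectrum variability that the report bounds statistically.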

  17. Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target

    PubMed Central

    Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji

    2009-01-01

    In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN) in which the capability of each sensor is kept relatively limited so that large-scale WSNs can be constructed at reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. From these results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326

  18. Microdosimetric Analysis Confirms Similar Biological Effectiveness of External Exposure to Gamma-Rays and Internal Exposure to 137Cs, 134Cs, and 131I

    PubMed Central

    Sato, Tatsuhiko; Manabe, Kentaro; Hamada, Nobuyuki

    2014-01-01

    The risk of internal exposure to 137Cs, 134Cs, and 131I is of great public concern after the accident at the Fukushima-Daiichi nuclear power plant. The relative biological effectiveness (RBE, defined herein as effectiveness of internal exposure relative to the external exposure to γ-rays) is occasionally believed to be much greater than unity, owing to insufficient discussion of the differences in their microdosimetric profiles. We therefore performed a Monte Carlo particle transport simulation in ideally aligned cell systems to calculate the probability densities of absorbed doses in subcellular and intranuclear scales for internal exposures to electrons emitted from 137Cs, 134Cs, and 131I, as well as the external exposure to 662 keV photons. The RBE due to the inhomogeneous radioactive isotope (RI) distribution in subcellular structures and the high ionization density around the particle trajectories was then derived from the calculated microdosimetric probability density. The RBE for the bystander effect was also estimated from the probability density, considering its non-linear dose response. The RBE due to the high ionization density and that for the bystander effect were very close to 1, because the microdosimetric probability densities were nearly identical between the internal exposures and the external exposure from the 662 keV photons. On the other hand, the RBE due to the RI inhomogeneity largely depended on the intranuclear RI concentration and cell size, but the maximum possible RBE was only 1.04 even under conservative assumptions. Thus, it can be concluded from the microdosimetric viewpoint that the risk from internal exposures to 137Cs, 134Cs, and 131I should be nearly equivalent to that of external exposure to γ-rays at the same absorbed dose level, as suggested in the current recommendations of the International Commission on Radiological Protection. PMID:24919099

  19. Optimize out-of-core thermionic energy conversion for nuclear electric propulsion

    NASA Technical Reports Server (NTRS)

    Morris, J. F.

    1977-01-01

    Current designs for out-of-core thermionic energy conversion (TEC) to power nuclear electric propulsion (NEP) were evaluated. Approaches to improving out-of-core TEC are emphasized, and probabilities for success are indicated. TEC gains are available with higher emitter temperatures and greater power densities. Good potential exists for accommodating external high-temperature, high-power-density TEC with heat-pipe-cooled reactors.

  20. Automated side-chain model building and sequence assignment by template matching.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

    An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
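
    The Bayesian alignment step can be sketched with a toy posterior over offsets. The probability matrix and sequence below are hypothetical inventions for illustration, not RESOLVE's templates or scoring: each model position carries a per-amino-acid probability estimated from density, and the posterior over alignments (with a flat prior) is the normalized product of per-position probabilities for each possible offset.

```python
import math

def alignment_posterior(prob_matrix, sequence):
    # prob_matrix[i] maps amino-acid letters to the probability that
    # residue type occupies position i of the main-chain segment. The
    # posterior over offsets is computed in log space for stability,
    # then normalized.
    m, n = len(prob_matrix), len(sequence)
    log_scores = [
        sum(math.log(prob_matrix[i][sequence[off + i]]) for i in range(m))
        for off in range(n - m + 1)
    ]
    top = max(log_scores)
    weights = [math.exp(s - top) for s in log_scores]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical two-residue segment whose side-chain density favours "GV".
prob_matrix = [{"A": 0.1, "G": 0.8, "V": 0.1},
               {"A": 0.1, "G": 0.1, "V": 0.8}]
posterior = alignment_posterior(prob_matrix, "AAGVA")
```

    Keeping only offsets whose posterior exceeds a confidence cutoff mirrors the "high-confidence matches are kept" step described above.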

  1. Force Density Function Relationships in 2-D Granular Media

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.

    2004-01-01

    An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms

  2. Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation

    USGS Publications Warehouse

    Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.

    1998-01-01

    We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited high (>0.90) probability of philopatry to breeding colonies. Probability of natal philopatry was significantly higher for females than males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. Probability of natal philopatry of females increased with age but declined with year of banding. Age-related increase in natal philopatry was positively related to higher breeding probability of older females. Declines in natal philopatry with year of banding corresponded negatively to a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.

  3. Preparation and Characterization of ATO Nanoparticles by Coprecipitation with a Modified Drying Method

    NASA Astrophysics Data System (ADS)

    Liu, Shimin; Liang, Dongdong; Liu, Jindong; Jiang, Weiwei; Liu, Chaoqian; Ding, Wanyu; Wang, Hualin; Wang, Nan

    Antimony-doped tin oxide (ATO) nanoparticles were prepared by coprecipitation using a packing drying method and, for comparison, a traditional direct drying method. The as-prepared ATO nanoparticles were characterized by TG, XRD, EDS, TEM, HRTEM, BET, bulk density and electrical resistivity measurements. Results indicated that the ATO nanoparticles obtained by coprecipitation with the direct drying method featured agglomerate-rich morphology, high bulk density, low surface area and low electrical resistivity, probably due to the direct liquid evaporation during drying, the fast shrinkage of the precipitate, the poor removal efficiency of liquid molecules and the hard agglomerate formation after calcination. In contrast, the ATO product obtained by the packing drying method featured agglomerate-free morphology, low bulk density, high surface area and high electrical resistivity, probably ascribed to the formed vapor cyclone environment and resistance to liquid evaporation, which avoided fast liquid removal and improved the removal efficiency of liquid molecules. The intrinsic formation mechanism of ATO nanoparticles from the different drying methods was illustrated based on the dehydration process of ATO precipitates. Additionally, the packing and drying times played key roles in determining the bulk density, morphology and electrical conductivity of the ATO nanoparticles.

  4. Spectroscopy and atomic physics of highly ionized Cr, Fe, and Ni for tokamak plasmas

    NASA Technical Reports Server (NTRS)

    Feldman, U.; Doschek, G. A.; Cheng, C.-C.; Bhatia, A. K.

    1980-01-01

    The paper considers the spectroscopy and atomic physics for some highly ionized Cr, Fe, and Ni ions produced in tokamak plasmas. Forbidden and intersystem wavelengths for Cr and Ni ions are extrapolated and interpolated using the known wavelengths for Fe lines identified in solar-flare plasmas. Tables of transition probabilities for the B I, C I, N I, O I, and F I isoelectronic sequences are presented, and collision strengths and transition probabilities for Cr, Fe, and Ni ions of the Be I sequence are given. Similarities of tokamak and solar spectra are discussed, and it is shown how the atomic data presented may be used to determine ion abundances and electron densities in low-density plasmas.

  5. Hydrogen and Sulfur from Hydrogen Sulfide. 5. Anodic Oxidation of Sulfur on Activated Glassy Carbon

    DTIC Science & Technology

    1988-12-05

    electrolyses of H2S can probably be carried out at high rates with modest cell voltages in the range 1-1.5 V. The variation in anode current densities...of H2S from solutions of NaSH in aqueous NaOH was achieved using suitably activated glassy carbon anodes. Thus electrolyses of H2S can probably be...passivation by using a basic solvent at 85°C. Using an H2S-saturated 6M NaOH solution, they conducted electrolyses for extended periods at current densities

  6. Statistical tests for whether a given set of independent, identically distributed draws comes from a specified probability density.

    PubMed

    Tygert, Mark

    2010-09-21

    We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
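
    The classical statistic these tests build on can be computed directly. The sketch below is an illustration with synthetic data, not the paper's proposed tests: it evaluates the Kolmogorov-Smirnov statistic D = sup_x |F_n(x) - F(x)| against a specified CDF, and shows that it separates a matching sample from a mismatched one.

```python
import random

def ks_statistic(samples, cdf):
    # Kolmogorov-Smirnov statistic, evaluated just before and just
    # after each jump of the empirical CDF F_n (where the supremum of
    # |F_n(x) - F(x)| must occur for a continuous F).
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, abs((i + 1) / n - fx), abs(i / n - fx))
    return d

rng = random.Random(7)
uniform_draws = [rng.random() for _ in range(1000)]
squared_draws = [u * u for u in uniform_draws]   # true CDF sqrt(x), not x

d_match = ks_statistic(uniform_draws, lambda x: x)
d_mismatch = ks_statistic(squared_draws, lambda x: x)
```

    The paper's point is that such CDF-based statistics can miss discrepancies confined to regions of low density, which motivates the complementary tests it proposes.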

  7. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
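
    The logistic form named above can be sketched directly. The coefficients below are hypothetical placeholders, not the fitted shortleaf pine parameters: the specified regeneration density enters as one predictor alongside site variables, and the model returns the probability of obtaining at least that density.

```python
import math

def regen_probability(density, site_vars, beta):
    # Logistic model: P = 1 / (1 + exp(-(b0 + b1*density + b2*x2 + ...))).
    z = beta[0] + beta[1] * density + sum(
        b * x for b, x in zip(beta[2:], site_vars))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: intercept, effect of the specified
# regeneration density (trees/ha), and one site variable.
beta = [1.2, -0.0008, -0.05]
p_low = regen_probability(500.0, [10.0], beta)    # modest density target
p_high = regen_probability(3000.0, [10.0], beta)  # ambitious density target
```

    Setting the dependent variable to 1 when observed regeneration meets the specified density, as the abstract describes, is exactly the labeling used to fit such coefficients by maximum likelihood.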

  8. Stochastic transport models for mixing in variable-density turbulence

    NASA Astrophysics Data System (ADS)

    Bakosi, J.; Ristorcelli, J. R.

    2011-11-01

    In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.

  9. Series approximation to probability densities

    NASA Astrophysics Data System (ADS)

    Cohen, L.

    2018-04-01

    One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.

  10. Impact of stone density on outcomes in percutaneous nephrolithotomy (PCNL): an analysis of the clinical research office of the endourological society (CROES) pcnl global study database.

    PubMed

    Anastasiadis, Anastasios; Onal, Bulent; Modi, Pranjal; Turna, Burak; Duvdevani, Mordechai; Timoney, Anthony; Wolf, J Stuart; De La Rosette, Jean

    2013-12-01

    This study aimed to explore the relationship between stone density and outcomes of percutaneous nephrolithotomy (PCNL) using the Clinical Research Office of the Endourological Society (CROES) PCNL Global Study database. Patients undergoing PCNL treatment were assigned to a low stone density [LSD, ≤ 1000 Hounsfield units (HU)] or high stone density (HSD, > 1000 HU) group based on the radiological density of the primary renal stone. Preoperative characteristics and outcomes were compared in the two groups. Retreatment for residual stones was more frequent in the LSD group. The overall stone-free rate achieved was higher in the HSD group (79.3% vs 74.8%, p = 0.113). By univariate regression analysis, the probability of achieving a stone-free outcome peaked at approximately 1250 HU; densities below or above this value were associated with lower treatment success, particularly at very low HU values. With increasing radiological stone density, operating time decreased to a minimum at approximately 1000 HU, then increased with further increases in stone density. Multivariate non-linear regression analysis showed a similar relationship between the probability of a stone-free outcome and stone density. Higher treatment success rates were found with low stone burden, pelvic stone location, and use of pneumatic lithotripsy. Very low and very high stone densities are associated with lower rates of treatment success and longer operating times in PCNL. Preoperative assessment of stone density may help in the selection of treatment modality for patients with renal stones.

  11. On the probability distribution function of the mass surface density of molecular clouds. I

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-05-01

    The probability distribution function (PDF) of the mass surface density is an essential characteristic of the structure of molecular clouds or the interstellar medium in general. Observations of the PDF of molecular clouds indicate a composition of a broad distribution around the maximum and a decreasing tail at high mass surface densities. The first component is attributed to the random distribution of gas, modeled using a log-normal function, while the second component is attributed to condensed structures, modeled using a simple power law. The aim of this paper is to provide an analytical model of the PDF of condensed structures which can be used by observers to extract information about the condensations. The condensed structures are considered to be either spheres or cylinders with a radial density profile truncated at the cloud radius r_cl. The assumed profile is of the form ρ(r) = ρ_c/(1 + (r/r_0)²)^(n/2) for arbitrary power n, where ρ_c and r_0 are the central density and the inner radius, respectively. An implicit function is obtained which either truncates (sphere) or has a pole (cylinder) at the maximal mass surface density. The PDF of spherical condensations, and the asymptotic PDF of cylinders in the limit of infinite overdensity ρ_c/ρ(r_cl), flattens for steeper density profiles and has a power-law asymptote at low and high mass surface densities and a well-defined maximum. The power-law index of the asymptote Σ^(-γ) of the logarithmic PDF (Σ P(Σ)) in the limit of high mass surface densities is given by γ = (n + 1)/(n - 1) - 1 (spheres) or by γ = n/(n - 1) - 1 (cylinders in the limit of infinite overdensity). Appendices are available in electronic form at http://www.aanda.org
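
    For quick reference, the asymptote indices quoted above follow directly from the stated formulas; a trivial helper (ours, not the paper's code):

```python
def gamma_sphere(n):
    # high-Sigma asymptote index for spherical condensations: (n+1)/(n-1) - 1
    return (n + 1.0) / (n - 1.0) - 1.0

def gamma_cylinder(n):
    # cylinders in the limit of infinite overdensity: n/(n-1) - 1
    return n / (n - 1.0) - 1.0

print(gamma_sphere(2), gamma_cylinder(2))  # → 2.0 1.0
```

    For example, a profile with n = 2 gives a logarithmic-PDF tail of Σ^(-2) for spheres and Σ^(-1) for cylinders.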

  12. Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data.

    PubMed

    Nakamura, Yoshihiro; Hasegawa, Osamu

    2017-01-01

    With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data in advance is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as networks of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
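
    KDESOINN itself is not reproduced here; for orientation, the plain Gaussian KDE that it extends can be sketched in a few lines (the fixed bandwidth is chosen by hand purely for illustration):

```python
import numpy as np

def kde_gaussian(samples, x, bandwidth):
    """Plain Gaussian kernel density estimate evaluated at points x."""
    z = (x[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
samples = rng.standard_normal(2000)          # toy data: standard normal
x = np.linspace(-4.0, 4.0, 81)
density = kde_gaussian(samples, x, bandwidth=0.3)
```

    KDESOINN's contribution is to replace the raw samples with the SOINN's learned prototype network, so the estimate stays cheap and robust as data stream in.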

  13. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. Parameter uncertainty is fully and systematically accounted for. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to the prediction of Arctic sea-ice extent.

  14. Assessing environmental DNA detection in controlled lentic systems.

    PubMed

    Moyer, Gregory R; Díaz-Ferguson, Edgardo; Hill, Jeffrey E; Shea, Colin

    2014-01-01

    Little consideration has been given to environmental DNA (eDNA) sampling strategies for rare species. The certainty of species detection relies on understanding false positive and false negative error rates. We used artificial ponds together with logistic regression models to assess the detection of African jewelfish eDNA at varying fish densities (0, 0.32, 1.75, and 5.25 fish/m3). Our objectives were to determine the most effective water stratum for eDNA detection, estimate true and false positive eDNA detection rates, and assess the number of water samples necessary to minimize the risk of false negatives. There were 28 eDNA detections in 324 1-L water samples collected from four experimental ponds. The best-approximating model indicated that, per 1-L sample, eDNA detection was 4.86 times more likely for every 2.53 fish/m3 (1 SD) increase in fish density and 1.67 times less likely for every 1.02 °C (1 SD) increase in water temperature. The best section of the water column for detecting eDNA was the surface and, to a lesser extent, the bottom. Although no false positives were detected, the estimated likely number of false positives in samples from ponds that contained fish averaged 3.62. At high densities of African jewelfish, 3-5 L of water provided a >95% probability of detecting its eDNA. Conversely, at moderate and low densities, the number of water samples necessary to achieve a >95% probability of eDNA detection approximated 42-73 and >100 L, respectively. Potential biases associated with incomplete detection of eDNA could be alleviated via formal estimation of eDNA detection probabilities under an occupancy modeling framework; alternatively, the filtration of hundreds of liters of water may be required to achieve a high (e.g., 95%) level of certainty that African jewelfish eDNA will be detected at low densities (i.e., <0.32 fish/m3 or 1.75 g/m3).
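
    Under the simplifying assumption of independent samples with a constant per-sample detection probability, sample counts like those quoted above follow from a standard cumulative-probability calculation; a sketch (the probability values below are illustrative, not estimates from the study):

```python
import math

def samples_for_detection(p_per_sample, target=0.95):
    """Smallest number of independent water samples giving at least the
    target cumulative probability of >= 1 eDNA detection, assuming a
    constant per-sample detection probability p_per_sample."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_per_sample))

# e.g. a per-sample detection probability of ~0.5 (plausible at high fish
# density) needs only a handful of samples; ~0.04 (low density) needs dozens
print(samples_for_detection(0.5), samples_for_detection(0.04))  # → 5 74
```

    A per-sample probability in the range 0.04-0.07 reproduces sample counts of the same order as the 42-73 figure reported for moderate densities.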

  15. Predicting the cosmological constant with the scale-factor cutoff measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Simone, Andrea; Guth, Alan H.; Salem, Michael P.

    2008-09-15

    It is well known that anthropic selection from a landscape with a flat prior distribution of the cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.

  16. Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA

    USGS Publications Warehouse

    Yarra, Allyson N.; Magoulick, Daniel D.

    2018-01-01

    Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick-seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than in 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance are strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.

  17. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  18. Bragg-cell receiver study

    NASA Technical Reports Server (NTRS)

    Wilson, Lonnie A.

    1987-01-01

    Bragg-cell receivers are employed in specialized Electronic Warfare (EW) applications for the measurement of frequency. Their characteristics are fully characterized for simple RF emitter signals. This receiver is early in its development cycle compared with the IFM receiver. Functional mathematical models of the Bragg-cell receiver are derived and presented in this report. Theoretical analysis and digital computer signal processing results are presented for the Bragg-cell receiver. Probability density function analyses are performed for output frequency. The probability density function distributions are observed to depart from assumed distributions for wideband and complex RF signals. This analysis is significant for high-resolution, fine-grain EW Bragg-cell receiver systems.

  19. Estimating loblolly pine size-density trajectories across a range of planting densities

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2013-01-01

    Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...

  20. Sexual segregation in North American elk: the role of density dependence

    PubMed Central

    Stewart, Kelley M; Walsh, Danielle R; Kie, John G; Dick, Brian L; Bowyer, R Terry

    2015-01-01

    We investigated how density-dependent processes and subsequent variation in nutritional condition of individuals influenced both timing and duration of sexual segregation and selection of resources. During 1999–2001, we experimentally created two population densities of North American elk (Cervus elaphus), a high-density population at 20 elk/km2, and a low-density population at 4 elk/km2 to test hypotheses relative to timing and duration of sexual segregation and variation in selection of resources. We used multi-response permutation procedures to investigate patterns of sexual segregation, and resource selection functions to document differences in selection of resources by individuals in high- and low-density populations during sexual segregation and aggregation. The duration of sexual segregation was 2 months longer in the high-density population and likely was influenced by individuals in poorer nutritional condition, which corresponded with later conception and parturition, than at low density. Males and females in the high-density population overlapped in selection of resources to a greater extent than in the low-density population, probably resulting from density-dependent effects of increased intraspecific competition and lower availability of resources. PMID:25691992

  1. Timescales of isotropic and anisotropic cluster collapse

    NASA Astrophysics Data System (ADS)

    Bartelmann, M.; Ehlers, J.; Schneider, P.

    1993-12-01

    From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-de Sitter universe is derived.

  2. HIGH STAR FORMATION RATES IN TURBULENT ATOMIC-DOMINATED GAS IN THE INTERACTING GALAXIES IC 2163 AND NGC 2207

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elmegreen, Bruce G.; Kaufman, Michele; Bournaud, Frédéric

    CO observations of the interacting galaxies IC 2163 and NGC 2207 are combined with HI, Hα, and 24 μm observations to study the star formation rate (SFR) surface density as a function of the gas surface density. More than half of the high-SFR regions are HI dominated. When compared to other galaxies, these HI-dominated regions have excess SFRs relative to their molecular gas surface densities but normal SFRs relative to their total gas surface densities. The HI-dominated regions are mostly located in the outer part of NGC 2207, where the HI velocity dispersion is high, 40-50 km s⁻¹. We suggest that the star-forming clouds in these regions have envelopes at lower densities than normal, making them predominantly atomic, and cores at higher densities than normal because of the high turbulent Mach numbers. This is consistent with theoretical predictions of a flattening in the density probability distribution function for compressive, high-Mach-number turbulence.

  3. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
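
    A schematic of the fusion step described in the abstract, treating the two conditional densities as independent evidence over a discretized HU grid and taking the posterior mean (function and variable names are ours; the Gaussian evidence curves are purely illustrative):

```python
import numpy as np

def posterior_mean_hu(grid, p_intensity, p_location):
    """Fuse two conditional densities over a discretized HU grid and
    return the posterior-mean electron density estimate."""
    post = p_intensity * p_location                      # pointwise fusion
    post = post / (post.sum() * (grid[1] - grid[0]))     # normalize to a PDF
    return (grid * post).sum() * (grid[1] - grid[0])     # posterior mean

grid = np.linspace(-400.0, 700.0, 1101)   # HU grid, 1-HU spacing

def gauss(m, s):
    # unnormalized Gaussian evidence curve on the grid (illustrative)
    return np.exp(-0.5 * ((grid - m) / s) ** 2)

# intensity evidence peaks at 100 HU, atlas-location evidence at 200 HU
print(posterior_mean_hu(grid, gauss(100.0, 50.0), gauss(200.0, 50.0)))  # ≈ 150.0
```

    With two equal-width Gaussians the fused posterior mean lands midway between the two evidence peaks; unequal widths would pull the estimate toward the sharper source.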

  4. Traditional cattle vs. introduced deer management in Chaco Serrano woodlands (Argentina): Analysis of environmental sustainability at increasing densities.

    PubMed

    Charro, José Luis; López-Sánchez, Aida; Perea, Ramón

    2018-01-15

    Wild ungulate populations have increased and expanded considerably in many regions, including austral woodlands and forests where deer (Cervus elaphus) have been introduced as an alternative management to traditional cattle grazing. In this study, we compared traditional cattle with introduced deer management at increasing deer densities in the "Chaco Serrano" woodlands of Argentina to assess their ecological sustainability. We used three ecological indicators (abundance of tree regeneration, woody plant diversity, and browsing damage) as proxies for environmental sustainability in woody systems. Our results indicate that traditional cattle management, at stocking rates of ∼10 ind km⁻², was the most ecologically sustainable management since it allowed greater tree regeneration abundance, higher richness of woody species, and lower browsing damage. Importantly, cattle management and deer management at low densities (10 ind km⁻²) showed no significant differences in species richness and abundance of seedlings, although deer caused greater browsing damage on saplings and juveniles. However, management regimes involving high deer densities (∼35 deer km⁻²) were highly unsustainable in comparison to low (∼10 deer km⁻²) and medium (∼20 deer km⁻²) densities, with a 40% probability of unsustainable browsing as opposed to less than 5% probability at low and medium densities. In addition, high deer densities caused a strong reduction in tree regeneration, with a 19-30% reduction in the abundance of seedlings and young trees when compared to low deer densities. These results showed that the effect of increasing deer densities on woody plant conservation was not linear, with high deer densities causing a disproportionate deleterious effect on tree regeneration and sustainable browsing. Our results suggest that traditional management at low densities or the use of introduced ungulates (deer breeding areas) at low-medium densities (<20 deer km⁻²) is compatible with woody vegetation conservation. However, further research is needed on plant palatability, animal habitat use (spatial heterogeneity), and species turnover and extinction (comparison to areas of low-to-null historical browsing) to better estimate the environmental sustainability of Neotropical ungulate-dominated woodlands. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
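
    The L1-median used above can be computed with the classical Weiszfeld iteration; a compact sketch (ours, not the paper's estimator), with a toy example showing why it resists outliers where the sample mean does not:

```python
import numpy as np

def l1_median(points, n_iter=200, eps=1e-9):
    """Geometric (L1-)median of row-vector points via Weiszfeld iterations."""
    y = points.mean(axis=0)                         # start from the mean
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(points - y, axis=1), eps)
        w = 1.0 / d                                 # inverse-distance weights
        y_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            return y_new
        y = y_new
    return y

# four points on a unit square plus one gross outlier
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [100., 100.]])
```

    Here the mean is dragged out past (20, 20) by the outlier, while the L1-median stays near the unit square.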

  6. Spatial and temporal Brook Trout density dynamics: Implications for conservation, management, and monitoring

    USGS Publications Warehouse

    Wagner, Tyler; Jefferson T. Deweber,; Jason Detar,; Kristine, David; John A. Sweka,

    2014-01-01

    Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
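
    The three-pass removal density estimates referred to above are conventionally obtained from a constant-capture-probability depletion model; a sketch using a grid-profiled conditional multinomial likelihood (our illustration, not the authors' hierarchical Bayesian model):

```python
import numpy as np

def removal_estimate(catches):
    """Constant-p removal (depletion) estimator: grid-profile the
    likelihood of the per-pass catches over capture probability p,
    then back out abundance N = total / P(captured in any pass)."""
    c = np.asarray(catches, dtype=float)
    k, total = len(c), c.sum()
    passes = np.arange(k)                       # 0, 1, ..., k-1
    best_p, best_ll = None, -np.inf
    for p in np.linspace(0.01, 0.99, 981):
        q = 1.0 - p
        pi = p * q ** passes                    # P(first caught on pass i+1)
        ll = (c * np.log(pi)).sum() - total * np.log(pi.sum())
        if ll > best_ll:
            best_p, best_ll = p, ll
    n_hat = total / (1.0 - (1.0 - best_p) ** k)
    return best_p, n_hat

print(removal_estimate([60, 30, 15]))  # ≈ (0.5, 120.0): catches halve each pass
```

    The abstract's observation that first-pass CPUE correlates strongly with these estimates is what motivates the cheaper single-pass index.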

  7. Unsupervised Learning and Pattern Recognition of Biological Data Structures with Density Functional Theory and Machine Learning.

    PubMed

    Chen, Chien-Chang; Juan, Hung-Hui; Tsai, Meng-Yuan; Lu, Henry Horng-Shing

    2018-01-11

    By introducing methods of machine learning into density functional theory, we take an indirect route to constructing the most probable density function, which can be estimated by learning relevant features from the system of interest. Using the properties of the universal functional, the vital core of density functional theory, the most probable cluster numbers and the corresponding cluster boundaries in a system under study can be determined simultaneously and automatically, with the plausibility resting on the Hohenberg-Kohn theorems. For method validation and pragmatic applications, interdisciplinary problems from physical to biological systems are enumerated. The amalgamation of uncharged atomic clusters validated the unsupervised search for cluster numbers, and the corresponding cluster boundaries were exhibited likewise. Highly accurate clustering results on Fisher's iris dataset showed the feasibility and flexibility of the proposed scheme. Brain tumor detection from low-dimensional magnetic resonance imaging datasets and segmentation of high-dimensional neural network images in the Brainbow system were also used to inspect the practicality of the method. The experimental results exhibit a successful connection between the physical theory and machine learning methods and will benefit clinical diagnoses.

  8. The Influence of Part-Word Phonotactic Probability/Neighborhood Density on Word Learning by Preschool Children Varying in Expressive Vocabulary

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Hoover, Jill R.

    2011-01-01

    The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (ages 2;11-6;0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…

  9. Global Distribution of Density Irregularities in the Equatorial Ionosphere

    NASA Technical Reports Server (NTRS)

    Kil, Hyosub; Heelis, R. A.

    1998-01-01

    We analyzed measurements of ion number density made by the retarding potential analyzer aboard the Atmosphere Explorer-E (AE-E) satellite, which was in an approximately circular orbit at an altitude near 300 km in 1977 and later at an altitude near 400 km. Large-scale (greater than 60 km) density measurements in the high-altitude regions show large depletions with bubble-like structures which are confined to narrow local time, longitude, and magnetic latitude ranges, while those in the low-altitude regions show relatively small depletions which are broadly distributed in space. For this reason we considered the altitude regions below 300 km and above 350 km and investigated the global distribution of irregularities using the rms deviation δN/N over a path length of 18 km as an indicator of overall irregularity intensity. Seasonal variations of irregularity occurrence probability are significant in the Pacific regions, while the occurrence probability is always high in the Atlantic-African regions and always low in the Indian regions. We find that the high occurrence probability in the Pacific regions is associated with isolated bubble structures, while that near 0 deg longitude is produced by large depletions with bubble structures superimposed on a large-scale wave-like background. Considerations of longitude variations due to seeding mechanisms and due to F region winds and drifts are necessary to adequately explain the observations at low and high altitudes. Seeding effects are most obvious near 0 deg longitude, while the most easily observed effect of the F region is the suppression of irregularity growth by interhemispheric neutral winds.

  10. Inferring probabilistic stellar rotation periods using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh

    2018-02-01

    Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
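
    The quasi-periodic covariance kernel for such light curves is commonly written as a periodic term damped by a squared-exponential envelope; a sketch under that assumption (hyperparameter names are ours):

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp, l_evol, gamma, period):
    """k(tau) = amp^2 * exp(-tau^2/(2*l_evol^2) - gamma*sin^2(pi*tau/period)).
    The sin^2 term encodes rotation at the given period; the squared-
    exponential envelope lets the phase drift as spots evolve and decay."""
    tau = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-tau**2 / (2.0 * l_evol**2)
                           - gamma * np.sin(np.pi * tau / period) ** 2)

t = np.linspace(0.0, 30.0, 120)                       # days, say
K = quasi_periodic_kernel(t, t, amp=1.0, l_evol=20.0, gamma=1.0, period=7.0)
```

    Being a product of two positive semi-definite kernels, this covariance is itself positive semi-definite, so K can be used directly in a GP likelihood while MCMC samples the period and the other hyperparameters.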

  11. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula are effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just a few hundred sample functions generated by the proposed approach. Combined with the probability density evolution method (PDEM), the approach therefore enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies of the probability density evolution analysis of the wind-induced structure have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach is robust.
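
    The original spectral representation (OSR) that the paper starts from can be written down directly. Below is a minimal sketch of the classic OSR for a scalar stationary process; the toy one-sided power spectral density and the frequency cutoff are illustrative assumptions, and the paper's two-random-variable reduction via random functions is not reproduced here.

```python
import numpy as np

def spectral_representation(S, omega_max, N, t, rng):
    """Draw one sample path of a zero-mean stationary process from its one-sided PSD S(w)."""
    dw = omega_max / N
    w = (np.arange(N) + 0.5) * dw            # midpoint frequencies
    amp = np.sqrt(2.0 * S(w) * dw)           # standard OSR cosine amplitudes
    phi = rng.uniform(0.0, 2.0 * np.pi, N)   # i.i.d. uniform random phases
    return (amp[:, None] * np.cos(w[:, None] * t[None, :] + phi[:, None])).sum(axis=0)

# Toy PSD with unit integral: the time-averaged sample variance of a long
# realization should approach the integral of the PSD (here, about 1).
S = lambda w: np.exp(-w)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 100.0, 2000)
x = spectral_representation(S, omega_max=10.0, N=512, t=t, rng=rng)
var = x.var()
```

    In the OSR the N phases are independent random variables; the paper's contribution is to express them as random functions of two elementary random variables so that a small representative point set suffices for PDEM.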

  12. Distribution, density, and biomass of introduced small mammals in the southern mariana islands

    USGS Publications Warehouse

    Wiewel, A.S.; Adams, A.A.Y.; Rodda, G.H.

    2009-01-01

    Although it is generally accepted that introduced small mammals have detrimental effects on island ecology, our understanding of these effects is frequently limited by incomplete knowledge of small mammal distribution, density, and biomass. Such information is especially critical in the Mariana Islands, where small mammal density is inversely related to the effectiveness of Brown Tree Snake (Boiga irregularis) control tools, such as mouse-attractant traps. We used mark-recapture sampling to determine introduced small mammal distribution, density, and biomass in the major habitats of Guam, Rota, Saipan, and Tinian, including grassland, Leucaena forest, and native limestone forest. Of the five species captured, Rattus diardii (sensu Robins et al. 2007) was most common across habitats and islands. In contrast, Mus musculus was rarely captured at forested sites, Suncus murinus was not captured on Rota, and R. exulans and R. norvegicus captures were uncommon. Modeling indicated that neophobia, island, sex, reproductive status, and rain amount influenced R. diardii capture probability, whereas time, island, and capture heterogeneity influenced S. murinus and M. musculus capture probability. Density and biomass were much greater on Rota, Saipan, and Tinian than on Guam, most likely a result of Brown Tree Snake predation pressure on the latter island. Rattus diardii and M. musculus density and biomass were greatest in grassland, whereas S. murinus density and biomass were greatest in Leucaena forest. The high densities documented during this research suggest that introduced small mammals (especially R. diardii) are impacting the abundance and diversity of the native fauna and flora of the Mariana Islands. Further, Brown Tree Snake control and management tools that rely on mouse attractants will be less effective on Rota, Saipan, and Tinian than on Guam. If the Brown Tree Snake becomes established on these islands, high-density introduced small mammal populations will likely facilitate and support a high-density Brown Tree Snake population, even as native species are reduced or extirpated. © 2009 by University of Hawai'i Press. All rights reserved.

  13. A Continuous Method for Gene Flow

    PubMed Central

    Palczewski, Michal; Beerli, Peter

    2013-01-01

    Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937

  14. High Discharge Energy Density at Low Electric Field Using an Aligned Titanium Dioxide/Lead Zirconate Titanate Nanowire Array.

    PubMed

    Zhang, Dou; Liu, Weiwei; Guo, Ru; Zhou, Kechao; Luo, Hang

    2018-02-01

    Polymer-based capacitors with high energy density have attracted significant attention in recent years due to their wide range of potential applications in electronic devices. However, the obtained high energy density is predominantly dependent on a high applied electric field, e.g., 400-600 kV mm−1, which may bring more challenges relating to the failure probability. Here, a simple two-step method for synthesizing titanium dioxide/lead zirconate titanate nanowire arrays is exploited and a demonstration of their ability to achieve high discharge energy density capacitors for low operating voltage applications is provided. A high discharge energy density of 6.9 J cm−3 is achieved at low electric fields, i.e., 143 kV mm−1, which is attributed to the high relative permittivity of 218.9 at 1 kHz and high polarization of 23.35 µC cm−2 at this electric field. The discharge energy density obtained in this work is the highest known for a ceramic/polymer nanocomposite at such a low electric field. The novel nanowire arrays used in this work are applicable to a wide range of fields, such as energy harvesting, energy storage, and photocatalysis.

  15. Design of a High Intensity Turbulent Combustion System

    DTIC Science & Technology

    2015-05-01

    Figure 2.3: Velocity measurement on the nth repetition of a turbulent-flow experiment, u(t) = U + u'(t). The random variable U can be characterized by its probability density function (PDF), which gives the probability of an event such as P[U < N m s-1].

  16. Probability function of breaking-limited surface elevation. [wind generated waves of ocean]

    NASA Technical Reports Server (NTRS)

    Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.

    1989-01-01

    The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking zeta sub b(t) is first related to the original wave elevation zeta(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for zeta(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of zeta sub b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.

  17. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  18. Use of uninformative priors to initialize state estimation for dynamical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.

    2017-10-01

    The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
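
    The behaviour described above, that a uniform density becomes non-uniform after a transformation, is the standard change-of-variables rule for probability densities. A minimal sketch follows; the uniform-on-(0, 1) density and the map y = x² are illustrative assumptions, not the admissible-region parameterization used in the paper.

```python
import numpy as np

# A density uniform in x on (0, 1), pushed through y = x**2, picks up the
# Jacobian factor |dx/dy|: p_y(y) = p_x(sqrt(y)) * 1/(2*sqrt(y)), which is
# clearly no longer uniform in y.
def p_y(y):
    return 1.0 / (2.0 * np.sqrt(y))

# Monte Carlo check of the transformed density via a probability it implies:
# P(y < 0.25) = P(x < 0.5) = 0.5, matching the integral of p_y over (0, 0.25).
rng = np.random.default_rng(2)
y = rng.uniform(0.0, 1.0, 200_000) ** 2
frac = (y < 0.25).mean()
```

    The Principle of Transformation Groups argument in the paper asks instead for a prior whose density is invariant under such reparameterizations, which the uniform density above is not.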

  19. Population density shapes patterns of survival and reproduction in Eleutheria dichotoma (Hydrozoa: Anthoathecata).

    PubMed

    Dańko, Aleksandra; Schaible, Ralf; Pijanowska, Joanna; Dańko, Maciej J

    2018-01-01

    Budding hydromedusae have high reproductive rates due to asexual reproduction and can occur in high population densities along coasts, especially in tidal pools. In laboratory experiments, we investigated the effects of population density on the survival and reproductive strategies of a single clone of Eleutheria dichotoma. We found that sexual reproduction occurs at the highest rate at medium population densities. Increased sexual reproduction was associated with lower budding (asexual reproduction) and survival probability. Sexual reproduction results in the production of motile larvae that can, in contrast to medusae, escape unfavorable conditions by actively seeking better environments. The successful settlement of a larva initiates the polyp stage, which is probably more resistant to environmental conditions. This is the first study to examine the life-history strategies of the budding hydromedusa E. dichotoma in a long-term experiment with a relatively large sample size that allowed for the examination of age-specific mortality and reproductive rates. We found that most sexual and asexual reproduction occurred at the beginning of life following a very rapid process of maturation. The parametric models fitted to the mortality data showed that population density was associated with an increase in the rate of aging, an increase in the level of the late-life mortality plateau, and a decrease in the hidden heterogeneity in individual mortality rates. The effects of population density on life-history traits are discussed in the context of resource allocation and the concept of the r/K-strategies continuum.

  20. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
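
    The inversion step in this abstract follows the generic Tikhonov recipe for a discretized Fredholm integral of the first kind. Below is a minimal sketch on a toy deblurring problem; the Gaussian kernel, noise level, and regularization parameter are illustrative assumptions, not the paper's procedure for v sin i data.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-posed toy problem: a Gaussian smoothing kernel (a stand-in for the
# discretized Fredholm operator) blurs two spikes; with noise present, the
# naive inverse amplifies the noise wildly while the Tikhonov solution
# remains stable and still locates the dominant spike.
n = 100
s = np.linspace(0.0, 1.0, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.03**2))
A /= A.sum(axis=1, keepdims=True)
x_true = np.zeros(n)
x_true[30] = 1.0
x_true[70] = 0.5
rng = np.random.default_rng(3)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

x_reg = tikhonov_solve(A, b, lam=1e-6)
x_naive = np.linalg.solve(A, b)  # unregularized: noise is hugely amplified
```

    Choosing lam is the crux in practice; the paper proposes its own straightforward procedure for selecting the Tikhonov parameter.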

  1. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  2. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models; the generation of probability densities for input random variables is an essential step in simulation analysis and stochastic optimization. We adopt a constrained maximum-likelihood approach.

  3. Influence of land use and climate on Salmonella carrier status in the small Indian mongoose (Herpestes auropunctatus) in Grenada, West Indies.

    PubMed

    Miller, Steven; Zieger, Ulrike; Ganser, Claudia; Satterlee, S Andrew; Bankovich, Brittany; Amadi, Victor; Hariharan, Harry; Stone, Diana; Wisely, Samantha M

    2015-01-01

    Invasive mammals can be important reservoirs for human pathogens. A recent study showed that 12% of mongooses carried Salmonella spp. in their large intestines. We investigated whether anthropogenic, environmental and climatic variables predicted Salmonella status in mongooses (Herpestes auropunctatus) in Grenada. Using multivariate logistic regression and contingency table analysis, we found that increased human density, decreased distance from roads, and low monthly precipitation were associated with increased probability of Salmonella carriage. Areas with higher human density likely support a higher abundance of mongooses because of greater food availability. These areas also are a likely source for infection to mongooses due to high densities of livestock and rodents shedding Salmonella. The higher probability of Salmonella carriage in mongooses during drier months and closer to roadsides is likely due to water drainage patterns and limited water availability. Although the overall prevalence of Salmonella in mongooses was moderate, the strong patterns of ecologic correlates, combined with the high density of mongooses throughout Grenada suggest that the small Indian mongoose could be a useful sentinel for Salmonella surveillance. Its affinity for human-associated habitats suggests that the small Indian mongoose is also a risk factor in the maintenance and possible spread of Salmonella species to humans and livestock in Grenada.

  4. Effects of Shoreline Dynamics on Saltmarsh Vegetation

    PubMed Central

    Sharma, Shailesh; Goff, Joshua; Moody, Ryan M.; McDonald, Ashley; Byron, Dorothy; Heck, Kenneth L.; Powers, Sean P.; Ferraro, Carl; Cebrian, Just

    2016-01-01

    We evaluated the impact of shoreline dynamics on fringing vegetation density at mid- and low-marsh elevations at a high-energy site in the northern Gulf of Mexico. Particularly, we selected eight unprotected shoreline stretches (75 m each) at a historically eroding site and measured their inter-annual lateral movement rate using the DSAS method for three consecutive years. We observed high inter-annual variability of shoreline movement within the selected stretches. Specifically, shorelines retrograded (eroded) in year 1 and year 3, whereas, in year 2, shorelines advanced seaward. Despite shoreline advancement in year 2, an overall net erosion was recorded during the survey period. Additionally, vegetation density generally declined at both elevations during the survey period; however, probably due to their immediate proximity with lateral erosion agents (e.g., waves, currents), marsh grasses at low-elevation exhibited abrupt reduction in density, more so than grasses at mid elevation. Finally, contrary to our hypothesis, despite shoreline advancement, vegetation density did not increase correspondingly in year 2 probably due to a lag in response from biota. More studies in other coastal systems may advance our knowledge of marsh edge systems; however, we consider our results could be beneficial to resource managers in preparing protection plans for coastal wetlands against chronic stressors such as lateral erosion. PMID:27442515

  5. Effects of Shoreline Dynamics on Saltmarsh Vegetation.

    PubMed

    Sharma, Shailesh; Goff, Joshua; Moody, Ryan M; McDonald, Ashley; Byron, Dorothy; Heck, Kenneth L; Powers, Sean P; Ferraro, Carl; Cebrian, Just

    2016-01-01

    We evaluated the impact of shoreline dynamics on fringing vegetation density at mid- and low-marsh elevations at a high-energy site in the northern Gulf of Mexico. Particularly, we selected eight unprotected shoreline stretches (75 m each) at a historically eroding site and measured their inter-annual lateral movement rate using the DSAS method for three consecutive years. We observed high inter-annual variability of shoreline movement within the selected stretches. Specifically, shorelines retrograded (eroded) in year 1 and year 3, whereas, in year 2, shorelines advanced seaward. Despite shoreline advancement in year 2, an overall net erosion was recorded during the survey period. Additionally, vegetation density generally declined at both elevations during the survey period; however, probably due to their immediate proximity with lateral erosion agents (e.g., waves, currents), marsh grasses at low-elevation exhibited abrupt reduction in density, more so than grasses at mid elevation. Finally, contrary to our hypothesis, despite shoreline advancement, vegetation density did not increase correspondingly in year 2 probably due to a lag in response from biota. More studies in other coastal systems may advance our knowledge of marsh edge systems; however, we consider our results could be beneficial to resource managers in preparing protection plans for coastal wetlands against chronic stressors such as lateral erosion.

  6. Effects of a mixture of chloromethylisothiazolinone and methylisothiazolinone on peripheral airway dysfunction in children

    PubMed Central

    Cho, Hyun-Ju; Park, Dong-Uk; Yoon, Jisun; Lee, Eun; Yang, Song-I; Kim, Young-Ho; Lee, So-Yeon

    2017-01-01

    Background Children who were exposed only to a mixture of chloromethylisothiazolinone (CMIT) and methylisothiazolinone (MIT) as humidifier disinfectant (HD) components were evaluated for humidifier disinfectant-associated lung injury (HDLI) from 2012. This study evaluated pulmonary function, using impulse oscillometry (IOS), in children exposed to a mixture of CMIT/MIT from HD. Methods Twenty-four children who were exposed only to a mixture of CMIT/MIT, with no previous underlying disease, were assessed by IOS. Diagnostic criteria for HDLI were categorized as definite, probable, possible, or unlikely. Home visits and administration of a standardized questionnaire were arranged to assess exposure characteristics. Results Definite and probable cases showed higher airborne disinfectant exposure intensity during sleep (32.4 ± 8.7 μg/m3) and younger age at initial exposure (3.5 ± 3.3 months) compared with unlikely cases (17.3 ± 11.0 μg/m3, p = 0.026; 22.5 ± 26.2 months, p = 0.039, respectively). Reactance at 5 Hz was significantly more negative in those with high-density exposure during sleep (mean, -0.463 kPa/L/s vs. low density, -0.296, p = 0.001). The reactance area was also higher with high-density exposure during sleep (mean, 3.240 kPa/L vs. low density, 1.922, p = 0.039). The mean bronchodilator response with high-density exposure was within the normal range for reactance. Conclusions Significant peripheral airway dysfunction was found in children with high levels of inhalation exposure to a mixture of CMIT/MIT during sleep. Strict regulation of exposure to a mixture of CMIT/MIT was associated with positive effects on lung function in children. PMID:28453578

  7. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each model is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty, achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
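
    The per-model probabilities described above are commonly computed as Akaike weights in information-theoretic multimodel inference. A minimal sketch follows; the AIC scores for the three candidate distribution families are hypothetical, and the paper's Kullback-Leibler-based weighting may differ in detail.

```python
import math

def akaike_weights(aic_values):
    """Akaike weights: the relative likelihood that each candidate model is
    the best (Kullback-Leibler closest) model, normalized to sum to one."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC scores for three candidate probability models: the model
# with the lowest AIC gets the largest weight, but the others retain mass,
# which is exactly the retained model-form uncertainty the paper propagates.
w = akaike_weights([100.0, 102.0, 110.0])
```

    Rather than discarding the lower-weight models, the paper's method propagates all of them through a shared importance sampling density and reweights the samples per model.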

  8. Estimating large carnivore populations at global scale based on spatial predictions of density and distribution – Application to the jaguar (Panthera onca)

    PubMed Central

    Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard

    2018-01-01

    Broad scale population estimates of declining species are desired for conservation efforts. However, for many secretive species including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human foot print index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities (< 1 person/km2) coinciding with high primary productivity in the core area of jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129

  9. Nonstationary envelope process and first excursion probability.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    The definition of stationary random envelope proposed by Cramer and Leadbetter, is extended to the envelope of nonstationary random process possessing evolutionary power spectral densities. The density function, the joint density function, the moment function, and the crossing rate of a level of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to the earthquake engineering problems are demonstrated in detail.

  10. Environmental dependence of the galaxy stellar mass function in the Dark Energy Survey Science Verification Data

    DOE PAGES

    Etherington, J.; Thomas, D.; Maraston, C.; ...

    2016-01-04

    Measurements of the galaxy stellar mass function are crucial to understand the formation of galaxies in the Universe. In a hierarchical clustering paradigm it is plausible that there is a connection between the properties of galaxies and their environments. Evidence for environmental trends has been established in the local Universe. The Dark Energy Survey (DES) provides large photometric datasets that enable further investigation of the assembly of mass. In this study we use ~3.2 million galaxies from the (South Pole Telescope) SPT-East field in the DES science verification (SV) dataset. From grizY photometry we derive galaxy stellar masses and absolute magnitudes, and determine the errors on these properties using Monte-Carlo simulations with the full photometric redshift probability distributions. We compute galaxy environments using a fixed conical aperture for a range of scales. We construct galaxy environment probability distribution functions and investigate the dependence of the environment errors on the aperture parameters. We compute the environment components of the galaxy stellar mass function for the redshift range 0.15 < z < 1.05. For z < 0.75 we find that the fraction of massive galaxies is larger in high density environments than in low density environments. We show that the low density and high density components converge with increasing redshift up to z ~ 1.0, where the shapes of the mass function components are indistinguishable. As a result, our study shows how high density structures build up around massive galaxies through cosmic time.

  11. High Discharge Energy Density at Low Electric Field Using an Aligned Titanium Dioxide/Lead Zirconate Titanate Nanowire Array

    PubMed Central

    Zhang, Dou; Liu, Weiwei; Guo, Ru; Zhou, Kechao

    2017-01-01

    Abstract Polymer‐based capacitors with high energy density have attracted significant attention in recent years due to their wide range of potential applications in electronic devices. However, the obtained high energy density is predominantly dependent on high applied electric field, e.g., 400–600 kV mm−1, which may bring more challenges relating to the failure probability. Here, a simple two‐step method for synthesizing titanium dioxide/lead zirconate titanate nanowire arrays is exploited and a demonstration of their ability to achieve high discharge energy density capacitors for low operating voltage applications is provided. A high discharge energy density of 6.9 J cm−3 is achieved at low electric fields, i.e., 143 kV mm−1, which is attributed to the high relative permittivity of 218.9 at 1 kHz and high polarization of 23.35 µC cm−2 at this electric field. The discharge energy density obtained in this work is the highest known for a ceramic/polymer nanocomposite at such a low electric field. The novel nanowire arrays used in this work are applicable to a wide range of fields, such as energy harvesting, energy storage, and photocatalysis. PMID:29610724

  12. Gravity evidence for a shallow intrusion under Medicine Lake volcano, California.

    USGS Publications Warehouse

    Finn, C.; Williams, D.L.

    1982-01-01

    A positive gravity anomaly is associated with Medicine Lake volcano, California. Trials with different Bouguer reduction densities indicate that this positive anomaly cannot be explained by an inappropriate choice of Bouguer reduction density but must be caused by a subvolcanic body. After separating the Medicine Lake gravity high from the regional field, we were able to fit the 27 mGal positive residual anomaly with a large, shallow body of high density contrast (+0.41 g/cm³) and a thickness of 2.5 km. We interpret this body to be an intrusion of dense material emplaced within the several-kilometres-thick older volcanic layer that probably underlies Medicine Lake volcano. -Authors

  13. Postfragmentation density function for bacterial aggregates in laminar flow

    PubMed Central

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John

    2014-01-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205

  14. A plasmapause-like density boundary at high latitudes in Saturn's magnetosphere

    NASA Astrophysics Data System (ADS)

    Gurnett, D. A.; Persoon, A. M.; Kopf, A. J.; Kurth, W. S.; Morooka, M. W.; Wahlund, J.-E.; Khurana, K. K.; Dougherty, M. K.; Mitchell, D. G.; Krimigis, S. M.; Krupp, N.

    2010-08-01

    Here we report the discovery of a well-defined plasma density boundary at high latitudes in Saturn's magnetosphere. The boundary separates a region of relatively high density at L less than about 8 to 15 from a region with densities nearly three orders of magnitude lower at higher L values. Magnetic field measurements show that strong field-aligned currents, probably associated with the aurora, are located just inside the boundary. Analyses of the anisotropy of energetic electrons show that the magnetic field lines are usually closed inside the boundary and open outside the boundary, although exceptions sometimes occur. The location of the boundary is also modulated at the ˜10.6 to 10.8 hr rotational period of the planet. Many of these characteristics are similar to those predicted by Brice and Ioannidis for the plasmapause at a strongly magnetized, rapidly rotating planet such as Saturn.

  15. Two-dimensional electron density characterisation of arc interruption phenomenon in current-zero phase

    NASA Astrophysics Data System (ADS)

    Inada, Yuki; Kamiya, Tomoki; Matsuoka, Shigeyasu; Kumada, Akiko; Ikeda, Hisatoshi; Hidaka, Kunihiko

    2018-01-01

    Two-dimensional electron density imaging over free burning SF6 arcs and SF6 gas-blast arcs was conducted at current zero using highly sensitive Shack-Hartmann type laser wavefront sensors in order to experimentally characterise electron density distributions for the success and failure of arc interruption in the thermal reignition phase. The experimental results under an interruption probability of 50% showed that free burning SF6 arcs with axially asymmetric electron density profiles were interrupted with a success rate of 88%. On the other hand, the current interruption of SF6 gas-blast arcs was reproducibly achieved under locally reduced electron densities and the interruption success rate was 100%.

  16. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  17. Density distribution function of a self-gravitating isothermal compressible turbulent fluid in the context of molecular clouds ensembles

    NASA Astrophysics Data System (ADS)

    Donkov, Sava; Stefanov, Ivan Z.

    2018-03-01

    We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating isothermal compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the equations of the medium, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q, which is the cube root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to a free fall at small scales.

  18. Balanced design for the feasible super rocket fuels: A first-principle study on gauche CHN7 and CHN3.

    PubMed

    Yu, Tao; Lin, Maohua; Wu, Bo; Wang, Jintian; Tsai, Chi-Tay

    2018-05-16

    On the basis of the framework of cubic gauche nitrogen (cg-N), six one-eighth methanetriyl group (>CH-) substitutes and fifteen one-fourth >CH- substitutes were optimized using first-principle calculations based on density functional theory (DFT). Both one-eighth and one-fourth substitutes still keep the gauche structures, with the simple formulae CHN7 and CHN3, respectively. The most thermodynamically stable gauche CHN7 and CHN3 are P21 qtg-C2H2N14 I and P21 qtg-C4H4N12 III, respectively. No probability density of C-C single bonds and high probability densities of C-N-C structures were found in the two substitutes. Although gauche CHN7 and CHN3 lose energy density relative to cg-N, they gain kinetic stability and combustion temperature (Tc). Thus, they are more feasible than cg-N, and more effective than traditional rocket fuels. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Delineating high-density areas in spatial Poisson fields from strip-transect sampling using indicator geostatistics: application to unexploded ordnance removal.

    PubMed

    Saito, Hirotaka; McKenna, Sean A

    2007-07-01

    An approach for delineating high anomaly density areas within a mixture of two or more spatial Poisson fields based on limited sample data collected along strip transects was developed. All sampled anomalies were transformed to anomaly count data and indicator kriging was used to estimate the probability of exceeding a threshold value derived from the cdf of the background homogeneous Poisson field. The threshold value was determined so that the delineation of high-density areas was optimized. Additionally, a low-pass filter was applied to the transect data to enhance such segmentation. Example calculations were completed using a controlled military model site, in which accurate delineation of clusters of unexploded ordnance (UXO) was required for site cleanup.
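
    The thresholding step described above, deriving a count cutoff from the cdf of the background homogeneous Poisson field, can be sketched directly. This is a minimal illustration with a hypothetical background rate and exceedance level, not the authors' optimized threshold or the indicator-kriging step:

```python
import math

def poisson_cdf(k, lam):
    """Cumulative probability P(X <= k) for a Poisson(lam) count."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def count_threshold(lam, alpha=0.001):
    """Smallest count k whose exceedance probability under the background
    homogeneous Poisson field falls below alpha."""
    k = 0
    while poisson_cdf(k, lam) < 1.0 - alpha:
        k += 1
    return k

# Hypothetical background anomaly rate of 1 count per transect segment:
print(count_threshold(1.0))  # segments with counts above this are flagged
```

In the paper the threshold is then tuned so that the delineation of high-density areas is optimized; here it is simply read off the background cdf.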

  20. Challenges of DNA-based mark-recapture studies of American black bears

    USGS Publications Warehouse

    Settlage, K.E.; Van Manen, F.T.; Clark, J.D.; King, T.L.

    2008-01-01

    We explored whether genetic sampling would be feasible to provide a region-wide population estimate for American black bears (Ursus americanus) in the southern Appalachians, USA. Specifically, we determined whether adequate capture probabilities (p >0.20) and population estimates with a low coefficient of variation (CV <20%) could be achieved given typical agency budget and personnel constraints. We extracted DNA from hair collected from baited barbed-wire enclosures sampled over a 10-week period on 2 study areas: a high-density black bear population in a portion of Great Smoky Mountains National Park and a lower density population on National Forest lands in North Carolina, South Carolina, and Georgia. We identified individual bears by their unique genotypes obtained from 9 microsatellite loci. We sampled 129 and 60 different bears in the National Park and National Forest study areas, respectively, and applied closed mark–recapture models to estimate population abundance. Capture probabilities and precision of the population estimates were acceptable only for sampling scenarios for which we pooled weekly sampling periods. We detected capture heterogeneity biases, probably because of inadequate spatial coverage by the hair-trapping grid. The logistical challenges of establishing and checking a sufficiently high density of hair traps make DNA-based estimates of black bears impractical for the southern Appalachian region. Alternatives are to estimate population size for smaller areas, estimate population growth rates or survival using mark–recapture methods, or use independent marking and recapturing techniques to reduce capture heterogeneity.
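
    The abstract's target of adequate capture probability (p > 0.20) with a precise abundance estimate can be illustrated with the simplest closed-population estimator. This is a two-sample Chapman sketch with hypothetical counts, not the pooled multi-session closed models the authors actually fit:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen estimator of abundance:
    n1 bears marked in session 1, n2 captured in session 2,
    m2 of which were recaptures."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def capture_probability(m2, n1):
    """Crude per-session capture probability: fraction of marked bears
    detected again."""
    return m2 / n1

# Hypothetical hair-trap counts, for illustration only:
N_hat = chapman_estimate(n1=60, n2=50, m2=20)
p_hat = capture_probability(m2=20, n1=60)
print(round(N_hat), round(p_hat, 2))
```

With these toy numbers the estimated capture probability (~0.33) clears the p > 0.20 target; in the study, only pooled weekly sessions achieved that.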

  1. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Zhang; Chen, Wei

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
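
    The discretization step described here, turning a continuous asymmetric interfacial profile into thin constant-density slices, can be sketched with one simple member of the skew-symmetric family, the skew-normal density 2*phi(x)*Phi(alpha*x). The layer densities, shape parameter, and grid below are all hypothetical; the paper's generalized family and onion model are not reproduced:

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def skewnorm_pdf(x, alpha):
    """Skew-normal density 2*phi(x)*Phi(alpha*x): a simple skew-symmetric
    shape for an asymmetric interface."""
    return 2.0 * norm_pdf(x) * norm_cdf(alpha * x)

def interface_profile(z_edges, rho_top, rho_bottom, alpha):
    """Discretize a continuous interfacial profile into thin slices of
    constant density: slice densities follow the cumulative skew-normal."""
    cdf, acc = [0.0], 0.0
    for a, b in zip(z_edges[:-1], z_edges[1:]):
        # crude trapezoidal accumulation of the skew-normal cdf
        acc += 0.5 * (skewnorm_pdf(a, alpha) + skewnorm_pdf(b, alpha)) * (b - a)
        cdf.append(acc)
    return [rho_top + (rho_bottom - rho_top) * c for c in cdf]

edges = [i * 0.1 for i in range(-60, 61)]   # depth grid in units of interface width
profile = interface_profile(edges, rho_top=1.0, rho_bottom=2.5, alpha=3.0)
print(profile[0], profile[-1])  # ~1.0 at the top, ~2.5 at the bottom
```

Each slice's constant density would then feed a recursive reflectivity calculation such as Parratt's formula.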

  2. Generalized skew-symmetric interfacial probability distribution in reflectivity and small-angle scattering analysis

    DOE PAGES

    Jiang, Zhang; Chen, Wei

    2017-11-03

    Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of any arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent slices of very thin thickness with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.

  3. Application of Large-Scale Parentage Analysis for Investigating Natal Dispersal in Highly Vagile Vertebrates: A Case Study of American Black Bears (Ursus americanus)

    PubMed Central

    Moore, Jennifer A.; Draheim, Hope M.; Etter, Dwayne; Winterstein, Scott; Scribner, Kim T.

    2014-01-01

    Understanding the factors that affect dispersal is a fundamental question in ecology and conservation biology, particularly as populations are faced with increasing anthropogenic impacts. Here we collected georeferenced genetic samples (n = 2,540) from three generations of black bears (Ursus americanus) harvested in a large (47,739 km2), geographically isolated population and used parentage analysis to identify mother-offspring dyads (n = 337). We quantified the effects of sex, age, habitat type and suitability, and local harvest density at the natal and settlement sites on the probability of natal dispersal, and on dispersal distances. Dispersal was male-biased (76% of males dispersed) but a small proportion (21%) of females also dispersed, and female dispersal distances (mean ± SE  =  48.9±7.7 km) were comparable to male dispersal distances (59.0±3.2 km). Dispersal probabilities and dispersal distances were greatest for bears in areas with high habitat suitability and low harvest density. The inverse relationship between dispersal and harvest density in black bears suggests that 1) intensive harvest promotes restricted dispersal, or 2) high black bear population density decreases the propensity to disperse. Multigenerational genetic data collected over large landscape scales can be a powerful means of characterizing dispersal patterns and causal associations with demographic and landscape features in wild populations of elusive and wide-ranging species. PMID:24621593

  4. Probability and Quantum Paradigms: the Interplay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kracklauer, A. F.

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  5. Probability and Quantum Paradigms: the Interplay

    NASA Astrophysics Data System (ADS)

    Kracklauer, A. F.

    2007-12-01

    Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.

  6. Density of American black bears in New Mexico

    USGS Publications Warehouse

    Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.; Liley, Stewart

    2018-01-01

    Considering advances in noninvasive genetic sampling and spatially explicit capture–recapture (SECR) models, the New Mexico Department of Game and Fish sought to update their density estimates for American black bear (Ursus americanus) populations in New Mexico, USA, to aide in setting sustainable harvest limits. We estimated black bear density in the Sangre de Cristo, Sandia, and Sacramento Mountains, New Mexico, 2012–2014. We collected hair samples from black bears using hair traps and bear rubs and used a sex marker and a suite of microsatellite loci to individually genotype hair samples. We then estimated density in a SECR framework using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We sampled the populations using 554 hair traps and 117 bear rubs and collected 4,083 hair samples. We identified 725 (367 male, 358 female) individuals. Our density estimates varied from 16.5 bears/100 km2 (95% CI = 11.6–23.5) in the southern Sacramento Mountains to 25.7 bears/100 km2 (95% CI = 13.2–50.1) in the Sandia Mountains. Overall, detection probability at the activity center (g0) was low across all study areas and ranged from 0.00001 to 0.02. The low values of g0 were primarily a result of half of all hair samples for which genotypes were attempted failing to produce a complete genotype. We speculate that the low success we had genotyping hair samples was due to exceedingly high levels of ultraviolet (UV) radiation that degraded the DNA in the hair. Despite sampling difficulties, we were able to produce density estimates with levels of precision comparable to those estimated for black bears elsewhere in the United States.
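
    The spatial scale over which detection probability declines, mentioned above, is conventionally modeled in SECR with a half-normal detection function, g(d) = g0 * exp(-d^2 / (2 sigma^2)). A minimal sketch using a g0 value from the low end of the reported range and a hypothetical spatial scale (the authors' model additionally includes sex, elevation, land cover, and time covariates):

```python
import math

def halfnormal_detection(d, g0, sigma):
    """Half-normal SECR detection function: probability of detecting a bear
    at a trap a distance d from its activity centre."""
    return g0 * math.exp(-d**2 / (2.0 * sigma**2))

# g0 from the low end of the range reported in the abstract; sigma is a
# hypothetical spatial scale in metres.
g0, sigma = 0.02, 2000.0
probs = [halfnormal_detection(d, g0, sigma) for d in (0.0, 2000.0, 6000.0)]
print([f"{p:.5f}" for p in probs])  # declines with distance from the centre
```

With g0 this low, even bears centred on a trap are rarely detected, which is why the failed genotypes discussed above were so costly to precision.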

  7. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
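
    For the special case of two classes sharing a covariance matrix, the minimizing linear combination has a closed form (Fisher's direction, w = cov^-1 (mu1 - mu2)) and the one-dimensional misclassification probability after projection is Phi(-Delta/2), with Delta the Mahalanobis distance between the means. A sketch of that special case with made-up class parameters, not the program's general m-class numerical minimization:

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def misclassification_two_gaussians(mu1, mu2, cov):
    """Two equal-prior Gaussian classes with common covariance: the
    error-minimizing linear feature is w = cov^-1 (mu1 - mu2), and the
    resulting 1-D Bayes error is Phi(-Delta/2) with Delta the Mahalanobis
    distance between the class means."""
    w = np.linalg.solve(cov, mu1 - mu2)
    delta = sqrt(float((mu1 - mu2) @ w))
    return w, normal_cdf(-delta / 2.0)

# Hypothetical class parameters:
mu1 = np.array([0.0, 0.0])
mu2 = np.array([2.0, 1.0])
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
w, err = misclassification_two_gaussians(mu1, mu2, cov)
print(err)  # Bayes error of the optimal 1-D projection
```

For unequal covariances or more than two classes there is no such closed form, which is why the program minimizes the transformed one-dimensional error numerically.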

  8. Effects of clay turbidity and density of pikeperch (Sander lucioperca) larvae on predation by perch (Perca fluviatilis).

    PubMed

    Pekcan-Hekim, Zeynep; Lappalainen, Jyrki

    2006-07-01

    Increased turbidity reduces visibility in the water column, which can negatively affect vision-oriented fish and their ability to detect prey. Young fish could consequently benefit from high turbidity levels that provide protective cover, reducing predation pressure. Perch (Perca fluviatilis) are commonly found in littoral zones of temperate lakes and coastal areas of the Baltic Sea. Pikeperch (Sander lucioperca) spawn in these areas, so perch is a potential predator of pikeperch larvae. We conducted laboratory experiments to test the predation of perch on pikeperch larvae at different turbidity levels (5-85 nephelometric turbidity units), densities of pikeperch larvae (2-21 individuals l(-1)) and volumes of water (10-45 l). The logistic regression showed that the probability of larvae being eaten depended significantly on turbidity and volume of water in the bags, while density of larvae was not significant. However, because container size is known to affect predation, the data were divided into two groups based on water volume (10-20 and 25-45 l) to reduce the effects of container size. In either group, probability of predation did not significantly depend on volume, whereas turbidity was significant in both groups and density was significant in the larger water volumes. Thus, high turbidity impaired perch predation and protected pikeperch larvae from perch predation. Because density of larvae was also a significant factor affecting predation by perch, the dispersal of pikeperch larvae from spawning areas should also increase the survival of larvae.
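
    The logistic regression in this abstract models the probability that a larva is eaten as 1/(1 + exp(-(b0 + b1*turbidity))). A minimal sketch on synthetic data generated in the spirit of the experiment (negative turbidity effect over the 5-85 NTU range); the coefficients and data are invented, not the authors':

```python
import math
import random

def fit_logistic(x, y, lr=0.1, steps=5000):
    """Minimal logistic regression by full-batch gradient descent:
    models P(eaten) = 1 / (1 + exp(-(b0 + b1 * x)))."""
    b0, b1 = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += p - yi
            g1 += (p - yi) * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic data: predation becomes less likely as turbidity (NTU) rises.
rng = random.Random(1)
turbidity = [rng.uniform(5.0, 85.0) for _ in range(300)]
eaten = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(2.0 - 0.06 * t))) else 0
         for t in turbidity]

# Standardize the predictor so plain gradient descent converges quickly.
mean_t = sum(turbidity) / len(turbidity)
sd_t = (sum((t - mean_t) ** 2 for t in turbidity) / len(turbidity)) ** 0.5
z = [(t - mean_t) / sd_t for t in turbidity]

b0, b1 = fit_logistic(z, eaten)
print(b1 < 0.0)  # negative slope: higher turbidity, lower predation probability
```

The recovered negative slope mirrors the paper's finding that turbidity significantly reduced the probability of predation.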

  9. Analysis of high-resolution foreign exchange data of USD-JPY for 13 years

    NASA Astrophysics Data System (ADS)

    Mizuno, Takayuki; Kurihara, Shoko; Takayasu, Misako; Takayasu, Hideki

    2003-06-01

    We analyze high-resolution foreign exchange data consisting of 20 million data points of USD-JPY over 13 years to report firm statistical laws in the distributions and correlations of exchange rate fluctuations. A conditional probability density analysis clearly shows the existence of trend-following movements at a time scale of 8 ticks, about 1 min.

  10. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  11. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
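
    The classic MaxEnt calculation that this paper generalizes can be sketched on Jaynes's standard example: the maximum-entropy distribution over die faces 1..6 subject to a fixed mean. The solution is exponential-family, p_i proportional to exp(-lambda*i), with lambda fixed by the constraint; the generalized approach would instead propagate uncertainty in the mean through this map. A minimal sketch (the target mean 4.5 is Jaynes's illustrative value, treated as exact):

```python
import math

def maxent_die(mean_target, n_faces=6, tol=1e-12):
    """Classic MaxEnt: maximum-entropy distribution over die faces 1..n
    subject to a fixed mean; p_i is proportional to exp(-lam * i), and
    lam is found by bisection (the mean is decreasing in lam)."""
    faces = range(1, n_faces + 1)

    def mean_for(lam):
        weights = [math.exp(-lam * i) for i in faces]
        z = sum(weights)
        return sum(i * w for i, w in zip(faces, weights)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_target:
            lo = mid   # mean too large: need a larger lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    weights = [math.exp(-lam * i) for i in faces]
    z = sum(weights)
    return [w / z for w in weights]

p = maxent_die(4.5)  # Jaynes's loaded die with observed mean 4.5
print([round(x, 4) for x in p])
```

Treating 4.5 as a Gaussian-distributed estimate rather than an exact value, and pushing that density through `maxent_die`, is precisely the generalization the paper develops.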

  12. Mechanisms of graviperception and response in pea seedlings

    NASA Technical Reports Server (NTRS)

    Galston, A. W.

    1984-01-01

    A new method for the mass isolation and purification of multigranular amyloplasts from the bundle sheath parenchyma of etiolated pea epicotyls was presented. These bodies, which displace within 2-3 minutes of exposure to 1 x g, are probably the gravity receptors (statoliths) in this plant. These amyloplasts were characterized as having a double membrane with a surface-localized ATPase, a high calcium content, and their own genomic DNA. These amyloplasts are investigated as to (a) the reasons for their especially high density, probably related to their starch content, (b) the possible identity of their DNA with the DNA of chloroplasts and unigranular amyloplasts, and (c) the possible importance of their high calcium content.

  13. High population density of black-handed spider monkeys (Ateles geoffroyi) in Costa Rican lowland wet forest.

    PubMed

    Weghorst, Jennifer A

    2007-04-01

    The main objective of this study was to estimate the population density and demographic structure of spider monkeys living in wet forest in the vicinity of Sirena Biological Station, Corcovado National Park, Costa Rica. Results of a 14-month line-transect survey showed that spider monkeys of Sirena have one of the highest population densities ever recorded for this genus. Density estimates varied, however, depending on the method chosen to estimate transect width. Data from behavioral monitoring were available to compare density estimates derived from the survey, providing a check of the survey's accuracy. A combination of factors has most probably contributed to the high density of Ateles, including habitat protection within a national park and high diversity of trees of the fig family, Moraceae. Although natural densities of spider monkeys at Sirena are substantially higher than those recorded at most other sites and in previous studies at this site, mean subgroup size and age ratios were similar to those determined in previous studies. Sex ratios were similar to those of other sites with high productivity. Although high densities of preferred fruit trees in the wet, productive forests of Sirena may support a dense population of spider monkeys, other demographic traits recorded at Sirena fall well within the range of values recorded elsewhere for the species.

  14. An experimental study of the surface elevation probability distribution and statistics of wind-generated waves

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.

    1980-01-01

    Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data along with some limited field data were compared. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.
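
    The Longuet-Higgins (1963) distribution referenced here is, at leading order, a Gram-Charlier correction to the Gaussian: the normal density multiplied by 1 + (skewness/6) * H3(x), where H3(x) = x^3 - 3x is the third Hermite polynomial. A sketch of that leading-order form (the skewness value below is illustrative, not from the experiments):

```python
import math

def longuet_higgins_density(x, skewness):
    """Leading-order Gram-Charlier form of the Longuet-Higgins (1963)
    surface-elevation density: a standard Gaussian corrected by the third
    Hermite polynomial, H3(x) = x^3 - 3x."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return phi * (1.0 + (skewness / 6.0) * (x**3 - 3.0 * x))

# The Hermite correction integrates to zero, so the density keeps unit area:
xs = [i * 0.01 for i in range(-600, 601)]
area = sum(longuet_higgins_density(x, 0.3) for x in xs) * 0.01
print(round(area, 4))  # close to 1
```

The correction skews probability toward positive elevations (sharper crests, flatter troughs), which is the non-Gaussian feature the laboratory comparison tests.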

  15. High density bit transition requirements versus the effects on BCH error correcting code. [bit synchronization

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Schoggen, W. O.

    1982-01-01

    The design to achieve the required bit transition density for the Space Shuttle high rate multiplexer (HRM) data stream of the Space Laboratory Vehicle is reviewed. It contained a recommended circuit approach, specified the pseudorandom (PN) sequence to be used, and detailed the properties of the sequence. Calculations showing the probability of failing to meet the required transition density were included. A computer simulation of the data stream and PN cover sequence was provided. All worst case situations were simulated and the bit transition density exceeded that required. The Preliminary Design Review and the Critical Design Review are documented. The Cover Sequence Generator (CSG) Encoder/Decoder design was constructed and demonstrated. The demonstrations were successful. All HRM and HRDM units incorporate the CSG encoder or CSG decoder as appropriate.

  16. Disordered cellular automaton traffic flow model: phase separated state, density waves and self organized criticality

    NASA Astrophysics Data System (ADS)

    Fourrate, K.; Loulidi, M.

    2006-01-01

    We suggest a disordered traffic flow model that captures many features of traffic flow. It is an extension of the Nagel-Schreckenberg (NaSch) stochastic cellular automaton model for single-lane vehicular traffic. It incorporates random acceleration and deceleration terms that may be greater than one unit. Under its intrinsic dynamics, for high values of the braking probability p_r, our model leads to a constant flow at intermediate densities without introducing any spatial inhomogeneities. For a system of fast drivers, p_r -> 0, the model exhibits the density wave behavior that was observed in car-following models with optimal velocity. For high values of p_r and random deceleration, the gap of the disordered model we present exhibits, at a critical density, a power-law distribution, which is a hallmark of self-organized criticality.
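
    The baseline NaSch update that the disordered model extends is a four-step parallel rule: accelerate by one unit, slow down to the gap ahead, brake at random with probability p_r, then move. A minimal sketch of that baseline on a ring (parameters illustrative; the authors' extension with multi-unit random acceleration and deceleration is not reproduced):

```python
import random

def nasch_step(pos, vel, vmax, p_brake, length, rng):
    """One parallel update of the Nagel-Schreckenberg cellular automaton on
    a ring road: acceleration, gap constraint, random braking, movement."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])  # ring order of the cars
    new_pos, new_vel = list(pos), list(vel)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % length    # empty cells ahead
        v = min(vel[i] + 1, vmax)                   # 1. accelerate
        v = min(v, gap)                             # 2. slow down to the gap
        if v > 0 and rng.random() < p_brake:
            v -= 1                                  # 3. random braking
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % length          # 4. move
    return new_pos, new_vel

rng = random.Random(0)
length, n_cars = 100, 20
pos = sorted(rng.sample(range(length), n_cars))
vel = [0] * n_cars
for _ in range(200):
    pos, vel = nasch_step(pos, vel, vmax=5, p_brake=0.3, length=length, rng=rng)
print(sum(vel) / n_cars)  # mean speed; flow = density * mean speed
```

Because gaps are computed from the old positions and each car moves at most its gap, the parallel update conserves car count and never produces collisions.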

  17. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
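
    The Legendre expansion used here exploits the axisymmetry: the probability density depends only on cos(theta), so it can be written as a series in Legendre polynomials. A minimal numerical sketch of such an expansion with NumPy, using a hypothetical axisymmetric density (not the paper's analytical solution of the evolution equation):

```python
import numpy as np
from numpy.polynomial import legendre

# Expand a sample axisymmetric density over x = cos(theta) in Legendre
# polynomials and check that a low-order truncation reconstructs it.
x = np.linspace(-1.0, 1.0, 201)
rho = np.exp(2.0 * x)                     # hypothetical unnormalized density
rho /= rho.sum() * (x[1] - x[0])          # crude normalization over [-1, 1]

coeffs = legendre.legfit(x, rho, deg=10)  # least-squares Legendre coefficients
rho_rec = legendre.legval(x, coeffs)      # truncated series reconstruction
print(np.max(np.abs(rho - rho_rec)))      # small truncation error
```

Because smooth axisymmetric densities have rapidly decaying Legendre coefficients, a short truncation suffices, which is what makes the analytical integration and the compact switching-probability expression tractable.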

  18. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a posterior Gaussian probability distribution. It is now well established that most of the physical properties of the earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, parameterized by the mean, variance and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.

  19. Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Errington, Jeffrey R.

    2003-06-01

    An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation attempted transitions between states along the Markov chain are monitored as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensemble. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.
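
    The final step described above, reading coexisting densities off the density probability distribution, can be sketched with a synthetic bimodal stand-in for simulation output. The two coexisting phases appear as the two local maxima; the real workflow additionally reweights the distribution to equal peak weights before reading off densities, which this toy omits:

```python
import numpy as np

# Synthetic density probability distribution spanning vapor and liquid
# states (two hypothetical Gaussian peaks standing in for simulation data).
rho = np.linspace(0.0, 1.0, 1001)
p = 0.6 * np.exp(-(rho - 0.08) ** 2 / (2 * 0.02**2)) \
  + 0.4 * np.exp(-(rho - 0.70) ** 2 / (2 * 0.05**2))
p /= p.sum()

# Coexisting densities = interior local maxima of the distribution.
interior = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])
peaks = rho[1:-1][interior]
print(peaks)  # vapor and liquid coexisting densities
```

For a Lennard-Jones state point the two peaks would sit at the vapor and liquid coexistence densities, with the vapor pressure obtained from the same distribution.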

  20. Postfragmentation density function for bacterial aggregates in laminar flow.

    PubMed

    Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M

    2011-04-01

    The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society

  1. The Influence of Phonotactic Probability and Neighborhood Density on Children's Production of Newly Learned Words

    ERIC Educational Resources Information Center

    Heisler, Lori; Goffman, Lisa

    2016-01-01

    A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…

  2. Incompressible variable-density turbulence in an external acceleration field

    DOE PAGES

    Gat, Ilana; Matheou, Georgios; Chung, Daniel; ...

    2017-08-24

    Dynamics and mixing of a variable-density turbulent flow subject to an externally imposed acceleration field in the zero-Mach-number limit are studied in a series of direct numerical simulations. The flow configuration studied consists of alternating slabs of high- and low-density fluid in a triply periodic domain. Density ratios in the range 1.05 ≤ R ≡ ρ₁/ρ₂ ≤ 10 are investigated. The flow produces temporally evolving shear layers. A perpendicular density–pressure gradient is maintained in the mean as the flow evolves, with multi-scale baroclinic torques generated in the turbulent flow that ensues. For all density ratios studied, the simulations attain Reynolds numbers at the beginning of the fully developed turbulence regime. An empirical relation for the convection velocity predicts the observed entrainment-ratio and dominant mixed-fluid composition statistics. Two mixing-layer temporal evolution regimes are identified: an initial diffusion-dominated regime with a growth rate ~t^(1/2), followed by a turbulence-dominated regime with a growth rate ~t^3. In the turbulent regime, composition probability density functions within the shear layers exhibit a slightly tilted ('non-marching') hump, corresponding to the most probable mole fraction. In conclusion, the shear layers preferentially entrain low-density fluid by volume at all density ratios, which is reflected in the mixed-fluid composition.

  3. Incompressible variable-density turbulence in an external acceleration field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gat, Ilana; Matheou, Georgios; Chung, Daniel

    Dynamics and mixing of a variable-density turbulent flow subject to an externally imposed acceleration field in the zero-Mach-number limit are studied in a series of direct numerical simulations. The flow configuration studied consists of alternating slabs of high- and low-density fluid in a triply periodic domain. Density ratios in the range 1.05 ≤ R ≡ ρ₁/ρ₂ ≤ 10 are investigated. The flow produces temporally evolving shear layers. A perpendicular density–pressure gradient is maintained in the mean as the flow evolves, with multi-scale baroclinic torques generated in the turbulent flow that ensues. For all density ratios studied, the simulations attain Reynolds numbers at the beginning of the fully developed turbulence regime. An empirical relation for the convection velocity predicts the observed entrainment-ratio and dominant mixed-fluid composition statistics. Two mixing-layer temporal evolution regimes are identified: an initial diffusion-dominated regime with a growth rate ~t^(1/2), followed by a turbulence-dominated regime with a growth rate ~t^3. In the turbulent regime, composition probability density functions within the shear layers exhibit a slightly tilted ('non-marching') hump, corresponding to the most probable mole fraction. In conclusion, the shear layers preferentially entrain low-density fluid by volume at all density ratios, which is reflected in the mixed-fluid composition.

  4. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  5. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  6. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
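    The training-then-monitoring loop these patent records describe can be sketched with a Gaussian fit to normal-operation residuals followed by Wald's sequential probability ratio test. The residual streams, thresholds, and the 2-sigma fault hypothesis below are illustrative assumptions, not the patented procedure:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical residuals recorded during "normal" asset operation; fit a Gaussian.
train = [random.gauss(0.0, 1.0) for _ in range(2000)]
mu, sigma = statistics.fmean(train), statistics.stdev(train)

def loglik(x, mean):
    # Gaussian log-likelihood under the fitted standard deviation
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mean) ** 2 / (2 * sigma ** 2)

def sprt(stream, shift=2.0, a=math.log(99.0), b=math.log(1.0 / 99.0)):
    """Wald's sequential probability ratio test:
    H0: mean = mu (normal)  vs  H1: mean = mu + shift*sigma (fault)."""
    llr = 0.0
    for i, x in enumerate(stream):
        llr += loglik(x, mu + shift * sigma) - loglik(x, mu)
        if llr >= a:
            return "fault", i
        if llr <= b:
            return "normal", i
    return "undecided", len(stream)

print(sprt(random.gauss(2.5, 1.0) for _ in range(100)))  # a drifted residual stream
```

    The thresholds a and b bound the false-alarm and missed-alarm probabilities (here nominally 1%), which is why the patents pair the fitted probability density function with a dynamic hypothesis test rather than a fixed alarm limit.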

  7. Intraspecific variation and species coexistence.

    PubMed

    Lichstein, Jeremy W; Dushoff, Jonathan; Levin, Simon A; Pacala, Stephen W

    2007-12-01

    We use a two-species model of plant competition to explore the effect of intraspecific variation on community dynamics. The competitive ability ("performance") of each individual is assigned by an independent random draw from a species-specific probability distribution. If the density of individuals competing for open space is high (e.g., because fecundity is high), species with high maximum (or large variance in) performance are favored, while if density is low, species with high typical (e.g., mean) performance are favored. If there is an interspecific mean-variance performance trade-off, stable coexistence can occur across a limited range of intermediate densities, but the stabilizing effect of this trade-off appears to be weak. In the absence of this trade-off, one species is superior. In this case, intraspecific variation can blur interspecific differences (i.e., shift the dynamics toward what would be expected in the neutral case), but the strength of this effect diminishes as competitor density increases. If density is sufficiently high, the inferior species is driven to extinction just as rapidly as in the case where there is no overlap in performance between species. Intraspecific variation can facilitate coexistence, but this may be relatively unimportant in maintaining diversity in most real communities.
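    The density dependence described above can be illustrated with a toy lottery: each open site goes to the single best draw among k competitors per species, so larger k (higher density) favors the species with the higher maximum, i.e. the higher variance. The two performance distributions are invented for illustration:

```python
import random

random.seed(2)

# Hypothetical mean-variance trade-off: species A has the higher mean
# performance, species B the higher variance; all numbers are made up.
draw_a = lambda: random.gauss(0.5, 0.5)
draw_b = lambda: random.gauss(0.0, 2.0)

def contest(k, trials=2000):
    """Fraction of open sites won by the high-variance species B
    when each species fields k competitors per site."""
    wins_b = 0
    for _ in range(trials):
        if max(draw_b() for _ in range(k)) > max(draw_a() for _ in range(k)):
            wins_b += 1
    return wins_b / trials

low = contest(1)     # low density: one competitor per species per site
high = contest(50)   # high density: many competitors per site
```

    At k = 1 the higher-mean species wins most contests, while at k = 50 the high-variance species dominates, mirroring the paper's low- versus high-density regimes.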

  8. DEM L241, A SUPERNOVA REMNANT CONTAINING A HIGH-MASS X-RAY BINARY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seward, F. D.; Charles, P. A.; Foster, D. L.

    2012-11-10

    A Chandra observation of the Large Magellanic Cloud supernova remnant DEM L241 reveals an interior unresolved source which is probably an accretion-powered binary. The optical counterpart is an O5III(f) star making this a high-mass X-ray binary with an orbital period likely to be of the order of tens of days. Emission from the remnant interior is thermal and spectral information is used to derive density and mass of the hot material. Elongation of the remnant is unusual and possible causes of this are discussed. The precursor star probably had mass >25 M {sub Sun}.

  9. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

    Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.

  10. Spatial patterns and biodiversity in off-lattice simulations of a cyclic three-species Lotka-Volterra model

    NASA Astrophysics Data System (ADS)

    Avelino, P. P.; Bazeia, D.; Losano, L.; Menezes, J.; de Oliveira, B. F.

    2018-02-01

    Stochastic simulations of cyclic three-species spatial predator-prey models are usually performed in square lattices with nearest-neighbour interactions starting from random initial conditions. In this letter we describe the results of off-lattice Lotka-Volterra stochastic simulations, showing that the emergence of spiral patterns does occur for sufficiently high values of the (conserved) total density of individuals. We also investigate the dynamics in our simulations, finding an empirical relation characterizing the dependence of the characteristic peak frequency and amplitude on the total density. Finally, we study the impact of the total density on the extinction probability, showing how a low population density may jeopardize biodiversity.

  11. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  12. Fish community changes in the St. Louis River estuary, Lake Superior, 1989-1996: Is it ruffe or population dynamics?

    USGS Publications Warehouse

    Bronte, Charles R.; Evrard, Lori M.; Brown, William P.; Mayo, Kathleen R.; Edwards, Andrew J.

    1998-01-01

    Ruffe (Gymnocephalus cernuus) have been implicated in density declines of native species through egg predation and competition for food in some European waters where they were introduced. Density estimates for ruffe and principal native fishes in the St. Louis River estuary (western Lake Superior) were developed for 1989 to 1996 to measure changes in the fish community in response to an unintentional introduction of ruffe. During the study, ruffe density increased and the densities of several native species decreased. The reductions of native stocks were compared to the natural population dynamics of the same species from Chequamegon Bay, Lake Superior (an area with very few ruffe), where there was a 24-year record of density. Using these data, short- and long-term variations in catch and correlations among species within years were compared, and species-specific distributions of observed trends in abundance of native fishes in Chequamegon Bay, indexed by the slopes of densities across years, were developed. From these distributions and our observed trend-line slopes from the St. Louis River, probabilities of measuring negative change at the magnitude observed in the St. Louis River were estimated. Compared with trends in Chequamegon Bay, there was a high probability of obtaining the negative slopes measured for most species, which suggests that natural population dynamics, rather than interactions with ruffe, could explain the declines. Variable recruitment, which was not related to ruffe density, and associated density-dependent changes in mortality likely were responsible for density declines of native species.

  13. Recombination dynamics in In{sub x}Ga{sub 1−x}N quantum wells—Contribution of excited subband recombination to carrier leakage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulz, T.; Markurt, T.; Albrecht, M.

    2014-11-03

    The recombination dynamics of In{sub x}Ga{sub 1−x}N single quantum wells are investigated. By comparing the photoluminescence (PL) decay spectra with simulated emission spectra obtained by a Schrödinger-Poisson approach, we give evidence that recombination from higher subbands contributes to the emission of the quantum well at high excitation densities. This recombination path appears as a shoulder on the high-energy side of the spectrum at high charge carrier densities and exhibits decay times in the range of picoseconds. Due to the weaker confinement of the excited subband states, a distinct proportion of the probability density function lies outside the quantum well, thus contributing to charge carrier loss. By estimating the current density in our time-resolved PL experiments, we show that the onset of this loss mechanism occurs in the droop-relevant regime above 20 A/cm{sup 2}.

  14. Spatial correlations and probability density function of the phase difference in a developed speckle-field: numerical and natural experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mysina, N Yu; Maksimova, L A; Ryabukho, V P

    Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)

  15. Study of discharge cleaning process in JIPP T-2 Torus by residual gas analyzer

    NASA Astrophysics Data System (ADS)

    Noda, N.; Hirokura, S.; Taniguchi, Y.; Tanahashi, S.

    1982-12-01

    During discharge cleaning, the decay time of the water vapor pressure changes when the pressure reaches a certain level. The long decay time observed in the later phase can be interpreted as a result of a slow deoxidization rate of chromium oxide, which may dominate the cleaning process in this phase. Optimization of the plasma density for cleaning is discussed by comparing the experimental results on the density dependence of water vapor pressure with a zero-dimensional calculation of particle balance. One essential point for effective cleaning is raising the electron density of the plasma high enough that the dissociation loss rate of H2O is as large as the sticking loss rate. A density as high as 10^11 cm^-3 is required for a clean surface condition, where the sticking probability is presumed to be around 0.5.

  16. Simulation study on characteristics of long-range interaction in randomly asymmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying

    2015-04-01

    In this paper we theoretically investigate, via Monte Carlo simulations, the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update. Particles in the model first try to hop over successive unoccupied sites with a probability q, which differs from previous exclusion process models. The probability q may represent the random access of particles. Numerical simulations for stationary particle currents, density profiles, and phase diagrams are obtained. There are three possible stationary phases in the system: the low-density (LD) phase, the high-density (HD) phase, and the maximal-current (MC) phase. Interestingly, the bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.

  17. Probabilistic distribution and stochastic P-bifurcation of a hybrid energy harvester under colored noise

    NASA Astrophysics Data System (ADS)

    Mokem Fokou, I. S.; Nono Dueyou Buckjohn, C.; Siewe Siewe, M.; Tchawoua, C.

    2018-03-01

    In this manuscript, a hybrid energy harvesting system combining piezoelectric and electromagnetic transduction and subjected to colored noise is investigated. By using the stochastic averaging method, the stationary probability density functions of the amplitudes are obtained and reveal interesting dynamics related to the long-term behavior of the device. From the stationary probability densities, we discuss the stochastic bifurcation through the qualitative changes, showing that the noise intensity, the correlation time, and other system parameters can be treated as bifurcation parameters. Numerical simulations are made for comparison with the analytical findings. The mean first passage time (MFPT) is computed numerically in order to investigate the stability of the system. By computing the mean residence time (TMR), we explore the stochastic resonance phenomenon and show how it is related to the correlation time of the colored noise and to high output power.

  18. N -tag probability law of the symmetric exclusion process

    NASA Astrophysics Data System (ADS)

    Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb

    2018-06-01

    The symmetric exclusion process (SEP), in which particles hop symmetrically on a discrete line with hard-core constraints, is a paradigmatic model of subdiffusion in confined systems. This anomalous behavior is a direct consequence of strong spatial correlations induced by the requirement that the particles cannot overtake each other. Even if this fact has been recognized qualitatively for a long time, up to now there has been no full quantitative determination of these correlations. Here we study the joint probability distribution of an arbitrary number of tagged particles in the SEP. We determine analytically its large-time limit for an arbitrary density of particles, and its full dynamics in the high-density limit. In this limit, we obtain the time-dependent large deviation function of the problem and unveil a universal scaling form shared by the cumulants.
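    A minimal random-sequential-update simulation of the SEP on a ring shows the hard-core constraint at work; parameters are illustrative. The no-overtaking property behind the strong spatial correlations is visible in the unwrapped tagged-particle coordinates, which never cross:

```python
import random

random.seed(7)

# SEP on a ring: L sites, Np hard-core particles, random sequential update.
L, Np, steps = 100, 50, 20_000
init = sorted(random.sample(range(L), Np))   # initial (wrapped) positions
site = list(init)
disp = [0] * Np                              # net displacement of each tagged particle
occ = [False] * L
for s in site:
    occ[s] = True

for _ in range(steps):
    i = random.randrange(Np)                 # pick a tagged particle
    d = random.choice([-1, 1])               # symmetric hop attempt
    target = (site[i] + d) % L
    if not occ[target]:                      # exclusion: hop rejected if occupied
        occ[site[i]] = False
        occ[target] = True
        site[i] = target
        disp[i] += d

# Unwrapped coordinates preserve the initial order: no particle ever overtakes.
unwrapped = [s0 + dx for s0, dx in zip(init, disp)]
```

    Tracking disp for a single tagged particle over many realizations would exhibit the subdiffusive growth of its mean-squared displacement that the paper quantifies.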

  19. Spacecraft Collision Avoidance

    NASA Astrophysics Data System (ADS)

    Bussy-Virat, Charles

    The rapid increase of the number of objects in orbit around the Earth poses a serious threat to operational spacecraft and astronauts. In order to effectively avoid collisions, mission operators need to assess the risk of collision between the satellite and any other object whose orbit is likely to approach its trajectory. Several algorithms predict the probability of collision but have limitations that impair the accuracy of the prediction. An important limitation is that uncertainties in the atmospheric density are usually not taken into account in the propagation of the covariance matrix from current epoch to closest approach time. The Spacecraft Orbital Characterization Kit (SpOCK) was developed to accurately predict the positions and velocities of spacecraft. The central capability of SpOCK is a high accuracy numerical propagator of spacecraft orbits and computations of ancillary parameters. The numerical integration uses a comprehensive modeling of the dynamics of spacecraft in orbit that includes all the perturbing forces that a spacecraft is subject to in orbit. In particular, the atmospheric density is modeled by thermospheric models to allow for an accurate representation of the atmospheric drag. SpOCK predicts the probability of collision between two orbiting objects taking into account the uncertainties in the atmospheric density. Monte Carlo procedures are used to perturb the initial position and velocity of the primary and secondary spacecraft from their covariance matrices. Developed in C, SpOCK supports parallelism to quickly assess the risk of collision so it can be used operationally in real time. The upper atmosphere of the Earth is strongly driven by the solar activity. In particular, abrupt transitions from slow to fast solar wind cause important disturbances of the atmospheric density, hence of the drag acceleration that spacecraft are subject to. 
The Probability Distribution Function (PDF) model was developed to predict the solar wind speed five days in advance. In particular, the PDF model is able to predict rapid enhancements in the solar wind speed. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. Ensemble forecasts provide the forecasters with an estimation of the uncertainty in the prediction, which can be used to derive uncertainties in the atmospheric density and in the drag acceleration. The dissertation then demonstrates that uncertainties in the atmospheric density result in large uncertainties in the prediction of the probability of collision. As an example, the effects of a geomagnetic storm on the probability of collision are illustrated. The research aims at providing tools and analyses that help understand and predict the effects of uncertainties in the atmospheric density on the probability of collision. The ultimate motivation is to support mission operators in making the correct decision with regard to a potential collision avoidance maneuver by providing an uncertainty on the prediction of the probability of collision instead of a single value. This approach can help avoid performing unnecessary costly maneuvers, while making sure that the risk of collision is fully evaluated.
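    The Monte Carlo procedure described (perturb the relative state from the covariance, count close approaches) reduces, in the simplest encounter-plane picture, to the sketch below. The miss distance, uncertainties, and hard-body radius are made-up numbers, not SpOCK's model:

```python
import math
import random

random.seed(0)

# Illustrative 2-D encounter-plane picture (all numbers hypothetical):
# relative miss vector at closest approach ~ N(mu, diag(sigma^2)), with
# sigma inflated to reflect atmospheric-density uncertainty.
mu = (120.0, 40.0)        # nominal miss-vector components [m]
sigma = (80.0, 60.0)      # position uncertainties [m]
radius = 20.0             # combined hard-body radius [m]

trials, hits = 100_000, 0
for _ in range(trials):
    dx = random.gauss(mu[0], sigma[0])
    dy = random.gauss(mu[1], sigma[1])
    if math.hypot(dx, dy) < radius:
        hits += 1
pc = hits / trials        # Monte Carlo estimate of the probability of collision
```

    Enlarging sigma to reflect a density forecast error visibly changes pc, which is the sensitivity the dissertation quantifies.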

  20. On the probability distribution function of the mass surface density of molecular clouds. II.

    NASA Astrophysics Data System (ADS)

    Fischera, Jörg

    2014-11-01

    The probability distribution function (PDF) of the mass surface density of molecular clouds provides essential information about the structure of molecular cloud gas and condensed structures out of which stars may form. In general, the PDF shows two basic components: a broad distribution around the maximum with resemblance to a log-normal function, and a tail at high mass surface densities attributed to turbulence and self-gravity. In a previous paper, the PDF of condensed structures was analyzed and an analytical formula presented based on a truncated radial density profile, ρ(r) = ρ_c/(1 + (r/r_0)²)^(n/2), with central density ρ_c and inner radius r_0, widely used in astrophysics as a generalization of physical density profiles. In this paper, the results are applied to analyze the PDF of self-gravitating, isothermal, pressurized, spherical (Bonnor-Ebert spheres) and cylindrical condensed structures, with emphasis on the dependence of the PDF on the external pressure p_ext and on the overpressure q⁻¹ = p_c/p_ext, where p_c is the central pressure. Apart from individual clouds, we also consider ensembles of spheres or cylinders, where effects caused by a variation of the pressure ratio, a distribution of condensed cores within a turbulent gas, and (in the case of cylinders) a distribution of inclination angles on the mean PDF are analyzed. The probability distribution of pressure ratios q⁻¹ is assumed to be given by P(q⁻¹) ∝ q^(-k₁)/(1 + (q₀/q)^γ)^((k₁+k₂)/γ), where k₁, γ, k₂, and q₀ are fixed parameters. The PDF of individual spheres with overpressures below ~100 is well represented by the PDF of a sphere with an analytical density profile with n = 3. At higher pressure ratios, the PDF at mass surface densities Σ ≪ Σ(0), where Σ(0) is the central mass surface density, asymptotically approaches the PDF of a sphere with n = 2. Consequently, the power-law asymptote at mass surface densities above the peak steepens from P_sph(Σ) ∝ Σ⁻² to P_sph(Σ) ∝ Σ⁻³. The corresponding asymptote of the PDF of cylinders at large q⁻¹ is approximately given by P_cyl(Σ) ∝ Σ^(-4/3) (1 - (Σ/Σ(0))^(2/3))^(-1/2). The distribution of overpressures q⁻¹ produces a power-law asymptote at high mass surface densities given by ∝ Σ^(-2k₂-1) (spheres) or ∝ Σ^(-2k₂) (cylinders). Appendices are available in electronic form at http://www.aanda.org
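    The mass surface density underlying the PDF can be computed directly from the truncated profile by integrating along a chord through the sphere. This sketch uses the n = 2 case with toy parameters; Σ(b) falls monotonically with impact parameter b, which is what shapes the high-Σ tail of the PDF:

```python
import math

# Truncated profile rho(r) = rho_c / (1 + (r/r0)^2)^(n/2) with n = 2;
# parameters are toy values, not those of the paper.
rho_c, r0, R = 1.0, 1.0, 10.0

def rho(r):
    return rho_c / (1.0 + (r / r0) ** 2)

def surface_density(b, nz=2000):
    """Sigma(b) = 2 * integral_0^zmax rho(sqrt(b^2 + z^2)) dz,
    evaluated with the midpoint rule through a sphere truncated at R."""
    zmax = math.sqrt(R * R - b * b)
    dz = zmax / nz
    return 2.0 * sum(rho(math.hypot(b, (k + 0.5) * dz)) * dz for k in range(nz))

# Sigma decreases monotonically from the central value toward the cloud edge.
profile = [surface_density(0.05 * R * i) for i in range(1, 20)]
```

    For n = 2 the chord integral has the closed form Σ(b) = 2 ρ_c r_0²/a · arctan(z_max/a) with a = sqrt(r_0² + b²), so the quadrature can be checked directly.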

  1. Unstable density distribution associated with equatorial plasma bubble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.

    2016-04-15

    In this work, we present a simulation study of equatorial plasma bubble (EPB) in the evening-time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside the EPB. Inside the density depletion that eventually evolves as an EPB, both density and updraft are functions of space, from which the density as an implicit function of updraft velocity, or the density distribution function, is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows into an EPB. This non-Maxwellian distribution is of a gentle-bump type, in agreement with the recently reported distribution within EPBs from space-borne measurements, offering favorable conditions for small-scale kinetic instabilities.

  2. Estimating population density and connectivity of American mink using spatial capture-recapture

    USGS Publications Warehouse

    Fuller, Angela K.; Sutherland, Christopher S.; Royle, Andy; Hare, Matthew P.

    2016-01-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated in a model that simultaneously estimates abundance while accounting for the connectivity of a landscape. We demonstrate an application of capture–recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture–recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture–recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.

  3. Estimating population density and connectivity of American mink using spatial capture-recapture.

    PubMed

    Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P

    2016-06-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated in a model that simultaneously estimates abundance while accounting for the connectivity of a landscape. We demonstrate an application of capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.
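    The non-Euclidean "ecological distance" idea behind the least-cost path model can be sketched with Dijkstra's algorithm on a resistance grid; the grid, costs, and barrier below are invented for illustration, not the mink landscape model:

```python
import heapq

def least_cost(grid, src, dst):
    """Dijkstra over 4-connected cells; moving into a cell costs its resistance."""
    rows, cols = len(grid), len(grid[0])
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                        # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

open_land = [[1.0] * 5 for _ in range(5)]        # uniform resistance
with_barrier = [row[:] for row in open_land]
for r in range(4):                               # high-resistance wall with one gap
    with_barrier[r][2] = 100.0

d_open = least_cost(open_land, (0, 0), (0, 4))
d_barrier = least_cost(with_barrier, (0, 0), (0, 4))
```

    In an SCR encounter model, this least-cost distance would replace the Euclidean distance in the detection function, so two points separated by a barrier (here d_barrier > d_open) are effectively farther apart.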

  4. Packing microstructure and local density variations of experimental and computational pebble beds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auwerda, G. J.; Kloosterman, J. L.; Lathouwers, D.

    2012-07-01

    In pebble bed type nuclear reactors the fuel is contained in graphite pebbles, which form a randomly stacked bed with a non-uniform packing density. These variations can influence local coolant flow and power density and are a possible cause of hotspots. To analyse local density variations, computational methods are needed that can generate randomly stacked pebble beds with a realistic packing structure on a pebble-to-pebble level. We first compare various properties of the local packing structure of a computed bed with those of an image made using computer aided X-ray tomography, looking at properties in the bulk of the bed and near the wall separately. Especially for the bulk of the bed, properties of the computed bed agree well with the scanned bed and with the literature, giving confidence that our method generates beds with a realistic packing microstructure. Results also show the packing structure is different near the wall than in the bulk of the bed, with pebbles near the wall forming ordered layers similar to hexagonal close packing. Next, variations in the local packing density are investigated by comparing probability density functions of the packing fraction of small clusters of pebbles throughout the bed. Especially near the wall, large variations in local packing fraction exist, with a higher probability of both clusters of pebbles with low (<0.6) and high (>0.65) packing fraction, which could significantly affect flow rates and, together with higher power densities, could result in hotspots. (authors)

  5. A Two-Piece Microkeratome-Assisted Mushroom Keratoplasty Improves the Outcomes and Survival of Grafts Performed in Eyes with Diseased Stroma and Healthy Endothelium (An American Ophthalmological Society Thesis)

    PubMed Central

    Busin, Massimo; Madi, Silvana; Scorcia, Vincenzo; Santorum, Paolo; Nahum, Yoav

    2015-01-01

    Purpose: To test the hypothesis that a new microkeratome-assisted penetrating keratoplasty (PK) technique employing transplantation of a two-piece mushroom-shaped graft may result in better visual outcomes and graft survival rates than those of conventional PK. Methods: Retrospective chart review of 96 eyes at low risk and 76 eyes at high risk for immunologic rejection (all with full-thickness central corneal opacity and otherwise healthy endothelium) undergoing mushroom PK between 2004 and 2012 at our institution. Outcome measures were best-corrected visual acuity (BCVA), refraction, corneal topography, endothelial cell density, graft rejection, and survival probability. Results: Five years postoperatively, BCVA of 20/40 and 20/20 was recorded in 100% and over 50% of eyes, respectively. Mean spherical equivalent of refractive error did not vary significantly over a 5-year period; astigmatism always averaged below 4 diopters, with no statistically significant change over time, and was of the regular type in over 90% of eyes. Endothelial cell density decreased to about 40% of the eye bank count 2 years after mushroom PK and did not change significantly thereafter. Five years postoperatively, probabilities of graft immunologic rejection and graft survival were below 5% and above 95%, respectively. There was no statistically significant difference in endothelial cell loss, graft rejection, and survival probability between low-risk and high-risk subgroups. Conclusions: Refractive and visual outcomes of mushroom PK compare favorably with those of conventional full-thickness keratoplasty. In eyes at high risk for immunologic rejection, mushroom PK provides a considerably higher probability of graft survival than conventional PK. PMID:26538771

  6. Study on length distribution of ramie fibers

    USDA-ARS?s Scientific Manuscript database

    The extra-long length of ramie fibers and the high variation in fiber length have a negative impact on the spinning processes. In order to better characterize ramie fiber length, in this research, the probability density function of the mixture model applied in the characterization of cotton...

  7. Self-focusing of a high current density ion beam extracted with concave electrodes in a low energy region around 150 eV.

    PubMed

    Hirano, Y; Kiyama, S; Koguchi, H; Sakakita, H

    2014-02-01

    Spontaneous self-focusing of an ion beam with high current density (Jc ∼ 2 mA/cm², Ib ∼ 65 mA) in a low energy region (∼150 eV) is observed in a hydrogen ion beam extracted from an ordinary bucket type ion source with three electrodes having a concave shape (acceleration, deceleration, and grounded electrodes). The focusing appears abruptly in the beam energy region over ∼135-150 eV, and Jc jumps from 0.7 to 2 mA/cm². Simultaneously, a strong electron flow with almost the same current density also appears in the beam region. These electrons probably compensate the ion space charge and suppress the beam divergence.

  8. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, rendering pre-classification of objects and feature extraction unnecessary; the method is extremely general and can solve any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
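
    The output form described above, a Gaussian mixture per object, can be sketched as follows. This is an illustration with invented mixture parameters, not DCMDN output: it evaluates the mixture PDF and computes the PIT value (the predictive CDF at the true redshift), which should be uniformly distributed over a well-calibrated ensemble.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical 3-component Gaussian mixture for one object's redshift PDF
# (weights, means, standard deviations are made-up values).
weights = np.array([0.2, 0.5, 0.3])
means = np.array([0.4, 0.6, 0.9])
sigmas = np.array([0.05, 0.08, 0.10])

def gmm_pdf(z):
    """Probability density of the mixture at redshift z."""
    comps = weights / (sigmas * np.sqrt(2 * np.pi)) * np.exp(
        -0.5 * ((z - means) / sigmas) ** 2)
    return float(comps.sum())

def gmm_cdf(z):
    """Cumulative distribution of the mixture at z."""
    return float(sum(w * 0.5 * (1.0 + erf((z - m) / (s * sqrt(2.0))))
                     for w, m, s in zip(weights, means, sigmas)))

# PIT: predictive CDF evaluated at the (here assumed) true redshift.
z_true = 0.62
pit = gmm_cdf(z_true)
print(round(pit, 3))
```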

  9. Partitioning the sources of demographic variation reveals density-dependent nest predation in an island bird population

    PubMed Central

    Sofaer, Helen R; Sillett, T Scott; Langin, Kathryn M; Morrison, Scott A; Ghalambor, Cameron K

    2014-01-01

    Ecological factors often shape demography through multiple mechanisms, making it difficult to identify the sources of demographic variation. In particular, conspecific density can influence both the strength of competition and the predation rate, but density-dependent competition has received more attention, particularly among terrestrial vertebrates and in island populations. A better understanding of how both competition and predation contribute to density-dependent variation in fecundity can be gained by partitioning the effects of density on offspring number from its effects on reproductive failure, while also evaluating how biotic and abiotic factors jointly shape demography. We examined the effects of population density and precipitation on fecundity, nest survival, and adult survival in an insular population of orange-crowned warblers (Oreothlypis celata) that breeds at high densities and exhibits a suite of traits suggesting strong intraspecific competition. Breeding density had a negative influence on fecundity, but it acted by increasing the probability of reproductive failure through nest predation, rather than through competition, which was predicted to reduce the number of offspring produced by successful individuals. Our results demonstrate that density-dependent nest predation can underlie the relationship between population density and fecundity even in a high-density, insular population where intraspecific competition should be strong. PMID:25077023

  11. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    ERIC Educational Resources Information Center

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
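
    The concept the graphic conveys, that for a continuous variable the probability of an interval is the area under the density, can also be shown numerically. A minimal sketch, using a standard normal density and a trapezoidal approximation (the parameters and function names are illustrative):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def prob_between(a, b, n=100_000):
    """P(a <= X <= b) as the trapezoidal area under the pdf on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (normal_pdf(a) + normal_pdf(b)) + sum(
        normal_pdf(a + i * h) for i in range(1, n))
    return s * h

# About 68.3% of the mass of a standard normal lies within one sigma.
print(round(prob_between(-1.0, 1.0), 4))  # -> 0.6827
```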

  12. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  13. Five willow varieties cultivated across diverse field environments reveal stem density variation associated with high tension wood abundance

    PubMed Central

    Berthod, Nicolas; Brereton, Nicholas J. B.; Pitre, Frédéric E.; Labrecque, Michel

    2015-01-01

    Sustainable and inexpensive production of biomass is necessary to make biofuel production feasible, but represents a challenge. Five short rotation coppice willow cultivars, selected for high biomass yield, were cultivated on sites at four diverse regions of Quebec in contrasting environments. Wood composition and anatomical traits were characterized. Tree height and stem diameter were measured to evaluate growth performance of the cultivars according to the diverse pedoclimatic conditions. Each cultivar showed very specific responses to its environment. While no significant variation in lignin content was observed between sites, there was variation between cultivars. Surprisingly, the pattern of substantial genotype variability in stem density was maintained across all sites. However, wood anatomy did differ between sites in a cultivar (producing high and low density wood), suggesting a probable response to an abiotic stress. Furthermore, twice as many cellulose-rich G-fibers, comprising over 50% of secondary xylem, were also found in the high density wood, a finding with potential to bring higher value to the lignocellulosic bioethanol industry. PMID:26583024

  14. Evaluating impacts using a BACI design, ratios, and a Bayesian approach with a focus on restoration.

    PubMed

    Conner, Mary M; Saunders, W Carl; Bouwes, Nicolaas; Jordan, Chris

    2015-10-01

    Before-after-control-impact (BACI) designs are an effective method to evaluate natural and human-induced perturbations on ecological variables when treatment sites cannot be randomly chosen. While effect sizes of interest can be tested with frequentist methods, Bayesian Markov chain Monte Carlo (MCMC) sampling methods allow probabilities of effect sizes, such as a ≥20% increase in density after restoration, to be estimated directly. Although BACI and Bayesian methods are widely used for assessing natural and human-induced impacts in field experiments, the application of hierarchical Bayesian modeling with MCMC sampling to BACI designs is less common. Here, we combine these approaches and extend the typical presentation of results with an easy-to-interpret ratio, which provides an answer to the main study question: "How much impact did a management action or natural perturbation have?" As an example of this approach, we evaluate the impact of a restoration project, which implemented beaver dam analogs, on survival and density of juvenile steelhead. Results indicated the probabilities of a ≥30% increase were high for survival and density after the dams were installed, 0.88 and 0.99, respectively, while probabilities for a higher increase of ≥50% were variable, 0.17 and 0.82, respectively. This approach demonstrates a useful extension of Bayesian methods that can easily be generalized to other study designs, from simple (e.g., single-factor ANOVA, paired t test) to more complicated block designs (e.g., crossover, split-plot). This approach is valuable for estimating the probabilities of impacts of restoration or other management actions.
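
    Once MCMC draws are in hand, the ratio-based probability statement reduces to counting posterior samples. A minimal sketch, assuming the posterior draws are available (here they are simulated lognormal stand-ins, not real model output):

```python
import numpy as np

# Stand-ins for MCMC posterior draws of mean density before and after a
# restoration action (lognormal draws with assumed parameters).
rng = np.random.default_rng(42)
n_draws = 20_000
density_before = rng.lognormal(mean=np.log(10.0), sigma=0.15, size=n_draws)
density_after = rng.lognormal(mean=np.log(14.0), sigma=0.15, size=n_draws)

# The easy-to-interpret summary: P(after/before >= 1 + threshold).
ratio = density_after / density_before
for threshold in (0.30, 0.50):
    p = np.mean(ratio >= 1.0 + threshold)
    print(f"P(increase >= {threshold:.0%}) = {p:.2f}")
```

    With real MCMC output the same one-liner applies draw-by-draw, which is what makes the probability-of-effect-size statement so direct.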

  15. Electron beam emission from a diamond-amplifier cathode.

    PubMed

    Chang, Xiangyun; Wu, Qiong; Ben-Zvi, Ilan; Burrill, Andrew; Kewisch, Jorg; Rao, Triveni; Smedley, John; Wang, Erdong; Muller, Erik M; Busby, Richard; Dimitrov, Dimitre

    2010-10-15

    The diamond amplifier (DA) is a new device for generating high-current, high-brightness electron beams. Our transmission-mode tests show that, with single-crystal, high-purity diamonds, the peak current density is greater than 400 mA/mm², while the average density can be more than 100 mA/mm². The gain of the primary electrons easily exceeds 200 and is independent of their density within the practical range of DA applications. We also observed electron emission: the maximum emission gain measured was 40, and the bunch charge was 50 pC/0.5 mm². There was a 35% probability of the emission of an electron from the hydrogenated surface in our tests. We identified a mechanism of slow charging of the diamond, due to thermal ionization of surface states, that cancels the applied field within it. We also demonstrated that a hydrogenated diamond is extremely robust.

  16. Sulfonated Copper Phthalocyanine/Sulfonated Polysulfone Composite Membrane for Ionic Polymer Actuators with High Power Density and Fast Response Time.

    PubMed

    Kwon, Taehoon; Cho, Hyeongrae; Lee, Jang-Woo; Henkensmeier, Dirk; Kang, Youngjong; Koo, Chong Min

    2017-08-30

    Ionic polymer composite membranes based on sulfonated poly(arylene ether sulfone) (SPAES) and copper(II) phthalocyanine tetrasulfonic acid (CuPCSA) are assembled into bending ionic polymer actuators. CuPCSA is an organic filler with a very high sulfonation degree (IEC = 4.5 mmol H⁺/g) that can be homogeneously dispersed on the molecular scale into the SPAES membrane, probably due to its good dispersibility in SPAES-containing solutions. SPAES/CuPCSA actuators exhibit larger ion conductivity (102 mS cm⁻¹), tensile modulus (208 MPa), strength (101 MPa), and strain (1.21%), an exceptionally faster response to electrical stimuli, and a larger mechanical power density (3028 W m⁻³) than ever reported for ion-conducting polymer actuators. This outstanding actuation performance makes SPAES/CuPCSA composite membrane actuators attractive for next-generation transducers with high power density, which are currently being developed, e.g., for underwater propulsion and endoscopic surgery.

  17. Speech intelligibility at high helium-oxygen pressures.

    PubMed

    Rothman, H B; Gelfand, R; Hollien, H; Lambertsen, C J

    1980-12-01

    Word-list intelligibility scores of unprocessed speech (mean of 4 subjects) were recorded in helium-oxygen atmospheres at stable pressures equivalent to 1600, 1400, 1200, 1000, 860, 690, 560, 392, and 200 fsw during Predictive Studies IV-1975 by wide-bandwidth condenser microphones (frequency responses not degraded by increased gas density). Intelligibility scores were substantially lower in helium-oxygen at 200 fsw than in air at 1 ATA, but there was little difference between 200 fsw and 1600 fsw. A previously documented prominent decrease in intelligibility of speech between 200 and 600 fsw, attributed to helium and pressure, was probably due to degradation of microphone frequency response by high gas density.

  18. A prominent large high-density lipoprotein at birth enriched in apolipoprotein C-I identifies a new group of infants of lower birth weight and younger gestational age

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwiterovich Jr., Peter O.; Cockrill, Steven L.; Virgil, Donna G.

    2003-10-01

    Because low birth weight is associated with adverse cardiovascular risk and death in adults, lipoprotein heterogeneity at birth was studied. A prominent, large high-density lipoprotein (HDL) subclass enriched in apolipoprotein C-I (apoC-I) was found in 19 percent of infants, who had significantly lower birth weights and younger gestational ages and distinctly different lipoprotein profiles than infants with undetectable, possible or probable amounts of apoC-I-enriched HDL. An elevated amount of an apoC-I-enriched HDL identifies a new group of low birth weight infants.

  19. The study of PDF turbulence models in combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    The accurate prediction of turbulent combustion is still beyond the reach of today's computational techniques. It is the consensus of the combustion profession that predictions of chemically reacting flows are poor when conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature, pressure, and density produces excessively large errors. The probability density function (PDF) method is at present the only alternative that uses local instantaneous values of the temperature, density, etc., in predicting the chemical reaction rate, and thus it is the only viable approach for turbulent combustion calculations.

  20. Quantum Jeffreys prior for displaced squeezed thermal states

    NASA Astrophysics Data System (ADS)

    Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin

    1999-09-01

    It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.

  1. Identification of Stochastically Perturbed Autonomous Systems from Temporal Sequences of Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing

    2018-03-01

    The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.

  2. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    PubMed

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
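
    The two-step recipe described above (colour white Gaussian noise in the Fourier domain, then transform the marginal) can be sketched as follows. The spectral filter and the exponential target marginal are assumed choices for illustration, not the paper's specific examples:

```python
import numpy as np
from scipy.special import ndtr  # standard normal CDF

rng = np.random.default_rng(1)
n = 256
white = rng.standard_normal((n, n))  # white Gaussian sample set

# Step 1: colour the noise in the Fourier domain with an (assumed)
# isotropic power-law amplitude filter.
kx = np.fft.fftfreq(n)
k = np.sqrt(kx[None, :] ** 2 + kx[:, None] ** 2)
k[0, 0] = 1.0                        # avoid division by zero at DC
field = np.fft.ifft2(np.fft.fft2(white) * k ** -1.0).real

# Step 2: memoryless transform to the desired marginal: the Gaussian CDF
# maps values to uniform, then the inverse CDF of the target (here Exp(1))
# imposes the non-Gaussian amplitude distribution while approximately
# preserving the spatial correlation structure.
z = (field - field.mean()) / field.std()
u = np.clip(ndtr(z), 1e-12, 1.0 - 1e-12)
expo_field = -np.log1p(-u)           # Exp(1)-distributed, correlated field

print(expo_field.shape)
```

    Note the caveat also present in the paper: the nonlinear marginal transform distorts the correlation function somewhat, which is why the authors describe the method as an engineering approach.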

  3. Trapped surfaces and emergent curved space in the Bose-Hubbard model

    NASA Astrophysics Data System (ADS)

    Caravelli, Francesco; Hamma, Alioscia; Markopoulou, Fotini; Riera, Arnau

    2012-02-01

    A Bose-Hubbard model on a dynamical lattice was introduced in previous work as a spin system analogue of emergent geometry and gravity. Graphs with regions of high connectivity in the lattice were identified as candidate analogues of spacetime geometries that contain trapped surfaces. We carry out a detailed study of these systems and show explicitly that the highly connected subgraphs trap matter. We do this by solving the model in the limit of no back-reaction of the matter on the lattice, and for states with certain symmetries that are natural for our problem. We find that in this case the problem reduces to a one-dimensional Hubbard model on a lattice with variable vertex degree and multiple edges between the same two vertices. In addition, we obtain a (discrete) differential equation for the evolution of the probability density of particles which is closed in the classical regime. This is a wave equation in which the vertex degree is related to the local speed of propagation of probability. This allows an interpretation of the probability density of particles similar to that in analogue gravity systems: matter inside this analogue system sees a curved spacetime. We verify our analytic results by numerical simulations. Finally, we analyze the dependence of localization on a gradual, rather than abrupt, falloff of the vertex degree on the boundary of the highly connected region and find that matter is localized in and around that region.

  4. A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain

    DTIC Science & Technology

    2015-05-18

    An intuitive approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our data. The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a ...
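
    A minimal kernel-density sketch of the approach named in the report, using SciPy's Gaussian-kernel estimator on a hypothetical bimodal sample (a stand-in for measured scintillation data, which is not available here):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical bimodal sample standing in for measured data.
rng = np.random.default_rng(7)
samples = np.concatenate([rng.normal(-2.0, 0.5, 500),
                          rng.normal(1.0, 1.0, 500)])

# Gaussian kernel density estimate; bandwidth via Scott's rule by default.
kde = gaussian_kde(samples)
grid = np.linspace(-5, 5, 1001)
pdf = kde(grid)

# The estimate integrates to ~1 over the grid and peaks near the modes.
area = np.sum(pdf) * (grid[1] - grid[0])
print(round(area, 3))
```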

  5. Desensitization shortens the high-quantal-content endplate current time course in frog muscle with intact cholinesterase.

    PubMed

    Giniatullin, R A; Talantova, M; Vyskocil, F

    1997-08-01

    1. The desensitization induced by bath-applied carbachol or acetylcholine (ACh) and potentiated by proadifen (SKF 525A) was studied in the frog sartorius with intact synaptic acetylcholinesterase (AChE). 2. The reduction in the density and number of postsynaptic receptors produced by desensitization lowered the amplitude of the endplate currents (EPCs) and shortened the EPC decay when the quantal content (m) of the EPC was about 170 and when multiple release of quanta at single active zones was highly probable. The shortening of high-quantal-content EPCs persisted for at least 15 min after the wash-out of agonists, at a time when the amplitude had recovered fully. 3. The decay times of the low-quantal-content EPCs recorded from preparations pretreated with 5 mM Mg²⁺ (m approximately 70) and single-quantum miniature endplate currents (MEPCs) were not affected by carbachol, ACh or proadifen. 4. The desensitization of ACh receptors potentiated by proadifen completely prevented the 6- to 8-fold prolongation of the EPC induced by neostigmine inhibition of synaptic AChE. 5. It is assumed that high-quantal-content EPCs increase the incidence of multiple quanta release at single active zones and the probability of repetitive binding of ACh molecules, which leads to EPC prolongation. The shortening which persists after complete recovery of the amplitude during wash-out of the exogenous agonist is probably due to 'trapping' of ACh molecules by rapidly desensitized receptors and the reduced density of functional AChRs during the quantum action.

  6. Positron annihilation study of the high-Tc (Bi,Pb)₂Sr₂Ca₂Cu₃Oₓ superconductor

    NASA Astrophysics Data System (ADS)

    Lim, H. J.; Byrne, J. G.

    1997-03-01

    Positron lifetime spectroscopy (PLS) and positron Doppler-broadening spectroscopy (PDBS) were applied to the high-Tc lead-doped Bi₂Sr₂Ca₂Cu₃Oₓ (BPSCCO 2223) superconductor as a function of temperature. Neither positron lifetimes nor Doppler parameters (S, W, and S/W) showed significant change through Tc. This may result from the positron density being highest in the open BiO₂ double layers, with no significant positron density in the superconducting CuO₂ layers where positrons, if mainly present, are known to be sensitive to the transition in other high-Tc superconductors. Doppler parameters showed that the probability of positron annihilation with core electrons in the lattice slightly increased, and that with conduction electrons slightly decreased, as temperature decreased from ambient temperature to 20 K. The lifetime associated with positron annihilation in the perfect lattice of the sample (τ₁) was 209 ps, and that due to annihilation at internal surfaces or voids in the sample (τ₂) was about 540 ps, independent of temperature. Finally, the mean lifetime for BSCCO 2223 was about 307 ps.

  7. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
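
    The postulated model can be simulated directly: a slow mean-speed component plus unit-Gaussian fluctuations scaled by a slowly drifting local standard deviation. A fluctuating variance makes the composite heavier-tailed than any single Gaussian, which is what the series-expansion corrections capture. All parameter values below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
t = np.arange(n)

# Slowly varying Gaussian component: gradual change in mean wind speed.
slow = 2.0 * np.sin(2 * np.pi * t / n)

# Locally Gaussian turbulence: unit-Gaussian samples scaled by a slowly
# fluctuating local standard deviation.
local_sigma = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t / n)
turb = local_sigma * rng.standard_normal(n)

velocity = slow + turb  # the superposed model velocity record

# The variance mixture produces positive excess kurtosis (heavy tails)
# in the turbulence component relative to a strict Gaussian.
x = (turb - turb.mean()) / turb.std()
excess_kurtosis = float(np.mean(x ** 4) - 3.0)
print(excess_kurtosis > 0.0)
```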

  8. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  9. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  10. Intermediate Pond Sizes Contain the Highest Density, Richness, and Diversity of Pond-Breeding Amphibians

    PubMed Central

    Semlitsch, Raymond D.; Peterman, William E.; Anderson, Thomas L.; Drake, Dana L.; Ousterhout, Brittany H.

    2015-01-01

    We present data on amphibian density, species richness, and diversity from a 7140-ha area consisting of 200 ponds in the Midwestern U.S. that represents most of the possible lentic aquatic breeding habitats common in this region. Our study includes all possible breeding sites with natural and anthropogenic disturbance processes that can be missing from studies where sampling intensity is low, sample area is small, or partial disturbance gradients are sampled. We tested whether pond area was a significant predictor of density, species richness, and diversity of amphibians and if values peaked at intermediate pond areas. We found that in all cases a quadratic model fit our data significantly better than a linear model. Because small ponds have a high probability of pond drying and large ponds have a high probability of fish colonization and accumulation of invertebrate predators, drying and predation may be two mechanisms driving the peak of density and diversity towards intermediate values of pond size. We also found that not all intermediate sized ponds produced many larvae; in fact, some had low amphibian density, richness, and diversity. Further analyses of the subset of ponds represented in the peak of the area distribution showed that fish, hydroperiod, invertebrate density, and canopy are additional factors that drive density, richness and diversity of ponds up or down, when extremely small or large ponds are eliminated. Our results indicate that fishless ponds at intermediate sizes are more diverse, produce more larvae, and have greater potential to recruit juveniles into adult populations of most species sampled. Further, hylid and chorus frogs are found predictably more often in ephemeral ponds whereas bullfrogs, green frogs, and cricket frogs are found most often in permanent ponds with fish. Our data increase understanding of what factors structure and maintain amphibian diversity across large landscapes. PMID:25906355
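
    The model comparison at the heart of the abstract, a quadratic polynomial in pond size beating a linear one on a hump-shaped response, can be sketched on synthetic data. The data below are illustrative stand-ins, not the study's measurements; the comparison uses a Gaussian AIC:

```python
import numpy as np

# Synthetic hump-shaped response: richness peaks at intermediate log-area.
rng = np.random.default_rng(11)
log_area = np.linspace(0, 4, 120)
richness = 8.0 - 2.5 * (log_area - 2.0) ** 2 + rng.normal(0, 1.0, log_area.size)

def aic(y, yhat, k):
    """Gaussian AIC: n * log(RSS / n) + 2k, with k fitted parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

lin = np.polyval(np.polyfit(log_area, richness, 1), log_area)
quad = np.polyval(np.polyfit(log_area, richness, 2), log_area)

aic_lin = aic(richness, lin, k=2)
aic_quad = aic(richness, quad, k=3)
print(aic_quad < aic_lin)  # the quadratic wins on a hump-shaped response
```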

  11. Steady-state probability density function of the phase error for a DPLL with an integrate-and-dump device

    NASA Technical Reports Server (NTRS)

    Simon, M.; Mileant, A.

    1986-01-01

    The steady-state behavior of a particular type of digital phase-locked loop (DPLL) with an integrate-and-dump circuit following the phase detector is characterized in terms of the probability density function (pdf) of the phase error in the loop. Although the loop is entirely digital from an implementation standpoint, it operates at two extremely different sampling rates. In particular, the combination of a phase detector and an integrate-and-dump circuit operates at a very high rate whereas the loop update rate is very slow by comparison. Because of this dichotomy, the loop can be analyzed by hybrid analog/digital (s/z domain) techniques. The loop is modeled in such a general fashion that previous analyses of the Real-Time Combiner (RTC), Subcarrier Demodulator Assembly (SDA), and Symbol Synchronization Assembly (SSA) fall out as special cases.

  12. On the emergence of a generalised Gamma distribution. Application to traded volume in financial markets

    NASA Astrophysics Data System (ADS)

    Duarte Queirós, S. M.

    2005-08-01

    This letter reports on a stochastic dynamical scenario whose associated stationary probability density function is exactly a generalised form, with a power law instead of exponential decay, of the ubiquitous Gamma distribution. This generalisation, also known as the F-distribution, was empirically proposed for the first time to fit high-frequency stock traded volume distributions in financial markets and verified in experiments with granular material. The dynamical assumption presented herein is based on local temporal fluctuations of the average value of the observable under study. This proposal is related to superstatistics and thus to the current nonextensive statistical mechanics framework. For the specific case of stock traded volume, we connect the local fluctuations in the mean stock traded volume with the typical herding behaviour presented by financial traders. Finally, NASDAQ 1- and 2-minute stock traded volume sequences and probability density functions are numerically reproduced.

  13. Factors affecting breeding season survival of Red-Headed Woodpeckers in South Carolina.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilgo, John, C.; Vukovich, Mark

    2011-11-18

    Red-headed woodpecker (Melanerpes erythrocephalus) populations have declined in the United States and Canada over the past 40 years. However, few demographic studies have been published on the species and none have addressed adult survival. During 2006-2007, we estimated survival probabilities of 80 radio-tagged red-headed woodpeckers during the breeding season in mature loblolly pine (Pinus taeda) forests in South Carolina. We used known-fate models in Program MARK to estimate survival within and between years and to evaluate the effects of foliar cover (number of available cover patches), snag density treatment (high density vs. low density), and sex and age of woodpeckers. Weekly survival probabilities followed a quadratic time trend, being lowest during mid-summer, which coincided with the late nestling and fledgling period. Avian predation, particularly by Cooper's (Accipiter cooperii) and sharp-shinned hawks (A. striatus), accounted for 85% of all mortalities. Our best-supported model estimated an 18-week breeding season survival probability of 0.72 (95% CI = 0.54-0.85) and indicated that the number of cover patches interacted with sex of woodpeckers to affect survival; females with few available cover patches had a lower probability of survival than either males or females with more cover patches. At the median number of cover patches available (n = 6), breeding season survival of females was 0.82 (95% CI = 0.54-0.94) and of males was 0.60 (95% CI = 0.42-0.76). The number of cover patches available to woodpeckers appeared in all 3 of our top models predicting weekly survival, providing further evidence that woodpecker survival was positively associated with availability of cover. Woodpecker survival was not associated with snag density. Our results suggest that protection of ≥0.7 cover patches per ha during vegetation control activities in mature pine forests will benefit survival of this Partners In Flight Watch List species.

  14. Factors associated with stocked cutthroat trout populations in high-mountain lakes

    USGS Publications Warehouse

    Bailey, Paul E.; Hubert, W.A.

    2003-01-01

    High-mountain lakes provide important fisheries in the Rocky Mountains; therefore, we sought to gain an understanding of the relationships among environmental factors, accessibility to anglers, stocking rates, and features of stocks of cutthroat trout Oncorhynchus clarki in high-mountain lakes of the Bighorn Mountains, Wyoming. We sampled fish with experimental gill nets, measured lake habitat features, and calculated factors affecting angler access among 19 lakes that lacked sufficient natural reproduction to support salmonid fisheries and that were stocked at 1-, 2-, or 4-year intervals with fingerling cutthroat trout. We found that angler accessibility was probably the primary factor affecting stock structure, whereas stocking rates affected the densities of cutthroat trout among lakes. The maximum number of years survived after stocking appeared to have the greatest effect on biomass and population structure. Our findings suggest that control of harvest and manipulation of stocking densities can affect the density, biomass, and structure of cutthroat trout stocks in high-elevation lakes.

  15. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  16. Continuous-time random-walk model for financial distributions

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume; Montero, Miquel; Weiss, George H.

    2003-02-01

    We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known. These are the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-deutsche mark futures exchange, finding good agreement between theory and the observed data.
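    The two auxiliary densities make for a compact simulation. The sketch below uses exponential pausing times and Gaussian jump magnitudes as illustrative stand-ins; the paper estimates both densities from futures data rather than assuming parametric forms:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ctrw(n_jumps, mean_wait=1.0, jump_sigma=0.5):
    """Continuous-time random walk: draw pausing times between jumps
    and jump magnitudes from the two auxiliary densities (here
    exponential and Gaussian, purely illustrative choices)."""
    waits = rng.exponential(mean_wait, n_jumps)   # pausing-time density
    jumps = rng.normal(0.0, jump_sigma, n_jumps)  # jump-magnitude density
    times = np.cumsum(waits)                      # jump epochs
    prices = np.cumsum(jumps)                     # log-price path at jumps
    return times, prices

times, prices = simulate_ctrw(10_000)
```

    Swapping in empirical pausing-time and jump-size densities (e.g. resampled from tick data) is the step that connects this toy to the paper's fit.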

  17. The Independent Effects of Phonotactic Probability and Neighbourhood Density on Lexical Acquisition by Preschool Children

    ERIC Educational Resources Information Center

    Storkel, Holly L.; Lee, Su-Yeon

    2011-01-01

    The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…

  18. Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers

    ERIC Educational Resources Information Center

    MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara

    2013-01-01

    Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…

  19. Millimeter-wave Line Ratios and Sub-beam Volume Density Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroy, Adam K.; Gallagher, Molly; Usero, Antonio

    We explore the use of mm-wave emission line ratios to trace molecular gas density when observations integrate over a wide range of volume densities within a single telescope beam. For observations targeting external galaxies, this case is unavoidable. Using a framework similar to that of Krumholz and Thompson, we model emission for a set of common extragalactic lines from lognormal and power law density distributions. We consider the median density of gas that produces emission and the ability to predict density variations from observed line ratios. We emphasize line ratio variations because these do not require us to know the absolute abundance of our tracers. Patterns of line ratio variations have the potential to illuminate the high-end shape of the density distribution, and to capture changes in the dense gas fraction and median volume density. Our results with and without a high-density power law tail differ appreciably; we highlight better knowledge of the probability density function (PDF) shape as an important area. We also show the implications of sub-beam density distributions for isotopologue studies targeting dense gas tracers. Differential excitation often implies a significant correction to the naive case. We provide tabulated versions of many of our results, which can be used to interpret changes in mm-wave line ratios in terms of adjustments to the underlying density distributions.
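    For the pure lognormal case, the mass-weighted statistics behind "the median density of gas that produces emission" have a closed form: weighting a lognormal in density n by one factor of n shifts the log-mean up by sigma squared. The parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

# Volume-weighted lognormal density PDF with log-mean mu and width sigma.
# Mass weighting multiplies the PDF by n, giving another lognormal with
# log-mean mu + sigma**2, so the mass-weighted median is exp(mu + sigma**2).
mu, sigma = np.log(100.0), 1.5            # illustrative parameters
median_by_volume = np.exp(mu)             # median density by volume
median_by_mass = np.exp(mu + sigma**2)    # median density by mass
```

    The widening gap between the two medians as sigma grows is one reason the sub-beam PDF shape matters so much for line-ratio interpretation.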

  20. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
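    The importance-sampling-with-multiplicative-update idea can be sketched in Wang-Landau style on a toy system whose exact density of states is known (N independent spins, energy = number of up spins, so g(E) = C(N, E)). This illustrates the update-factor mechanism only; the paper's self-consistent scheme differs in detail:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 8                          # toy system: 8 independent spins
log_g = np.zeros(N + 1)        # running estimate of log g(E)
log_f = 1.0                    # log of the multiplicative update factor
spins = rng.integers(0, 2, N)
E = int(spins.sum())

for _ in range(25):                              # refinement stages
    hist = np.zeros(N + 1)
    for _ in range(10_000):
        i = int(rng.integers(N))
        E_new = E + (1 - 2 * int(spins[i]))      # energy after flipping spin i
        # Accept with min(1, g(E)/g(E_new)): biases the walk toward
        # rarely visited energies, flattening the energy histogram.
        if np.log(rng.random()) < log_g[E] - log_g[E_new]:
            spins[i] ^= 1
            E = E_new
        log_g[E] += log_f                        # multiplicative update
        hist[E] += 1
    if hist.min() > 0.8 * hist.mean():           # histogram flat enough?
        log_f /= 2.0                             # refine the update factor
    if log_f < 1e-3:
        break
```

    Up to an additive constant, log_g converges toward log C(8, E), peaking at E = 4.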

  1. The statistics of peaks of Gaussian random fields. [cosmological density fluctuations]

    NASA Technical Reports Server (NTRS)

    Bardeen, J. M.; Bond, J. R.; Kaiser, N.; Szalay, A. S.

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of 'upcrossing' points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima.

  2. Anisotropic electrical conduction and reduction in dangling-bond density for polycrystalline Si films prepared by catalytic chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Niikura, Chisato; Masuda, Atsushi; Matsumura, Hideki

    1999-07-01

    Polycrystalline Si (poly-Si) films with a high crystalline fraction and low dangling-bond density were prepared by catalytic chemical vapor deposition (Cat-CVD), often called hot-wire CVD. Directional anisotropy in electrical conduction, probably due to structural anisotropy, was observed for Cat-CVD poly-Si films. A novel method to separately characterize the crystalline and amorphous phases in poly-Si films using anisotropic electrical conduction was proposed. On the basis of results obtained by the proposed method and electron spin resonance measurements, the dangling-bond density of Cat-CVD poly-Si films was reduced by choosing deposition conditions that raise the quality of the included amorphous phase. The properties of Cat-CVD poly-Si films are found to be promising for solar-cell applications.

  3. Small and large wetland fragments are equally suited breeding sites for a ground-nesting passerine.

    PubMed

    Pasinelli, Gilberto; Mayer, Christian; Gouskov, Alexandre; Schiegg, Karin

    2008-06-01

    Large habitat fragments are generally thought to host more species and to offer more diverse and/or better quality habitats than small fragments. However, the importance of small fragments for population dynamics in general and for reproductive performance in particular is highly controversial. Using an information-theoretic approach, we examined reproductive performance and probability of local recruitment of color-banded reed buntings Emberiza schoeniclus in relation to the size of 18 wetland fragments in northeastern Switzerland over 4 years. We also investigated whether reproductive performance and recruitment probability were density-dependent. Neither the four measures of reproductive performance (laying date, nest failure probability, fledgling production per territory, fledgling condition) nor recruitment probability was related to wetland fragment size. In terms of fledgling production, however, fragment size interacted with year, indicating that small fragments were better reproductive grounds in some years than large fragments. Reproductive performance and recruitment probability were not density-dependent. Our results suggest that small fragments are as well suited as large fragments as breeding grounds for the reed bunting and should therefore be managed to provide a habitat for this and other specialists occurring in the same habitat. Moreover, large fragments may represent sinks in specific years because a substantial percentage of all breeding pairs in our study area breed in large fragments, and reproductive failure in these fragments due to the regularly occurring floods may have a much stronger impact on regional population dynamics than comparable events in small fragments.

  4. Influence of distributed delays on the dynamics of a generalized immune system cancerous cells interactions model

    NASA Astrophysics Data System (ADS)

    Piotrowska, M. J.; Bodnar, M.

    2018-01-01

    We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes distributed time delays into account. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as existence and uniqueness of solutions. Next, we discuss the existence of stationary solutions and analytically investigate their stability depending on the form of the considered probability densities, namely Erlang, triangular, and uniform densities, either separated from zero or not. Particular instability results are obtained for a general class of probability densities. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to experimental data for mice B-cell lymphoma, showing mean square errors at a comparable level. For the estimated sets of parameters we discuss the possibility of stabilising the tumour dormant steady state; instability of this steady state results in uncontrolled tumour growth. To perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
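    The linear chain trick mentioned at the end is easy to demonstrate in isolation: an Erlang-distributed delay with shape k and rate a is equivalent to passing the signal through k first-order stages y_i' = a(y_{i-1} - y_i), turning a distributed-delay equation into plain ODEs. The parameters and the forward-Euler integrator below are illustrative, not the paper's procedure:

```python
import numpy as np

# Step response of an Erlang(k, a) delay realised as a chain of k
# first-order ODE stages (the linear chain trick). The output y_k
# rises toward the input with mean lag k/a.
k, a, dt = 4, 2.0, 1e-3
y = np.zeros(k)                  # the k chain variables
ts = np.arange(0.0, 10.0, dt)
out = np.empty_like(ts)
for j, t in enumerate(ts):
    x = 1.0                      # unit step input applied from t = 0
    upstream = x
    for i in range(k):           # each stage relaxes toward its upstream
        y[i] += dt * a * (upstream - y[i])
        upstream = y[i]
    out[j] = y[k - 1]            # delayed signal
```

    The output equals the input convolved with the Erlang(k, a) kernel, which is exactly the distributed-delay term the trick replaces.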

  5. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  6. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  7. The high order dispersion analysis based on first-passage-time probability in financial markets

    NASA Astrophysics Data System (ADS)

    Liu, Chenggong; Shang, Pengjian; Feng, Guochen

    2017-04-01

    The study of first-passage-time (FPT) events in financial time series has received broad research attention recently, as it can inform risk management and investment. In this paper, a new measure, high order dispersion (HOD), is developed based on FPT probability to explore financial time series. The tick-by-tick data of three Chinese stock markets and three American stock markets are investigated. We classify the financial markets successfully by analyzing the scaling properties of the FPT probabilities of the six stock markets and employing the HOD method to compare the differences of the FPT decay curves. Applying the HOD method, we conclude that long-range correlation, a fat-tailed broad probability density function, and its coupling with nonlinearity mainly lead to the multifractality of financial time series. Furthermore, we use the fluctuation function of multifractal detrended fluctuation analysis (MF-DFA) to distinguish markets and obtain results consistent with the HOD method, whereas the HOD method is capable of differentiating the stock markets effectively within the same region. We believe that such explorations are relevant for a better understanding of financial market mechanisms.

  8. Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR)

    NASA Astrophysics Data System (ADS)

    Peters, Christina; Malz, Alex; Hlozek, Renée

    2018-01-01

    The Bayesian Estimation Applied to Multiple Species (BEAMS) framework employs probabilistic supernova type classifications to do photometric SN cosmology. This work extends BEAMS to replace high-confidence spectroscopic redshifts with photometric redshift probability density functions, a capability that will be essential in the era of the Large Synoptic Survey Telescope and other next-generation photometric surveys, when it will not be possible to perform spectroscopic follow-up on every SN. We present the Supernova Cosmology Inference with Probabilistic Photometric Redshifts (SCIPPR) Bayesian hierarchical model for constraining the cosmological parameters from photometric lightcurves and host galaxy photometry, which includes selection effects and is extensible to uncertainty in the redshift-dependent supernova type proportions. We create a pair of realistic mock catalogs of joint posteriors over supernova type, redshift, and distance modulus informed by photometric supernova lightcurves and over redshift from simulated host galaxy photometry. We perform inference under our model to obtain a joint posterior probability distribution over the cosmological parameters and compare our results with other methods, namely: a spectroscopic subset, a subset of high probability photometrically classified supernovae, and reducing the photometric redshift probability to a single measurement and error bar.

  9. Redundancy and reduction: Speakers manage syntactic information density

    PubMed Central

    Florian Jaeger, T.

    2010-01-01

    A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
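    Because information here is surprisal, the quantity driving the model is computable directly from corpus probabilities. The sketch below estimates per-word surprisal from bigram counts on a toy corpus with add-alpha smoothing; the corpus, smoothing, and the "think that/it" contrast are illustrative, not the paper's materials:

```python
import math
from collections import Counter

# Toy corpus; the paper uses a large corpus of spontaneous speech.
corpus = "i think that it is good . i think it is fine . i know that it works .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def surprisal(prev, word, alpha=0.1):
    """-log2 P(word | prev) with add-alpha smoothing (illustrative)."""
    v = len(unigrams)  # vocabulary size
    p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * v)
    return -math.log2(p)

# Uniform Information Density predicts that an optional complementizer
# "that" is more likely to be mentioned when the following material is
# high-surprisal, smoothing information across the signal.
s_that = surprisal("think", "that")
s_it = surprisal("think", "it")
```

    In a real analysis these surprisal values would enter the multilevel logit model as predictors of that-mentioning.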

  10. The difference between two random mixed quantum states: exact and asymptotic spectral analysis

    NASA Astrophysics Data System (ADS)

    Mejía, José; Zapata, Camilo; Botero, Alonso

    2017-01-01

    We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
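    The induced-measure construction described here is straightforward to sample numerically, which gives a quick empirical check on the spectral results. A minimal sketch (the dimensions are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_mixed_state(n, m):
    """Draw a random n x n density matrix by partially tracing a random
    bipartite pure state on an (n*m)-dimensional space."""
    psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    psi /= np.linalg.norm(psi)      # normalise the pure state
    return psi @ psi.conj().T       # trace out the m-dimensional factor

n, m = 8, 8
rho, sigma = random_mixed_state(n, m), random_mixed_state(n, m)
eigs = np.linalg.eigvalsh(rho - sigma)   # spectrum of the difference matrix

# The difference matrix is traceless, so its eigenvalues sum to zero;
# half the sum of their absolute values is the trace distance.
trace_distance = 0.5 * np.abs(eigs).sum()
```

    Histogramming eigs over many draws approximates the asymptotic eigenvalue density the paper derives analytically.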

  11. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
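    Benefits (1) and (2) amount to maximum-likelihood fitting of candidate densities followed by information-criterion selection. A minimal sketch of that workflow, with synthetic depth data, closed-form MLE fits, and only two candidates (the paper's candidate set and R machinery are richer):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for depth-use observations (the paper uses field
# data on juvenile Chinook salmon depth use).
depths = rng.lognormal(mean=0.0, sigma=0.5, size=200)

def aic_normal(x):
    """AIC of a normal fit; mean and std are the closed-form MLEs."""
    mu, sd = x.mean(), x.std()
    ll = np.sum(-0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2))
    return 2 * 2 - 2 * ll           # k = 2 parameters

def aic_lognormal(x):
    """AIC of a lognormal fit; MLEs are the log-data mean and std."""
    lx = np.log(x)
    mu, sd = lx.mean(), lx.std()
    ll = np.sum(-np.log(x) - 0.5 * np.log(2 * np.pi * sd**2)
                - (lx - mu) ** 2 / (2 * sd**2))
    return 2 * 2 - 2 * ll

best = "lognormal" if aic_lognormal(depths) < aic_normal(depths) else "normal"
```

    The selected density, evaluated over the depth range and rescaled, becomes the HSC curve; bootstrap refits would express the estimation uncertainty the paper's third benefit refers to.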

  12. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection.

    PubMed

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.
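    The core estimator can be sketched in a few lines: randomized space trees partition a known attribute range at random cut points, a leaf's density is its training mass divided by its volume, and scores are averaged over the forest. This is a stripped-down sketch of that idea only; the published RS-Forest adds streaming updates, dual node profiles, and the attribute-range guarantees:

```python
import numpy as np

rng = np.random.default_rng(4)

def build_tree(lo, hi, depth):
    """Recursively split [lo, hi] at a random cut on a random dimension."""
    if depth == 0:
        return {"lo": lo, "hi": hi, "count": 0}
    dim = int(rng.integers(len(lo)))
    cut = rng.uniform(lo[dim], hi[dim])
    left_hi, right_lo = hi.copy(), lo.copy()
    left_hi[dim], right_lo[dim] = cut, cut
    return {"dim": dim, "cut": cut,
            "left": build_tree(lo, left_hi, depth - 1),
            "right": build_tree(right_lo, hi, depth - 1)}

def leaf(node, x):
    while "dim" in node:
        node = node["left"] if x[node["dim"]] < node["cut"] else node["right"]
    return node

def density(tree, x, n):
    """Piecewise-constant estimate: leaf mass over leaf volume."""
    lf = leaf(tree, x)
    return lf["count"] / (n * np.prod(lf["hi"] - lf["lo"]))

lo, hi = np.zeros(2), np.ones(2)
trees = [build_tree(lo, hi, depth=4) for _ in range(25)]
train = rng.uniform(0.4, 0.6, size=(500, 2))       # a tight 2-D cluster
for t in trees:
    for x in train:
        leaf(t, x)["count"] += 1

def score(x):
    """Average density over the forest; low scores flag anomalies."""
    return float(np.mean([density(t, x, len(train)) for t in trees]))

inlier, outlier = score(np.array([0.5, 0.5])), score(np.array([0.05, 0.95]))
```

    A point inside the cluster receives a much higher averaged density than one far outside it, which is the signal the anomaly detector thresholds.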

  13. RS-Forest: A Rapid Density Estimator for Streaming Anomaly Detection

    PubMed Central

    Wu, Ke; Zhang, Kun; Fan, Wei; Edwards, Andrea; Yu, Philip S.

    2015-01-01

    Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request. PMID:25685112

  14. Physiological responses to acid stress by Saccharomyces cerevisiae when applying high initial cell density

    PubMed Central

    2016-01-01

    High initial cell density is used to increase volumetric productivity and shorten production time in lignocellulosic hydrolysate fermentation. Comparison of physiological parameters in high initial cell density cultivation of Saccharomyces cerevisiae in the presence of acetic, formic, levulinic and cinnamic acids demonstrated general and acid-specific responses of cells. All the acids studied impaired growth and inhibited glycolytic flux, and caused oxidative stress and accumulation of trehalose. However, trehalose may play a role other than protecting yeast cells from acid-induced oxidative stress. Unlike the other acids, cinnamic acid did not cause depletion of cellular ATP, but abolished the growth of yeast on ethanol. Compared with low initial cell density, increasing initial cell density reduced the lag phase and improved the bioconversion yield of cinnamic acid during acid adaptation. In addition, yeast cells were able to grow at elevated concentrations of acid, probably due to the increase in phenotypic cell-to-cell heterogeneity at large inoculum sizes. Furthermore, the specific growth rate and the specific rates of glucose consumption and metabolite production were significantly lower than at low initial cell density, as a result of the accumulation of a large fraction of cells that persisted in a viable but non-proliferating state. PMID:27620460

  15. Multiphoton ionization of many-electron atoms and highly-charged ions in intense laser fields: a relativistic time-dependent density functional theory approach

    NASA Astrophysics Data System (ADS)

    Tumakov, Dmitry A.; Telnov, Dmitry A.; Maltsev, Ilia A.; Plunien, Günter; Shabaev, Vladimir M.

    2017-10-01

    We develop an efficient numerical implementation of the relativistic time-dependent density functional theory (RTDDFT) to study multielectron highly-charged ions subject to intense linearly-polarized laser fields. The interaction with the electromagnetic field is described within the electric dipole approximation. The resulting time-dependent relativistic Kohn-Sham (RKS) equations possess an axial symmetry and are solved accurately and efficiently with the help of the time-dependent generalized pseudospectral method. As a case study, we calculate multiphoton ionization probabilities of the neutral argon atom and argon-like xenon ion. Relativistic effects are assessed by comparison of our present results with existing non-relativistic data.

  16. The effectiveness of tape playbacks in estimating Black Rail densities

    USGS Publications Warehouse

    Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model probability of a response conditional on the birds' sex, nesting status, distance to playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F(3,48) = 3.89, P = 0.014), year (F(2,48) = 9.37, P < 0.001), temperature (F(1,48) = 5.44, P = 0.024), and month × year (F(5,48) = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation of mean response per survey period (r² = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.
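    The density correction described here is a simple division: birds detected during a survey, scaled up by the modeled response probability, gives birds present. The logistic coefficients below are hypothetical stand-ins (only the 0.811 response probability comes from the abstract), and the helper is illustrative:

```python
import math

def response_prob(intercept, *effects):
    """Logistic response probability from regression coefficients
    (hypothetical values; the paper's fitted coefficients differ)."""
    z = intercept + sum(effects)
    return 1.0 / (1.0 + math.exp(-z))

p_nonnesting_male = 0.811     # response probability reported in the abstract
heard = 12                    # hypothetical count from one playback survey
estimated_present = heard / p_nonnesting_male
```

    Dividing the corrected count by the area surveyed yields the density estimate; the abstract's caution applies because the response probability itself varies with sex, nesting status, distance, and survey timing.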

  17. Spatial and Temporal Analysis of Eruption Locations, Compositions, and Styles in Northern Harrat Rahat, Kingdom of Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Dietterich, H. R.; Stelten, M. E.; Downs, D. T.; Champion, D. E.

    2017-12-01

    Harrat Rahat is a predominantly mafic, 20,000 km² volcanic field in western Saudi Arabia with an elongate volcanic axis extending 310 km north-south. Prior mapping suggests that the youngest eruptions were concentrated in northernmost Harrat Rahat, where our new geologic mapping and geochronology reveal >300 eruptive vents with ages ranging from 1.2 Ma to a historic eruption in 1256 CE. Eruption compositions and styles vary spatially and temporally within the volcanic field, where extensive alkali basaltic lavas dominate, but more evolved compositions erupted episodically as clusters of trachytic domes and small-volume pyroclastic flows. Analysis of vent locations, compositions, and eruption styles shows the evolution of the volcanic field and allows assessment of the spatio-temporal probabilities of vent opening and eruption styles. We link individual vents and fissures to eruptions and their deposits using field relations, petrography, geochemistry, paleomagnetism, and ⁴⁰Ar/³⁹Ar and ³⁶Cl geochronology. Eruption volumes and deposit extents are derived from geologic mapping and topographic analysis. Spatial density analysis with kernel density estimation captures vent densities of up to 0.2%/km² along the north-south running volcanic axis, decaying quickly away to the east but reaching a second, lower high along a secondary axis to the west. Temporal trends show slight younging of mafic eruption ages to the north in the past 300 ka, as well as clustered eruptions of trachytes over the past 150 ka. Vent locations, timing, and composition are integrated through spatial probability weighted by eruption age for each compositional range to produce spatio-temporal models of vent opening probability. These show that the next mafic eruption is most probable within the north end of the main (eastern) volcanic axis, whereas more evolved compositions are most likely to erupt within the trachytic centers further to the south.
These vent opening probabilities, combined with corresponding eruption properties, can be used as the basis for lava flow and tephra fall hazard maps.
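    The kernel density estimation step described above can be sketched with a plain 2D Gaussian kernel; the vent coordinates and bandwidth below are invented for illustration, not taken from the Harrat Rahat mapping.

```python
import math

def kde_density(x, y, vents, bandwidth_km=2.0):
    """Gaussian kernel density estimate (per km^2) at point (x, y)
    from a list of (x, y) vent coordinates in km."""
    h2 = bandwidth_km ** 2
    norm = 1.0 / (2.0 * math.pi * h2 * len(vents))
    return norm * sum(
        math.exp(-((x - vx) ** 2 + (y - vy) ** 2) / (2.0 * h2))
        for vx, vy in vents)

# Vents clustered along a hypothetical north-south axis at x = 0.
vents = [(0.0, float(k)) for k in range(20)]
on_axis = kde_density(0.0, 10.0, vents)    # on the volcanic axis
off_axis = kde_density(10.0, 10.0, vents)  # 10 km east of the axis
```

    The estimated density is highest on the axis and decays quickly away from it, mirroring the spatial pattern reported in the abstract.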

  18. A well-scaling natural orbital theory

    DOE PAGES

    Gebauer, Ralph; Cohen, Morrel H.; Car, Roberto

    2016-11-01

    Here, we introduce an energy functional for ground-state electronic structure calculations. Its variables are the natural spin-orbitals of singlet many-body wave functions and their joint occupation probabilities deriving from controlled approximations to the two-particle density matrix that yield algebraic scaling in general, and Hartree–Fock scaling in its seniority-zero version. Results from the latter version for small molecular systems are compared with those of highly accurate quantum-chemical computations. The energies lie above full configuration interaction calculations, close to doubly occupied configuration interaction calculations. Their accuracy is considerably greater than that obtained from current density-functional theory approximations and from current functionals of the one-particle density matrix.

  19. A well-scaling natural orbital theory

    PubMed Central

    Gebauer, Ralph; Cohen, Morrel H.; Car, Roberto

    2016-01-01

    We introduce an energy functional for ground-state electronic structure calculations. Its variables are the natural spin-orbitals of singlet many-body wave functions and their joint occupation probabilities deriving from controlled approximations to the two-particle density matrix that yield algebraic scaling in general, and Hartree–Fock scaling in its seniority-zero version. Results from the latter version for small molecular systems are compared with those of highly accurate quantum-chemical computations. The energies lie above full configuration interaction calculations, close to doubly occupied configuration interaction calculations. Their accuracy is considerably greater than that obtained from current density-functional theory approximations and from current functionals of the one-particle density matrix. PMID:27803328

  20. Analysis of TPA Pulsed-Laser-Induced Single-Event Latchup Sensitive-Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Sternberg, Andrew L.; Kozub, John A.

    Two-photon absorption (TPA) testing is employed to analyze the laser-induced latchup sensitive-volume (SV) of a specially designed test structure. This method takes into account the existence of an onset region in which the probability of triggering latchup transitions from zero to one as the laser pulse energy increases. This variability is attributed to pulse-to-pulse variability, uncertainty in measurement of the pulse energy, and variation in local carrier density and temperature. For each spatial position, the latchup probability associated with a given energy is calculated from multiple pulses. The latchup probability data are well-described by a Weibull distribution. The results show that the area between p-n-p-n cell structures is more sensitive than the p+ and n+ source areas, and locations far from the well contacts are more sensitive than those near the contact region. The transition from low probability of latchup to high probability is more abrupt near the source contacts than it is for the surrounding areas.
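    The Weibull description of the probability-versus-pulse-energy curve can be sketched as a three-parameter Weibull CDF; the threshold, scale, and shape values below are assumptions for illustration, not the fitted parameters from the test structure.

```python
import math

def latchup_probability(energy, threshold, scale, shape):
    """Weibull CDF for the probability of triggering latchup at a
    given laser pulse energy (arbitrary units)."""
    if energy <= threshold:
        return 0.0
    return 1.0 - math.exp(-((energy - threshold) / scale) ** shape)

# A large shape parameter gives the abrupt zero-to-one transition
# reported near the source contacts; a small one gives the gradual
# onset seen in the surrounding areas.
abrupt = [latchup_probability(e, 1.0, 0.5, 8.0) for e in (1.2, 1.5, 1.8)]
gradual = [latchup_probability(e, 1.0, 0.5, 1.5) for e in (1.2, 1.5, 1.8)]
```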

  1. Analysis of TPA Pulsed-Laser-Induced Single-Event Latchup Sensitive-Area

    DOE PAGES

    Wang, Peng; Sternberg, Andrew L.; Kozub, John A.; ...

    2017-12-07

    Two-photon absorption (TPA) testing is employed to analyze the laser-induced latchup sensitive-volume (SV) of a specially designed test structure. This method takes into account the existence of an onset region in which the probability of triggering latchup transitions from zero to one as the laser pulse energy increases. This variability is attributed to pulse-to-pulse variability, uncertainty in measurement of the pulse energy, and variation in local carrier density and temperature. For each spatial position, the latchup probability associated with a given energy is calculated from multiple pulses. The latchup probability data are well-described by a Weibull distribution. The results show that the area between p-n-p-n cell structures is more sensitive than the p+ and n+ source areas, and locations far from the well contacts are more sensitive than those near the contact region. The transition from low probability of latchup to high probability is more abrupt near the source contacts than it is for the surrounding areas.

  2. Spatial organization of foreshocks as a tool to forecast large earthquakes.

    PubMed

    Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from the one induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models.

  3. Spatial organization of foreshocks as a tool to forecast large earthquakes

    PubMed Central

    Lippiello, E.; Marzocchi, W.; de Arcangelis, L.; Godano, C.

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from the one induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models. PMID:23152938

  4. High-density marker imputation accuracy in sixteen French cattle breeds.

    PubMed

    Hozé, Chris; Fouilloux, Marie-Noëlle; Venot, Eric; Guillaume, François; Dassonneville, Romain; Fritz, Sébastien; Ducrocq, Vincent; Phocas, Florence; Boichard, Didier; Croiseau, Pascal

    2013-09-03

    Genotyping with the medium-density Bovine SNP50 BeadChip® (50K) is now standard in cattle. The high-density BovineHD BeadChip®, which contains 777,609 single nucleotide polymorphisms (SNPs), was developed in 2010. Increasing marker density increases the level of linkage disequilibrium between quantitative trait loci (QTL) and SNPs and the accuracy of QTL localization and genomic selection. However, re-genotyping all animals with the high-density chip is not economically feasible. An alternative strategy is to genotype part of the animals with the high-density chip and to impute high-density genotypes for animals already genotyped with the 50K chip. Thus, it is necessary to investigate the error rate when imputing from the 50K to the high-density chip. Five thousand one hundred and fifty-three animals from 16 breeds (89 to 788 per breed) were genotyped with the high-density chip. Imputation error rates from the 50K to the high-density chip were computed for each breed with a validation set that included the 20% youngest animals. Marker genotypes were masked for animals in the validation population in order to mimic 50K genotypes. Imputation was carried out using the Beagle 3.3.0 software. Mean allele imputation error rates ranged from 0.31% to 2.41% depending on the breed. In total, 1980 SNPs had high imputation error rates in several breeds, which is probably due to genome assembly errors, and we recommend discarding these SNPs in future studies. Differences in imputation accuracy between breeds were related to the high-density-genotyped sample size and to the genetic relationship between reference and validation populations, whereas differences in effective population size and level of linkage disequilibrium showed limited effects. Accordingly, imputation accuracy was higher in breeds with large populations and in dairy breeds than in beef breeds. More than 99% of the alleles were correctly imputed if more than 300 animals were genotyped at high density.
No improvement was observed when multi-breed imputation was performed. In all breeds, imputation accuracy was higher than 97%, which indicates that imputation to the high-density chip was accurate. Imputation accuracy depends mainly on the size of the reference population and the relationship between reference and target populations.
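    The mean allele imputation error rate used above can be sketched as follows: genotypes are coded as counts of one allele (0, 1, or 2), true genotypes are masked and re-imputed, and the fraction of wrongly imputed alleles is computed. The genotype vectors below are toy data, not the study's genotypes.

```python
def allele_error_rate(true_genotypes, imputed_genotypes):
    """Mean allele imputation error rate: each site carries 2 alleles,
    so a site contributes |true - imputed| wrong alleles out of 2."""
    errors = sum(abs(t - i) for t, i in zip(true_genotypes, imputed_genotypes))
    return errors / (2 * len(true_genotypes))

truth = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2]
imputed = [0, 1, 2, 1, 0, 2, 1, 2, 0, 2]  # one allele wrong at one site
rate = allele_error_rate(truth, imputed)  # 1 wrong allele / 20 = 0.05
```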

  5. High-density marker imputation accuracy in sixteen French cattle breeds

    PubMed Central

    2013-01-01

    Background Genotyping with the medium-density Bovine SNP50 BeadChip® (50K) is now standard in cattle. The high-density BovineHD BeadChip®, which contains 777 609 single nucleotide polymorphisms (SNPs), was developed in 2010. Increasing marker density increases the level of linkage disequilibrium between quantitative trait loci (QTL) and SNPs and the accuracy of QTL localization and genomic selection. However, re-genotyping all animals with the high-density chip is not economically feasible. An alternative strategy is to genotype part of the animals with the high-density chip and to impute high-density genotypes for animals already genotyped with the 50K chip. Thus, it is necessary to investigate the error rate when imputing from the 50K to the high-density chip. Methods Five thousand one hundred and fifty-three animals from 16 breeds (89 to 788 per breed) were genotyped with the high-density chip. Imputation error rates from the 50K to the high-density chip were computed for each breed with a validation set that included the 20% youngest animals. Marker genotypes were masked for animals in the validation population in order to mimic 50K genotypes. Imputation was carried out using the Beagle 3.3.0 software. Results Mean allele imputation error rates ranged from 0.31% to 2.41% depending on the breed. In total, 1980 SNPs had high imputation error rates in several breeds, which is probably due to genome assembly errors, and we recommend discarding these SNPs in future studies. Differences in imputation accuracy between breeds were related to the high-density-genotyped sample size and to the genetic relationship between reference and validation populations, whereas differences in effective population size and level of linkage disequilibrium showed limited effects. Accordingly, imputation accuracy was higher in breeds with large populations and in dairy breeds than in beef breeds.
More than 99% of the alleles were correctly imputed if more than 300 animals were genotyped at high density. No improvement was observed when multi-breed imputation was performed. Conclusion In all breeds, imputation accuracy was higher than 97%, which indicates that imputation to the high-density chip was accurate. Imputation accuracy depends mainly on the size of the reference population and the relationship between reference and target populations. PMID:24004563

  6. Ligand Electron Density Shape Recognition Using 3D Zernike Descriptors

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Prasad; Grandison, Scott; Cowtan, Kevin; Mak, Lora; Lawson, David M.; Morris, Richard J.

    We present a novel approach to crystallographic ligand density interpretation based on Zernike shape descriptors. Electron density for a bound ligand is expanded in an orthogonal polynomial series (3D Zernike polynomials) and the coefficients from this expansion are employed to construct rotation-invariant descriptors. These descriptors can be compared highly efficiently against large databases of descriptors computed from other molecules. In this manuscript we describe this process and show initial results from an electron density interpretation study on a dataset containing over a hundred OMIT maps. We could identify the correct ligand as the first hit in about 30 % of the cases, within the top five in a further 30 % of the cases, and giving rise to an 80 % probability of getting the correct ligand within the top ten matches. In all but a few examples, the top hit was highly similar to the correct ligand in both shape and chemistry. Further extensions and intrinsic limitations of the method are discussed.
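    The matching step described above reduces to comparing rotation-invariant descriptor vectors against a library and ranking by distance. The sketch below uses plain Euclidean distance and made-up ligand names and three-component vectors; real Zernike descriptors are much longer.

```python
import math

def descriptor_distance(d1, d2):
    """Euclidean distance between two rotation-invariant descriptor
    vectors (e.g. norms of Zernike coefficient blocks)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def rank_ligands(query, library):
    """Rank library entries (name -> descriptor) by similarity to the
    query density's descriptor, best match first."""
    return sorted(library, key=lambda name: descriptor_distance(query, library[name]))

# Hypothetical descriptor library.
library = {"ATP": [0.9, 0.2, 0.4], "NAD": [0.7, 0.3, 0.6], "GLC": [0.1, 0.9, 0.2]}
hits = rank_ligands([0.85, 0.25, 0.45], library)
```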

  7. Estimating Density and Temperature Dependence of Juvenile Vital Rates Using a Hidden Markov Model

    PubMed Central

    McElderry, Robert M.

    2017-01-01

    Organisms in the wild have cryptic life stages that are sensitive to changing environmental conditions and can be difficult to survey. In this study, I used mark-recapture methods to repeatedly survey Anaea aidea (Nymphalidae) caterpillars in nature, then modeled caterpillar demography as a hidden Markov process to assess if temporal variability in temperature and density influence the survival and growth of A. aidea over time. Individual encounter histories result from the joint likelihood of being alive and observed in a particular stage, and I have included hidden states by separating demography and observations into parallel and independent processes. I constructed a demographic matrix containing the probabilities of all possible fates for each stage, including hidden states, e.g., eggs and pupae. I observed both dead and live caterpillars with high probability. Peak caterpillar abundance attracted multiple predators, and survival of fifth instars declined as per capita predation rate increased through spring. A time lag between predator and prey abundance was likely the cause of improved fifth instar survival estimated at high density. Growth rates showed an increase with temperature, but the preferred model did not include temperature. This work illustrates how state-space models can include unobservable stages and hidden state processes to evaluate how environmental factors influence vital rates of cryptic life stages in the wild. PMID:28505138
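    The demographic matrix described above, including hidden states, can be sketched as a stage-structured Markov transition matrix whose rows sum to 1; propagating a state distribution through it is a single matrix-vector product per time step. The probabilities below are illustrative, not estimates from the A. aidea data.

```python
def step(dist, transition):
    """One time step of the demographic hidden Markov process:
    new_dist[j] = sum_i dist[i] * transition[i][j]."""
    n = len(transition[0])
    return [sum(dist[i] * transition[i][j] for i in range(len(dist)))
            for j in range(n)]

# States: egg (hidden), caterpillar (observable), pupa (hidden), dead.
T = [
    [0.5, 0.4, 0.0, 0.1],  # egg: stay, hatch, -, die
    [0.0, 0.7, 0.2, 0.1],  # caterpillar: -, stay, pupate, die
    [0.0, 0.0, 0.9, 0.1],  # pupa: persist or die
    [0.0, 0.0, 0.0, 1.0],  # dead is absorbing
]
dist = [1.0, 0.0, 0.0, 0.0]  # an individual starts as an egg
for _ in range(3):
    dist = step(dist, T)
```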

  8. The electron localization as the information content of the conditional pair density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbina, Andres S.; Torres, F. Javier; Universidad San Francisco de Quito

    2016-06-28

    In the present work, the information gained by an electron for “knowing” about the position of another electron with the same spin is calculated using the Kullback-Leibler divergence (D_KL) between the same-spin conditional pair probability density and the marginal probability. D_KL is proposed as an electron localization measurement, based on the observation that regions of the space with high information gain can be associated with strongly correlated localized electrons. Taking into consideration the scaling of D_KL with the number of σ-spin electrons of a system (N^σ), the quantity χ = (N^σ − 1) D_KL f_cut is introduced as a general descriptor that allows the quantification of the electron localization in the space. f_cut is defined such that it goes smoothly to zero for negligible densities. χ is computed for a selection of atomic and molecular systems in order to test its capability to determine the region in space where electrons are localized. As a general conclusion, χ is able to explain the electron structure of molecules on the basis of chemical grounds with a high degree of success and to produce a clear differentiation of the localization of electrons that can be traced to the fluctuation in the average number of electrons in these regions.
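    The central quantity, a Kullback-Leibler divergence between a conditional and a marginal density, can be sketched on a discrete grid; the two toy densities below stand in for the same-spin conditional pair density and the marginal, and are not computed from any electronic structure.

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D_KL(p || q) between two
    normalized densities on the same grid; terms with p = 0 vanish."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

marginal = [0.25, 0.25, 0.25, 0.25]
conditional = [0.70, 0.10, 0.10, 0.10]  # probability piled on one region
gain = kl_divergence(conditional, marginal)  # information gain, in nats
```

    A strongly peaked conditional density yields a large information gain, which is the signature of localization the descriptor exploits.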

  9. Brownian motion surviving in the unstable cubic potential and the role of Maxwell's demon

    NASA Astrophysics Data System (ADS)

    Ornigotti, Luca; Ryabov, Artem; Holubec, Viktor; Filip, Radim

    2018-03-01

    The trajectories of an overdamped particle in a highly unstable potential diverge so rapidly that the variance of position grows much faster than its mean. A description of the dynamics by moments is therefore not informative. Instead, we propose and analyze local directly measurable characteristics, which overcome this limitation. We discuss the most probable particle position (position of the maximum of the probability density) and the local uncertainty in an unstable cubic potential, V(x) ~ x³, both in the transient regime and in the long-time limit. The maximum shifts against the acting force as a function of time and temperature. Simultaneously, the local uncertainty does not increase faster than the observable shift. In the long-time limit, the probability density naturally attains a quasistationary form. We interpret this process as a stabilization via the measurement-feedback mechanism, the Maxwell demon, which works as an entropy pump. The rules for measurement and feedback naturally arise from the basic properties of the unstable dynamics. All reported effects are inherent in any unstable system. Their detailed understanding will stimulate the development of stochastic engines and amplifiers and, later, their quantum counterparts.
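    The most probable position can be estimated directly by simulating the overdamped dynamics in V(x) = x³ (force −3x²) with an Euler-Maruyama scheme, discarding diverged trajectories, and locating the histogram mode of the survivors. All parameters below are illustrative choices, not those of the paper.

```python
import math
import random

def most_probable_position(n_traj=4000, dt=1e-3, steps=200, D=0.1, seed=1):
    """Euler-Maruyama simulation of dx = -3 x^2 dt + sqrt(2 D) dW;
    returns (histogram mode of surviving trajectories, survivor count)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)
    survivors = []
    for _ in range(n_traj):
        x = 0.0
        for _ in range(steps):
            x += -3.0 * x * x * dt + sigma * rng.gauss(0.0, 1.0)
            if abs(x) > 10.0:  # trajectory diverged; drop it
                break
        else:
            survivors.append(x)
    # Crude histogram mode over [-1, 1) with bin width 0.05.
    bins = [0] * 40
    for x in survivors:
        if -1.0 <= x < 1.0:
            bins[int((x + 1.0) / 0.05)] += 1
    mode_bin = max(range(40), key=bins.__getitem__)
    return -1.0 + (mode_bin + 0.5) * 0.05, len(survivors)

mode, n_alive = most_probable_position()
```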

  10. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, variety of wind speed values and wind power curtailment.
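    The AIC/BIC model-selection step can be sketched directly from the definitions AIC = 2k − 2 ln L and BIC = k ln n − 2 ln L. The maximized log-likelihoods below are hypothetical numbers standing in for fits of 1-, 2-, and 3-component Weibull mixtures.

```python
import math

def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    return n_params * math.log(n_obs) - 2 * log_likelihood

# Hypothetical (log-likelihood, parameter count) per component count;
# each extra Weibull component adds a weight, scale, and shape.
fits = {1: (-5210.0, 2), 2: (-5150.0, 5), 3: (-5148.0, 8)}
n_obs = 8760  # e.g. one year of hourly wind power data
best_bic = min(fits, key=lambda k: bic(fits[k][0], fits[k][1], n_obs))
```

    With these numbers both criteria prefer the 2-component mixture: the third component barely improves the likelihood, so the penalty term dominates.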

  11. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that cannot be well fitted by a unimodal Wishart distribution. In order to make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. Then we use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.

  12. Effect of stand density and structure on the abundance of northern red oak advance reproduction

    Treesearch

    Gary W. Miller

    1997-01-01

    Regenerating northern red oak (Quercus rubra L.) on high-quality growing sites is a continuing problem in the central Appalachian region. Competing species usually exhibit faster height growth after regeneration harvests compared to oak reproduction. The probability of advance oak reproduction becoming codominant in the new stand is positively...

  13. Self-organization of critical behavior in controlled general queueing models

    NASA Astrophysics Data System (ADS)

    Blanchard, Ph.; Hongler, M.-O.

    2004-03-01

    We consider general queueing models of the (G/G/1) type with service times controlled by the busy period. For feedback control mechanisms driving the system to very high traffic load, it is shown that the busy-period probability density exhibits a generic −3/2 power law, which is a typical mean-field behavior of SOC models.

  14. Defense Conversion Redirecting R and D

    DTIC Science & Technology

    1993-05-01

    ...members [include] aerospace companies, utilities, universities, and small high-tech firms... agree that maglev or high-speed rail systems are probably limited to a few parts of the United States on the basis of population density. Even maglev trains, long the favorite technology of the future... a 3-year period for France's TGV with a manufacturing workforce for the... Maglev might contribute to the advance of some technologies, ...preliminary

  15. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue, rather there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
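    The maximum entropy method of moments has a closed form in the simplest setting, which makes a compact worked example: on [0, ∞) with only the mean constrained, the maximum entropy density is the exponential density p(x) = (1/mean) exp(−x/mean). The sketch below verifies the normalization and the moment constraint by simple quadrature; it illustrates the idea only, not the general multi-moment machinery.

```python
import math

def maxent_density_positive_mean(mean):
    """Maximum entropy density on [0, inf) constrained only by a known
    mean: the exponential density p(x) = (1/mean) * exp(-x/mean)."""
    rate = 1.0 / mean
    return lambda x: rate * math.exp(-rate * x)

p = maxent_density_positive_mean(2.0)
# Check total mass and the mean constraint by left-endpoint quadrature
# on a grid fine enough that the tail beyond x = 60 is negligible.
h = 0.01
xs = [i * h for i in range(6000)]
mass = sum(p(x) * h for x in xs)
mean = sum(x * p(x) * h for x in xs)
```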

  16. Peculiarities of biological action of hadrons of space radiation.

    PubMed

    Akoev, I G; Yurov, S S

    1975-01-01

    Biological investigations in space make it possible to assess the contribution of high-energy hadrons to the biological effects of spaceflight factors. Physical and molecular principles of the action of high-energy hadrons are analysed. Genetic and somatic hadron effects produced by the secondary radiation from 70 GeV protons have been studied experimentally. The high biological effectiveness of hadrons, the great variability in biological effects, and the specificity of their action are associated with the strong interactions of high-energy hadrons: the probability of nuclear interaction with any atomic nucleus, the generation of a great number of secondary particles (among them, probably, highly effective multicharged and heavy nuclei, antiprotons, and pi(-)-mesons), and the spatial distribution of secondary particles as a narrow cone with extremely high particle density in its first part. The secondary radiation generated by high- and superhigh-energy hadrons upon their interaction with the spaceship is likely to be the greatest radiation hazard to the crew during space flights.

  17. Probability density of tunneled carrier states near heterojunctions calculated numerically by the scattering method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wampler, William R.; Myers, Samuel M.; Modine, Normand A.

    2017-09-01

    The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.

  18. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
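    The "ecological distance" underlying the encounter probability model is a least-cost path through a resistant landscape, which can be sketched with Dijkstra's algorithm on a resistance grid. This is a generic sketch of the idea (4-neighbour moves, entering a cell costs its resistance), not the authors' implementation.

```python
import heapq

def ecological_distance(cost, start, goal):
    """Least-cost path length between two cells (r, c) on a resistance
    grid, via Dijkstra's algorithm with a binary heap."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# A high-resistance ridge makes the least-cost path detour around it,
# so ecological distance exceeds the Euclidean shortcut.
grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
d_eco = ecological_distance(grid, (0, 0), (0, 2))
```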

  19. Spatial capture–recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  20. Downlink Probability Density Functions for EOS-McMurdo Sound

    NASA Technical Reports Server (NTRS)

    Christopher, P.; Jackson, A. H.

    1996-01-01

    The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS), and the entire 16 day period should be simulated for representative link statistics. Because many attributes of the downlink are of interest, however, a faster orbit determination method is desirable. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation that has been used for medium- to high-altitude satellites. One practical result of this comparison is seen to be the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun-synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun-synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three-dimensionally.

  1. Word Recognition and Nonword Repetition in Children with Language Disorders: The Effects of Neighborhood Density, Lexical Frequency, and Phonotactic Probability

    ERIC Educational Resources Information Center

    Rispens, Judith; Baker, Anne; Duinmeijer, Iris

    2015-01-01

    Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…

  2. A Novel Strategy for Numerical Simulation of High-speed Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Sheikhi, M. R. H.; Drozda, T. G.; Givi, P.

    2003-01-01

    The objective of this research is to improve and implement the filtered mass density function (FDF) methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed Year 1 of this research; this is the final report on our activities during the period January 1, 2003 to December 31, 2003. In the efforts during the past year, LES is conducted of the Sandia Flame D, which is a turbulent piloted nonpremixed methane jet flame. The subgrid scale (SGS) closure is based on the scalar filtered mass density function (SFMDF) methodology. The SFMDF is basically the mass-weighted probability density function (PDF) of the SGS scalar quantities. For this flame (which exhibits little local extinction), a simple flamelet model is used to relate the instantaneous composition to the mixture fraction. The modelled SFMDF transport equation is solved by a hybrid finite-difference/Monte Carlo scheme.

  3. Application of the coral health chart to determine bleaching status of Acropora downingi in a subtropical coral reef

    NASA Astrophysics Data System (ADS)

    Oladi, Mahshid; Shokri, Mohammad Reza; Rajabi-Maham, Hassan

    2017-06-01

    The `Coral Health Chart' has become a popular tool for monitoring coral bleaching worldwide. The scleractinian coral Acropora downingi (Wallace 1999) is highly vulnerable to temperature anomalies in the Persian Gulf. Our study tested the reliability of Coral Health Chart scores for the assessment of bleaching-related changes in the mitotic index (MI) and density of zooxanthellae cells in A. downingi in Qeshm Island, the Persian Gulf. The results revealed that, at least under severe conditions, the chart can be used as an effective proxy for detecting changes in MI and in the density of normal, transparent, or degraded zooxanthellae. However, its ability to discern changes in pigment concentration and total zooxanthellae density should be viewed with some caution in the Gulf region, probably because the high levels of environmental variability in this region result in inherent variations in the characteristics of zooxanthellae among "healthy" looking corals.

  4. Pleurochrysis pseudoroscoffensis (Prymnesiophyceae) blooms on the surface of the Salton Sea, California

    USGS Publications Warehouse

    Reifel, K.M.; McCoy, M.P.; Tiffany, M.A.; Rocke, T.E.; Trees, C.C.; Barlow, S.B.; Faulkner, D.J.; Hurlbert, S.H.

    2001-01-01

    Dense populations of the coccolithophore Pleurochrysis pseudoroscoffensis were found in surface films at several locations around the Salton Sea in February-August, 1999. An unidentified coccolithophorid was also found in low densities in earlier studies of the lake (1955-1956). To our knowledge, this is the first record of this widespread marine species in any lake. Samples taken from surface films typically contained high densities of one or two other phytoplankton species as well as high densities of the coccolithophore. Presence or absence of specific algal pigments was used to validate direct cell counts. In a preliminary screen using a brine shrimp lethality assay, samples showed moderate activity. Extracts were then submitted to a mouse bioassay, and no toxic activity was observed. These results indicate that blooms of P. pseudoroscoffensis are probably not toxic to vertebrates and do not contribute to the various mortality events of birds and fish that occur in the Salton Sea.

  5. Calcium intercalation into layered fluorinated sodium iron phosphate

    DOE PAGES

    Lipson, Albert L.; Kim, Soojeong; Pan, Baofei; ...

    2017-10-09

    Here, the energy density and cost of battery systems could be improved by moving to alternative battery chemistries such as Ca-ion. However, in order to switch chemistries many problems need to be solved, including the identification of cathode materials with high energy density and electrolytes that can plate and strip calcium metal. Herein, the feasibility and cycling performance of Ca2+ intercalation into a desodiated layered Na2FePO4F host is described. This is the first demonstration of Ca2+ intercalation into a polyanionic framework, which implies that other polyanionic framework materials may be active for Ca2+ intercalation. Although substantial effort is still needed to identify a high energy density cathode material, this study and others demonstrate the feasibility of Ca2+ intercalation into multiple materials, making it more probable that such a cathode material can be found.

  6. Calcium intercalation into layered fluorinated sodium iron phosphate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipson, Albert L.; Kim, Soojeong; Pan, Baofei

    Here, the energy density and cost of battery systems could be improved by moving to alternative battery chemistries such as Ca-ion. However, in order to switch chemistries many problems need to be solved, including the identification of cathode materials with high energy density and electrolytes that can plate and strip calcium metal. Herein, the feasibility and cycling performance of Ca2+ intercalation into a desodiated layered Na2FePO4F host is described. This is the first demonstration of Ca2+ intercalation into a polyanionic framework, which implies that other polyanionic framework materials may be active for Ca2+ intercalation. Although substantial effort is still needed to identify a high energy density cathode material, this study and others demonstrate the feasibility of Ca2+ intercalation into multiple materials, making it more probable that such a cathode material can be found.

  7. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
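
Maximum entropy estimation under moment constraints reduces to an exponential-family fit. As a minimal one-constraint sketch (not Kinney's field-theory method): the maximum-entropy density on [a, b] with a fixed mean is f(x) ∝ exp(λx), and λ can be found by bisection; the interval and target means below are arbitrary examples.

```python
import math

def maxent_lambda(target_mean, a=0.0, b=1.0, tol=1e-10):
    """Max-entropy density on [a, b] subject to a fixed mean is
    f(x) = exp(lam*x)/Z; solve for lam by bisection on the implied mean."""
    def mean_given(lam):
        if abs(lam) < 1e-12:
            return 0.5 * (a + b)   # lam = 0 is the uniform density
        # E[x] = d/dlam log Z, with Z = (e^{lam b} - e^{lam a}) / lam
        ea, eb = math.exp(lam * a), math.exp(lam * b)
        return (b * eb - a * ea) / (eb - ea) - 1.0 / lam
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_given(mid) < target_mean:   # mean is increasing in lam
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = maxent_lambda(0.5)      # mean at the midpoint -> uniform, lam ~ 0
lam_hi = maxent_lambda(0.7)   # mean above the midpoint -> lam > 0
```

The "infinite smoothness limit" result in the abstract says a Bayesian field theory estimate relaxes to exactly this kind of exponential-family form when the smoothness prior dominates.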

  8. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  9. Optimizing probability of detection point estimate demonstration

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2017-04-01

    The paper provides a discussion of optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures, and it uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
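
The classic 29-flaw criterion follows directly from the binomial model: n successes in n trials rejects POD < 0.90 at 95% confidence once 0.90^n ≤ 0.05. A short sketch (the PPD computation for a true POD of 0.95 is an illustrative addition, not a number from the paper):

```python
def min_flaws_for_pod(pod=0.90, confidence=0.95):
    """Smallest n such that n-of-n detections demonstrates the given POD
    at the given confidence: requires pod**n <= 1 - confidence."""
    alpha = 1.0 - confidence
    n = 1
    while pod ** n > alpha:
        n += 1
    return n

n = min_flaws_for_pod()   # the classic 29-of-29 criterion
# Probability of passing the demonstration (PPD) if the true POD is 0.95:
ppd = 0.95 ** n
```

The low PPD even for a genuinely capable technique is exactly why the paper optimizes the flaw-set design rather than relying on the single-size demonstration alone.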

  10. Evaluation of a pretest scoring system (4Ts) for the diagnosis of heparin-induced thrombocytopenia in a university hospital setting.

    PubMed

    Vatanparast, Rodina; Lantz, Sarah; Ward, Kristine; Crilley, Pamela Ann; Styler, Michael

    2012-11-01

    The initial diagnosis of heparin-induced thrombocytopenia (HIT) is made on clinical grounds because the assays with the highest sensitivity (eg, heparin-platelet factor 4 antibody enzyme-linked immunosorbent assay [ELISA]) and specificity (eg, serotonin release assay) may not be readily available. The 4Ts pretest scoring system was developed and validated by Lo et al in the Journal of Thrombosis and Haemostasis in 2006. The scoring system considers the degree and timing of thrombocytopenia, thrombosis, and the possibility of other etiologies. Based on the 4T score, patients can be categorized as having a high, intermediate, or low probability of having HIT. We conducted a retrospective study of 100 consecutive patients who were tested for HIT during their hospitalization at Hahnemann University Hospital (Philadelphia, PA) in 2009. Of the 100 patients analyzed, 72, 23, and 5 patients had 4T pretest probability scores of low, intermediate, and high, respectively. A positive HIT ELISA (optical density > 1.0 unit) was detected in 0 of 72 patients (0%) in the low probability group, in 5 of 23 patients (22%) in the intermediate probability group, and in 2 of 5 patients (40%) in the high probability group. The average turnaround time for the HIT ELISA was 4 to 5 days. Fourteen (19%) of the 72 patients with a low pretest probability of HIT were treated with a direct thrombin inhibitor. Ten (71%) of the 14 patients in the low probability group treated with a direct thrombin inhibitor had a major complication of bleeding requiring blood transfusion support. In this retrospective study, a low 4T score showed 100% correlation with a negative HIT antibody assay. We recommend incorporating the 4T scoring system into institutional core measures when assessing a patient with suspected HIT, selecting only patients with intermediate to high probability for therapeutic intervention, which may translate into reduced morbidity and lower health care costs.
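
The low/intermediate/high categorization can be expressed as a small lookup. The score cutoffs (≤ 3 low, 4-5 intermediate, ≥ 6 high) follow the published 4Ts system rather than anything stated in this abstract, and the observed ELISA-positive rates are the ones this study reports.

```python
def four_t_category(score):
    """Map a 4Ts total (0-8) to a pretest-probability category;
    cutoffs per Lo et al. 2006: <= 3 low, 4-5 intermediate, >= 6 high."""
    if not 0 <= score <= 8:
        raise ValueError("4Ts total must be between 0 and 8")
    if score <= 3:
        return "low"
    if score <= 5:
        return "intermediate"
    return "high"

# ELISA-positive rates observed in this retrospective study, by category:
elisa_positive = {"low": 0 / 72, "intermediate": 5 / 23, "high": 2 / 5}
```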

  11. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to achieve both. However, the randomness in a CS under-sampling trajectory designed using the traditional variable density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the probability density function (PDF), and multiple reconstructions even with a fixed PDF, which is impractical for DCE-MRI. In this paper, an under-sampling trajectory design that is robust both to changes in the PDF parameters and to the randomness of a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
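
A conventional variable-density under-sampling mask of the kind the paper modifies can be sketched as follows. The fully sampled low-frequency core plus a power-law high-frequency PDF is a common construction; all parameters (center fraction, acceleration factor, decay exponent) are illustrative assumptions, not the authors' values.

```python
import random

def vd_mask(n, center_fraction=0.08, accel=4, decay=2.0, seed=0):
    """1-D k-space sampling mask: keep a fully sampled low-frequency core,
    then draw high-frequency lines with probability falling off as a
    power law in |k| (the variable-density PDF)."""
    rng = random.Random(seed)
    mid = n // 2
    center = int(n * center_fraction)
    mask = [False] * n
    for i in range(mid - center // 2, mid + center // 2 + 1):
        mask[i] = True
    # Scale the high-frequency PDF so the expected total is ~ n / accel.
    weights = [(1.0 - abs(i - mid) / (n / 2)) ** decay for i in range(n)]
    budget = max(n // accel - sum(mask), 0)
    total = sum(w for i, w in enumerate(weights) if not mask[i])
    for i in range(n):
        if not mask[i] and rng.random() < budget * weights[i] / total:
            mask[i] = True
    return mask

m = vd_mask(256)
sampled = sum(m)
```

The paper's point is that kinetic-parameter estimates are sensitive to both the PDF parameters and the particular random draw; restricting the randomness to the high-frequency region, as in the block above, is the kind of segmentation it proposes.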

  12. Numerical study of the influence of surface reaction probabilities on reactive species in an rf atmospheric pressure plasma containing humidity

    NASA Astrophysics Data System (ADS)

    Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah

    2018-01-01

    The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.

  13. Effects of heterogeneous traffic with speed limit zone on the car accidents

    NASA Astrophysics Data System (ADS)

    Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.

    2016-06-01

    Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of the heterogeneity of traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ in heterogeneous traffic has an important effect, typically in the mixed-velocity case. In the deterministic case, an SLZ leads to the appearance of car accidents even at low densities; in this region Pac increases with the fraction of fast vehicles (Ff). In the nondeterministic case, an SLZ decreases the effect of the braking probability Pb at low densities. Furthermore, the impact of multiple SLZs on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without an SLZ (n = 0) is lower than Pac with multiple SLZs (n > 0). However, the existence of multiple SLZs in the road decreases the risk of collision in the congestion phase.

  14. Maximum likelihood density modification by pattern recognition of structural motifs

    DOEpatents

    Terwilliger, Thomas C.

    2004-04-13

    An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function LL(x) = ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) is the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x, and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.

  15. Method for removing atomic-model bias in macromolecular crystallography

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM

    2006-08-01

    Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.

  16. An empirical probability model of detecting species at low densities.

    PubMed

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
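
The logistic detection-curve idea, relating sampling intensity and target density to the probability of detection, can be sketched as follows. The coefficients and the independent-survey assumption are illustrative, not the values fitted from these intertidal experiments.

```python
import math

def detection_prob(density, b0=-2.0, b1=1.5):
    """Logistic detection curve: per-survey probability of detecting the
    species as a function of target density (b0, b1 are illustrative)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * density)))

def surveys_needed(density, target=0.95):
    """Surveys required so the cumulative false-negative rate drops
    below 1 - target, assuming independent surveys."""
    p = detection_prob(density)
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

low_density_effort = surveys_needed(0.5)
high_density_effort = surveys_needed(3.0)
```

A curve like this makes the abstract's point quantitative: at low target densities the per-survey detection probability is small, so far more sampling effort is needed before absence can be interpreted with confidence.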

  17. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
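
Dividing the count by the capture probability, as described above, is straightforward. A sketch with a constant per-pass capture probability, which is a simplification relative to the paper's size- and habitat-dependent model; the counts and probability below are made-up example values.

```python
def cumulative_capture_prob(p_per_pass, passes):
    """Probability a fish is captured at least once in `passes`
    electrofishing passes, assuming a constant per-pass probability."""
    return 1.0 - (1.0 - p_per_pass) ** passes

def abundance_estimate(n_caught, p_per_pass, passes):
    """Adjust the raw count by the cumulative capture probability."""
    return n_caught / cumulative_capture_prob(p_per_pass, passes)

# E.g., 45 smallmouth bass caught over 3 passes with per-pass p = 0.4:
n_hat = abundance_estimate(45, 0.4, 3)
```

In the paper the per-pass probability itself comes from the fitted logistic regression (varying with fish length, pass number, and thalweg depth), so each size class gets its own adjustment.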

  18. Agricultural pesticide use in California: pesticide prioritization, use densities, and population distributions for a childhood cancer study.

    PubMed Central

    Gunier, R B; Harnly, M E; Reynolds, P; Hertz, A; Von Behren, J

    2001-01-01

    Several studies have suggested an association between childhood cancer and pesticide exposure. California leads the nation in agricultural pesticide use. A mandatory reporting system for all agricultural pesticide use in the state provides information on the active ingredient, amount used, and location. We calculated pesticide use density to quantify agricultural pesticide use in California block groups for a childhood cancer study. Pesticides with similar toxicologic properties (probable carcinogens, possible carcinogens, genotoxic compounds, and developmental or reproductive toxicants) were grouped together for this analysis. To prioritize pesticides, we weighted pesticide use by the carcinogenic and exposure potential of each compound. The top-ranking individual pesticides were propargite, methyl bromide, and trifluralin. We used a geographic information system to calculate pesticide use density in pounds per square mile of total land area for all United States census-block groups in the state. Most block groups (77%) averaged less than 1 pound per square mile of use for 1991-1994 for pesticides classified as probable human carcinogens. However, at the high end of use density (> 90th percentile), there were 493 block groups with more than 569 pounds per square mile. Approximately 170,000 children under 15 years of age were living in these block groups in 1990. The distribution of agricultural pesticide use and number of potentially exposed children suggests that pesticide use density would be of value for a study of childhood cancer. PMID:11689348

  19. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    NASA Astrophysics Data System (ADS)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S3, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated.
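
In the Zeldovich approximation the density at a Lagrangian point follows from the deformation-tensor eigenvalues as ρ/ρ̄ = Π_i |1 − D(t)λ_i|^(-1), diverging at caustics where a factor crosses zero. A minimal sketch of that relation (the eigenvalues and growth factor are arbitrary examples):

```python
def za_density_contrast(eigenvalues, growth):
    """Zeldovich-approximation density (in units of the mean) from the
    eigenvalues of the Lagrangian deformation tensor; diverges at a
    caustic, where some factor 1 - D*lambda_i crosses zero."""
    rho = 1.0
    for lam in eigenvalues:
        factor = 1.0 - growth * lam
        if factor == 0.0:
            return float("inf")   # caustic formation
        rho /= abs(factor)
    return rho

# Negative eigenvalues (expansion along all axes) -> an underdense region;
# one eigenvalue near 1/D (collapse along one axis) -> a high-density sheet.
underdense = za_density_contrast((-0.5, -0.2, -0.1), growth=1.0)
collapsing = za_density_contrast((0.9, 0.1, 0.05), growth=1.0)
```

The high-density tail of the PDF discussed in the abstract is controlled by exactly these near-caustic configurations, which is why smoothing that erases caustics reshapes the tail.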

  20. Progress in the development of PDF turbulence models for combustion

    NASA Technical Reports Server (NTRS)

    Hsu, Andrew T.

    1991-01-01

    A combined Monte Carlo-computational fluid dynamic (CFD) algorithm was developed recently at Lewis Research Center (LeRC) for turbulent reacting flows. In this algorithm, conventional CFD schemes are employed to obtain the velocity field and other velocity-related turbulent quantities, and a Monte Carlo scheme is used to solve the evolution equation for the probability density function (pdf) of species mass fraction and temperature. In combustion computations, the predictions of chemical reaction rates (the source terms in the species conservation equation) are poor if conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature produces excessively large errors. Moment closure models for the source terms have attained only limited success. The probability density function (pdf) method seems to be the only alternative at the present time that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus may be the only viable approach for more accurate turbulent combustion calculations. Assumed pdfs are useful in simple problems; however, for more general combustion problems, the solution of an evolution equation for the pdf is necessary.

  1. How likely are constituent quanta to initiate inflation?

    DOE PAGES

    Berezhiani, Lasha; Trodden, Mark

    2015-08-06

    In this study, we propose an intuitive framework for studying the problem of initial conditions in slow-roll inflation. In particular, we consider a universe at high, but sub-Planckian energy density and analyze the circumstances under which it is plausible for it to become dominated by inflated patches at late times, without appealing to the idea of self-reproduction. Our approach is based on defining a prior probability distribution for the constituent quanta of the pre-inflationary universe. To test the idea that inflation can begin under very generic circumstances, we make specific – yet quite general and well grounded – assumptions on the prior distribution. As a result, we are led to the conclusion that the probability for a given region to ignite inflation at sub-Planckian densities is extremely small. Furthermore, if one chooses to use the enormous volume factor that inflation yields as an appropriate measure, we find that the regions of the universe which started inflating at densities below the self-reproductive threshold nevertheless occupy a negligible physical volume in the present universe as compared to those domains that have never inflated.

  2. Occupancy and abundance of the endangered yellowcheek darter in Arkansas

    USGS Publications Warehouse

    Magoulick, Daniel D.; Lynch, Dustin T.

    2015-01-01

    The Yellowcheek Darter (Etheostoma moorei) is a rare fish endemic to the Little Red River watershed in the Boston Mountains of northern Arkansas. Remaining populations of this species are geographically isolated and declining, and the species was listed in 2011 as federally endangered. Populations have declined, in part, due to intense seasonal stream drying and inundation of lower reaches by a reservoir. We used a kick seine sampling approach to examine distribution and abundance of Yellowcheek Darter populations in the Middle Fork and South Fork Little Red River. We used presence data to estimate occupancy rates and detection probability and examined relationships between Yellowcheek Darter density and environmental variables. The species was found at five Middle Fork and South Fork sites where it had previously been present in 2003–2004. Occupancy rates were >0.6 but with wide 95% CI, and where the darters occurred, densities were typical of other Ozark darters but highly variable. Detection probability and density were positively related to current velocity. Given that stream drying has become more extreme over the past 30 years and anthropogenic threats have increased, regular monitoring and active management may be required to reduce extinction risk of Yellowcheek Darter populations.

  3. Modeling utilization distributions in space and time

    USGS Publications Warehouse

    Keating, K.A.; Cherry, S.

    2009-01-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
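    The product-kernel idea with a wrapped Cauchy kernel for a circular covariate such as day of year can be sketched as follows. This is a minimal illustration, not the authors' exact estimator: the Gaussian spatial kernels, the bandwidths, and the toy observations are all assumptions made for the example.

```python
import math

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on the circle; rho in [0, 1) controls concentration."""
    return (1.0 - rho ** 2) / (
        2.0 * math.pi * (1.0 + rho ** 2 - 2.0 * rho * math.cos(theta - mu))
    )

def product_kernel_density(x, y, t, obs, hx, hy, rho):
    """Product-kernel UD estimate: Gaussian kernels in (x, y), wrapped Cauchy
    in circular time t (radians), averaged over the observed locations."""
    total = 0.0
    for (xi, yi, ti) in obs:
        gx = math.exp(-0.5 * ((x - xi) / hx) ** 2) / (hx * math.sqrt(2.0 * math.pi))
        gy = math.exp(-0.5 * ((y - yi) / hy) ** 2) / (hy * math.sqrt(2.0 * math.pi))
        total += gx * gy * wrapped_cauchy(t, ti, rho)
    return total / len(obs)

# day of year mapped to an angle: t = 2*pi*doy/365 (hypothetical observations)
obs = [(0.0, 0.0, 2 * math.pi * 100 / 365), (1.0, 0.5, 2 * math.pi * 110 / 365)]
d = product_kernel_density(0.5, 0.25, 2 * math.pi * 105 / 365, obs, hx=1.0, hy=1.0, rho=0.8)
```

    Because the wrapped Cauchy kernel is periodic, densities near 31 December and 1 January reinforce each other rather than being treated as a year apart.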

  4. Optoelectronics of inverted type-I CdS/CdSe core/crown quantum ring

    NASA Astrophysics Data System (ADS)

    Bose, Sumanta; Fan, Weijun; Zhang, Dao Hua

    2017-10-01

    Inverted type-I heterostructure core/crown quantum rings (QRs) are quantum-efficient luminophores, whose spectral characteristics are highly tunable. Here, we study the optoelectronic properties of type-I core/crown CdS/CdSe QRs in the zincblende phase—over contrasting lateral size and crown width. For this, we inspect their strain profiles, transition energies, transition matrix elements, spatial charge densities, electronic bandstructures, band-mixing probabilities, optical gain spectra, maximum optical gains, and differential optical gains. Our framework uses an effective-mass envelope function theory based on the 8-band k·p method employing the valence force field model for calculating the atomic strain distributions. The gain calculations are based on the density-matrix equation and take into consideration the excitonic effects with intraband scattering. Variations in the QR lateral size and relative widths of core and crown (ergo the composition) affect their energy levels, band-mixing probabilities, optical transition matrix elements, emission wavelengths/intensities, etc. The optical gain of QRs is also strongly dimension and composition dependent with further dependency on the injection carrier density causing the band-filling effect. They also affect the maximum and differential gain at varying dimensions and compositions.

  5. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result due to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases.
A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
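    As one concrete instance of the non-parametric route, a kernel density estimate for a positive, heavy-tailed variable such as landslide area can be built in log space and back-transformed with the Jacobian 1/a. This is a minimal sketch with hypothetical areas and a hypothetical bandwidth; the tool's actual KDE implementation may differ.

```python
import math

def gaussian_kde_log(areas, h):
    """KDE for a heavy-tailed positive variable: smooth in log space, then
    back-transform so the returned function is a density in the original units."""
    logs = [math.log(a) for a in areas]
    n = len(logs)

    def density(a):
        u = math.log(a)
        k = sum(math.exp(-0.5 * ((u - li) / h) ** 2) for li in logs)
        # divide by a: Jacobian of the log transform
        return k / (n * h * math.sqrt(2.0 * math.pi) * a)

    return density

areas = [120.0, 250.0, 90.0, 400.0, 1500.0, 60.0, 300.0]  # hypothetical landslide areas, m^2
f = gaussian_kde_log(areas, h=0.5)
```

    Working in log space avoids the severe oversmoothing that a fixed-bandwidth KDE on raw areas would suffer when the data span several orders of magnitude.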

  6. Very High-Frequency (VHF) ionospheric scintillation fading measurements at Lima, Peru

    NASA Technical Reports Server (NTRS)

    Blank, H. A.; Golden, T. S.

    1972-01-01

    During the spring equinox of 1970, scintillating signals at VHF (136.4 MHz) were observed at Lima, Peru. The transmission originated from ATS 3 and was observed through a pair of antennas spaced 1200 feet apart on an east-west baseline. The empirical data were digitized, reduced, and analyzed. The results include amplitude probability density and distribution functions, time autocorrelation functions, cross-correlation functions for the spaced antennas, and the corresponding spectral density functions. The estimated statistics of the ground diffraction pattern give insight into gross ionospheric irregularity size and irregularity velocity in the antenna planes.

  7. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods that assume data normality. However, the real distribution can be nontrivial and may not have an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed-form densities, but instead use characteristic functions for comparison. The approach, applied to a pair of stock index returns, demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
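    The characteristic-function comparison underlying such tests can be sketched as follows. This is a minimal illustration with Gaussian toy data, a hypothetical frequency grid, and a simple average-distance statistic; the authors' sub-Gaussian characteristic function and actual test statistic are more involved.

```python
import cmath
import math
import random

def ecf(data, t):
    """Empirical characteristic function of bivariate data at frequency t = (t1, t2)."""
    n = len(data)
    return sum(cmath.exp(1j * (t[0] * x + t[1] * y)) for (x, y) in data) / n

def cf_distance(data, model_cf, grid):
    """Average absolute distance between the ECF and a model CF over a frequency grid."""
    return sum(abs(ecf(data, t) - model_cf(t)) for t in grid) / len(grid)

random.seed(1)
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]
# model: independent standard bivariate Gaussian, CF = exp(-(t1^2 + t2^2)/2)
model_cf = lambda t: math.exp(-0.5 * (t[0] ** 2 + t[1] ** 2))
grid = [(a / 2.0, b / 2.0) for a in range(-4, 5) for b in range(-4, 5)]
dist = cf_distance(data, model_cf, grid)
```

    A small distance indicates the model characteristic function is compatible with the sample; parameter estimation then amounts to minimizing this distance over the model's parameters.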

  8. Noise reduction in heat-assisted magnetic recording of bit-patterned media by optimizing a high/low Tc bilayer structure

    NASA Astrophysics Data System (ADS)

    Muthsam, O.; Vogler, C.; Suess, D.

    2017-12-01

    Heat-assisted magnetic recording is widely expected to be the recording technique of the future. For pure hard magnetic grains in high density media with an average diameter of 5 nm and a height of 10 nm, the switching probability is not sufficiently high for use in bit-patterned media. Using a bilayer structure with 50% hard magnetic material with low Curie temperature and 50% soft magnetic material with high Curie temperature to obtain more than 99.2% switching probability leads to very large jitter. We propose an optimized material composition to reach a switching probability of Pswitch > 99.2% and simultaneously achieve the narrow transition jitter of pure hard magnetic material. Simulations with a continuous laser spot were performed with the atomistic simulation program VAMPIRE for a single cylindrical recording grain with a diameter of 5 nm and a height of 10 nm. Different configurations of soft magnetic material and different amounts of hard and soft magnetic material were tested and discussed. Within our analysis, a composition with 20% soft magnetic and 80% hard magnetic material reaches the best results, with a switching probability Pswitch > 99.2%, an off-track jitter parameter σoff,80/20 = 0.46 nm and a down-track jitter parameter σdown,80/20 = 0.49 nm.

  9. Subalpine bumble bee foraging distances and densities in relation to flower availability.

    PubMed

    Elliott, Susan E

    2009-06-01

    Bees feed almost exclusively on nectar and pollen from flowers. However, little is known about how food availability limits bee populations, especially in high elevation areas. Foraging distances and relationships between forager densities and resource availability can provide insights into the potential for food limitation in mobile consumer populations. For example, if floral resources are limited, bee consumers should fly farther to forage, and they should be more abundant in areas with more flowers. I estimated subalpine bumble bee foraging distances by calculating forager recapture probabilities at increasing distances from eight marking locations. I measured forager and flower densities over the flowering season in six half-hectare plots. Because subalpine bumble bees have little time to build their colonies, they may forage over short distances and forager density may not be constrained by flower density. However, late in the season, when floral resources dwindle, foraging distances may increase, and there may be stronger relationships between forager and flower densities. Throughout the flowering season, marked bees were primarily found within 100 m (and never >1,000 m) from their original marking location, suggesting that they typically did not fly far to forage. Although the density of early season foraging queens increased with early-season flower density, the density of mid- and late-season workers and males did not vary with flower density. Short foraging distances and no relationships between mid- and late-season forager and flower densities suggest that high elevation bumble bees may have ample floral resources for colony growth and reproduction.

  10. Integrating resource selection information with spatial capture--recapture

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.

    2013-01-01

    4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.

  11. Effect of Phonotactic Probability and Neighborhood Density on Word-Learning Configuration by Preschoolers with Typical Development and Specific Language Impairment

    ERIC Educational Resources Information Center

    Gray, Shelley; Pittman, Andrea; Weinhold, Juliet

    2014-01-01

    Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…

  12. Draft genome analysis provides insights into the fiber yield, crude protein biosynthesis, and vegetative growth of domesticated ramie (Boehmeria nivea L. Gaud).

    PubMed

    Liu, Chan; Zeng, Liangbin; Zhu, Siyuan; Wu, Lingqing; Wang, Yanzhou; Tang, Shouwei; Wang, Hongwu; Zheng, Xia; Zhao, Jian; Chen, Xiaorong; Dai, Qiuzhong; Liu, Touming

    2017-11-15

    Plentiful bast fiber, a high crude protein content, and vigorous vegetative growth make ramie a popular fiber and forage crop. Here, we report the draft genome of ramie, along with a genomic comparison and evolutionary analysis. The draft genome contained a sequence of approximately 335.6 Mb with 42,463 predicted genes. A high-density genetic map with 4,338 single nucleotide polymorphisms (SNPs) was developed and used to anchor the genome sequence, thus, creating an integrated genetic and physical map containing a 58.2-Mb genome sequence and 4,304 molecular markers. A genomic comparison identified 1,075 unique gene families in ramie, containing 4,082 genes. Among these unique genes, five were cellulose synthase genes that were specifically expressed in stem bark, and 3 encoded a WAT1-related protein, suggesting that they are probably related to high bast fiber yield. An evolutionary analysis detected 106 positively selected genes, 22 of which were related to nitrogen metabolism, indicating that they are probably responsible for the crude protein content and vegetative growth of domesticated varieties. This study is the first to characterize the genome and develop a high-density genetic map of ramie and provides a basis for the genetic and molecular study of this crop. © The Author 2017. Published by Oxford University Press on behalf of Kazusa DNA Research Institute.

  13. Associations of Alcohol Availability and Neighborhood Socioeconomic Characteristics With Drinking: Cross-Sectional Results From the Multi-Ethnic Study of Atherosclerosis (MESA).

    PubMed

    Brenner, Allison B; Diez Roux, Ana V; Barrientos-Gutierrez, Tonatiuh; Borrell, Luisa N

    2015-01-01

    Living in neighborhoods with a high density of alcohol outlets and socioeconomic disadvantage may increase residents' alcohol use. Few researchers have studied these exposures in relation to multiple types of alcohol use, including beverage-specific consumption, and how individual demographic factors influence these relationships. To examine the relationships of alcohol outlet density and neighborhood disadvantage with alcohol consumption, and to investigate differences in these associations by race/ethnicity and income. Using cross-sectional data (N = 5,873) from the Multi-ethnic Study of Atherosclerosis in 2002, we examine associations of residential alcohol outlet density and neighborhood socioeconomic disadvantage with current, total weekly and heaviest daily alcohol use in gender-specific regression models, as well as moderation by race/ethnicity and income. Drinking men living near high densities of alcohol outlets had 23%-29% more weekly alcohol use than men in low density areas. Among women who drank, those living near a moderate density of alcohol outlets consumed approximately 40% less liquor each week than those in low density areas, but higher outlet densities were associated with more wine consumption (35%-49%). Living in highly or moderately disadvantaged neighborhoods was associated with a lower probability of being a current drinker, but with higher rates of weekly beer consumption. Income moderated the relationship between neighborhood context and weekly alcohol use. Neighborhood disadvantage and alcohol outlet density may influence alcohol use with effects varying by gender and income. Results from this research may help target interventions and policy to groups most at risk for greater weekly consumption.

  14. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently such as Winter Wren and Acadian Flycatcher had high detection probabilities (about 90%) and species that call infrequently such as Pileated Woodpecker had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
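    The removal idea can be sketched numerically. Assuming equal-length intervals and a constant per-interval detection probability p (a simplification; the paper uses unequal 2-, 5-, and 10-min intervals), first-detection counts follow a truncated geometric distribution, and p can be fit by maximum likelihood. The counts below are hypothetical.

```python
import math

def removal_loglik(p, counts):
    """Log-likelihood for counts of first detections in J equal intervals,
    conditional on detection during the survey (truncated geometric model)."""
    J = len(counts)
    p_total = 1.0 - (1.0 - p) ** J
    ll = 0.0
    for j, n in enumerate(counts, start=1):
        pj = p * (1.0 - p) ** (j - 1) / p_total
        ll += n * math.log(pj)
    return ll

def fit_detection(counts, grid=2000):
    """Grid-search MLE for the per-interval detection probability p and the
    overall detectability over the full count, 1 - (1-p)^J."""
    J = len(counts)
    best_p, best_ll = None, -float("inf")
    for i in range(1, grid):
        p = i / grid
        ll = removal_loglik(p, counts)
        if ll > best_ll:
            best_p, best_ll = p, ll
    return best_p, 1.0 - (1.0 - best_p) ** J

# hypothetical first-detection counts for one species in three equal intervals
counts = [60, 25, 15]
p_hat, detectability = fit_detection(counts)
```

    The fitted overall detectability plays the role of the species-level correction factor: dividing the raw count by it converts birds detected into birds present.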

  15. Properties of the probability density function of the non-central chi-squared distribution

    NASA Astrophysics Data System (ADS)

    András, Szilárd; Baricz, Árpád

    2008-10-01

    In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of the paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
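    For reference, the standard representations of this pdf (with k degrees of freedom and non-centrality parameter λ) are the Poisson mixture of central χ² densities and the equivalent modified-Bessel form; the finite-sum representation proved in the paper is a further reduction of these:

```latex
f(x;k,\lambda)
  \;=\; \sum_{i=0}^{\infty} \frac{e^{-\lambda/2}\,(\lambda/2)^{i}}{i!}\,
        f_{\chi^{2}_{k+2i}}(x)
  \;=\; \frac{1}{2}\, e^{-(x+\lambda)/2}
        \left(\frac{x}{\lambda}\right)^{\frac{k}{4}-\frac{1}{2}}
        I_{\frac{k}{2}-1}\!\left(\sqrt{\lambda x}\right),
  \qquad x > 0,
```

    where $f_{\chi^{2}_{m}}$ is the central χ² density with m degrees of freedom and $I_{\nu}$ is the modified Bessel function of the first kind.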

  16. Effects of different management regimes on survival of northern red oak underplantings in the Ridge and Valley Province

    Treesearch

    Adam E. Regula; David W. McGill; Cynthia D. Huebner

    2015-01-01

    While dominant throughout much of the eastern United States, a recent decline in oak regeneration has merited substantial research. Ultimately, successful regeneration entails the establishment of advance reproduction of sufficient size and density to provide a high probability of ascendancy to dominant or co-dominant status. Potential prescriptions for achieving this...

  17. Rapid measurement of the three-dimensional distribution of leaf orientation and the leaf angle probability density function using terrestrial LiDAR scanning

    USDA-ARS?s Scientific Manuscript database

    Leaf orientation plays a fundamental role in many transport processes in plant canopies. At the plant or stand level, leaf orientation is often highly anisotropic and heterogeneous, yet most analyses neglect such complexity. In many cases, this is due to the difficulty in measuring the spatial varia...

  18. Satellite accelerometer measurements of neutral density and winds during geomagnetic storms

    NASA Technical Reports Server (NTRS)

    Marcos, F. A.; Forbes, J. M.

    1986-01-01

    A new thermospheric wind measurement technique is reported which is based on a Satellite Electrostatic Triaxial Accelerometer (SETA) system capable of accurately measuring accelerations in the satellite's in-track, cross-track and radial directions. Data obtained during two time periods are presented. The first data set describes cross-track winds measured between 170 and 210 km during a 5-day period (25 to 29 March 1979) of mostly high geomagnetic activity. In the second data set, cross-track winds and neutral densities from SETA and exospheric temperatures from the Millstone Hill incoherent scatter radar are examined during an isolated magnetic substorm occurring on 21 March 1979. A polar thermospheric wind circulation consisting of a two cell horizontal convection pattern is reflected in both sets of cross-track acceleration measurements. The density response is highly asymmetric with respect to its day/night behavior. Latitude structures of the density response at successive times following the substorm peak suggest the equatorward propagation of a disturbance with a phase speed between 300 and 600 m/s. A deep depression in the density at high latitudes (less than 70 deg) is evident in conjunction with this phenomenon. The more efficient propagation of the disturbance to lower latitudes during the night is probably due to the midnight surge effect.

  19. Investigation of MHD flow structure and fluctuations by potassium lineshape fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauman, L.E.

    1993-12-31

    Multiple potassium D-line emission/absorption spectra from a high temperature, coal-fired flow have been fit to a radiative transfer, boundary layer flow model. The results of fitting spectra from the aerodynamic duct of the Department of Energy Coal-Fired Flow Facility provide information about the thickness and shape of the thermal boundary layer and the bulk potassium seed atom density in a simulated magnetohydrodynamic channel flow. Probability distribution functions for the entire set of more than six thousand spectra clearly indicate the typical values and magnitude of fluctuations for the flow: core temperature of 2538 ± 20 K, near wall temperature of 1945 ± 135 K, boundary layer width of about 1 cm, and potassium seed atom density of (5.1 ± 0.8) × 10^22/m^3. Probability distribution functions for selected times during the eight hours of measurements indicate occasional periods of unstable combustion. In addition, broadband particle parameters during the unstable start of the test may be related to differing particle and gas temperatures. The results clearly demonstrate the ability of lineshape fitting to provide valuable data for diagnosing the high speed turbulent flow.

  20. Irish study of high-density Schizophrenia families: Field methods and power to detect linkage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kendler, K.S.; Straub, R.E.; MacLean, C.J.

    Large samples of multiplex pedigrees will probably be needed to detect susceptibility loci for schizophrenia by linkage analysis. Standardized ascertainment of such pedigrees from culturally and ethnically homogeneous populations may improve the probability of detection and replication of linkage. The Irish Study of High-Density Schizophrenia Families (ISHDSF) was formed from standardized ascertainment of multiplex schizophrenia families in 39 psychiatric facilities covering over 90% of the population in Ireland and Northern Ireland. We here describe a phenotypic sample and a subset thereof, the linkage sample. Individuals were included in the phenotypic sample if adequate diagnostic information, based on personal interview and/or hospital record, was available. Only individuals with available DNA were included in the linkage sample. Inclusion of a pedigree into the phenotypic sample required at least two first, second, or third degree relatives with non-affective psychosis (NAP), one of whom had schizophrenia (S) or poor-outcome schizoaffective disorder (PO-SAD). Entry into the linkage sample required DNA samples on at least two individuals with NAP, of whom at least one had S or PO-SAD. Affection was defined by narrow, intermediate, and broad criteria. 75 refs., 6 tabs.

  1. Proposal and Evaluation of BLE Discovery Process Based on New Features of Bluetooth 5.0.

    PubMed

    Hernández-Solana, Ángela; Perez-Diaz-de-Cerio, David; Valdovinos, Antonio; Valenzuela, Jose Luis

    2017-08-30

    The device discovery process is one of the most crucial aspects in real deployments of sensor networks. Recently, several works have analyzed the topic of Bluetooth Low Energy (BLE) device discovery through analytical or simulation models limited to version 4.x. Non-connectable and non-scannable undirected advertising has been shown to be a reliable alternative for discovering a high number of devices in a relatively short time period. However, new features of Bluetooth 5.0 allow us to define a variant on the device discovery process, based on BLE scannable undirected advertising events, which results in higher discovering capacities and also lower power consumption. In order to characterize this new device discovery process, we experimentally model the real device behavior of BLE scannable undirected advertising events. Non-detection packet probability, discovery probability, and discovery latency for a varying number of devices and parameters are compared by simulations and experimental measurements. We demonstrate that our proposal outperforms previous works, diminishing the discovery time and increasing the potential user device density. A mathematical model is also developed in order to easily obtain a measure of the potential capacity in high density scenarios.

  3. Laser-induced incandescence measurements of soot in turbulent pool fires.

    PubMed

    Frederickson, Kraig; Kearney, Sean P; Grasser, Thomas W

    2011-02-01

    We present what we believe to be the first application of the laser-induced incandescence (LII) technique to large-scale fire testing. The construction of an LII instrument for fire measurements is presented in detail. Soot volume fraction imaging from 2 m diameter pool fires burning blended toluene/methanol liquid fuels is demonstrated along with a detailed report of measurement uncertainty in the challenging pool fire environment. Our LII instrument relies upon remotely located laser, optical, and detection systems and the insertion of water-cooled, fiber-bundle-coupled collection optics into the fire plume. Calibration of the instrument was performed using an ethylene/air laminar diffusion flame produced by a Santoro-type burner, which allowed for the extraction of absolute soot volume fractions from the LII images. Single-laser-shot two-dimensional images of the soot layer structure are presented with very high volumetric spatial resolution of the order of 10^-5 cm^3. Probability density functions of the soot volume fraction fluctuations are constructed from the large LII image ensembles. The results illustrate a highly intermittent soot fluctuation field with potentially large macroscale soot structures and clipped soot probability densities.
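    Constructing a PDF of fluctuations from an image ensemble reduces to a normalized histogram over the pooled samples. A minimal sketch with hypothetical soot volume fraction samples (not the paper's data); the spike at zero mimics the clipped, intermittent field described above.

```python
def pdf_from_samples(samples, nbins, lo, hi):
    """Histogram-based PDF estimate, normalized so the bin masses integrate to 1."""
    width = (hi - lo) / nbins
    counts = [0] * nbins
    for s in samples:
        k = min(int((s - lo) / width), nbins - 1)  # clamp top edge into last bin
        counts[k] += 1
    n = len(samples)
    return [c / (n * width) for c in counts]

# intermittent field: many near-zero samples plus occasional large soot structures
samples = [0.0] * 70 + [0.5, 0.8, 1.2, 2.0, 2.5, 3.0] * 5
pdf = pdf_from_samples(samples, nbins=10, lo=0.0, hi=3.5)
```

    The dominant first bin is the "clipped" part of the density: shots where the probed volume contained essentially no soot.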

  4. Estimating the influence of population density and dispersal behavior on the ability to detect and monitor Agrilus planipennis (Coleoptera: Buprestidae) populations.

    PubMed

    Mercader, R J; Siegert, N W; McCullough, D G

    2012-02-01

    Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.

  5. On Schrödinger's bridge problem

    NASA Astrophysics Data System (ADS)

    Friedland, S.

    2017-11-01

    In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
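    In the classical (matrix) case referenced above, the scaling can be computed by a Sinkhorn-type alternating iteration. The sketch below is a minimal illustration, assuming a hypothetical positive 2×2 matrix and probability vectors p, q; the quantum-channel result replaces these diagonal scalings with completely positive maps and is proved via Brouwer's fixed point theorem rather than this iteration.

```python
def scale_to_channel(A, p, q, iters=500):
    """Find u, v > 0 so that B = diag(u) A diag(v) is column stochastic and
    maps p to q, i.e. B p = q, by alternating the two normalizations."""
    n = len(A)
    u = [1.0] * n
    v = [1.0] * n
    for _ in range(iters):
        # enforce column sums of B equal to 1
        v = [1.0 / sum(u[i] * A[i][j] for i in range(n)) for j in range(n)]
        # enforce B p = q
        u = [q[i] / sum(A[i][j] * v[j] * p[j] for j in range(n)) for i in range(n)]
    return [[u[i] * A[i][j] * v[j] for j in range(n)] for i in range(n)]

A = [[2.0, 1.0], [1.0, 3.0]]        # positive matrix (hypothetical)
p, q = [0.4, 0.6], [0.7, 0.3]       # given input/output probability vectors
B = scale_to_channel(A, p, q)
```

    Positivity of A makes each update a contraction in the Hilbert projective metric, which is what guarantees convergence to the unique scaled stochastic matrix.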

  6. Ord's kangaroo rats living in floodplain habitats: Factors contributing to habitat attraction

    USGS Publications Warehouse

    Miller, M.S.; Wilson, K.R.; Andersen, D.C.

    2003-01-01

    High densities of an aridland granivore, Ord's kangaroo rat (Dipodomys ordii), have been documented in floodplain habitats along the Yampa River in northwestern Colorado. Despite a high probability of inundation and attendant high mortality during the spring flood period, the habitat is consistently recolonized. To understand factors that potentially make riparian habitats attractive to D. ordii, we compared density and spatial pattern of seeds, density of a competitor (western harvester ant, Pogonomyrmex occidentalis), and digging energetics within floodplain habitats and between floodplain and adjacent upland habitats. Seed density within the floodplain was greatest in the topographically high (rarely flooded) floodplain and lowest immediately after a spring flood in the topographically low (frequently flooded) floodplain. Seed densities in adjacent upland habitat that never floods were higher than the lowest floodplain habitat. In the low floodplain prior to flooding, seeds had a clumped spatial pattern, which D. ordii is adept at exploiting; after spring flooding, a more random pattern resulted. Populations of the western harvester ant were low in the floodplain relative to the upland. Digging by D. ordii was energetically less expensive in floodplain areas than in upland areas. Despite the potential for mortality due to annual spring flooding, the combination of less competition from harvester ants and lower energetic costs of digging might promote the use of floodplain habitat by D. ordii.

  7. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to understanding the underlying geological processes and to adequately assessing the effect of Es on differential settlement of large continuous structure foundations. Such analyses should be derived with an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method, which rigorously and efficiently integrates geotechnical investigations of differing precision and their sources of uncertainty. Individual CPT soundings were modeled as probability density curves by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and the likelihood PDF linking the CPT positions to the prediction point were built from the borehole experiments; then, by numerical integration over the CPT probability density curves, the posterior probability density curve at the prediction point was calculated within a Bayesian inverse interpolation framework. The results were compared with those of Gaussian sequential stochastic simulation, and the differences between modeling single CPT soundings as normal distributions versus maximum entropy probability density curves were also discussed. It is shown that the analysis of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by accounting for CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization of a stratum and identify limitations of inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  8. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
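
The two-step mapping procedure described in this abstract, per-dataset spatial probability density maps built with Gaussian kernels followed by a weighted linear combination, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the grid, bandwidth, and weights below are placeholder assumptions.

```python
import numpy as np

def kernel_density_map(points, grid_x, grid_y, bandwidth):
    """Gaussian kernel density estimate of event locations (e.g. past vents)
    evaluated on a regular grid."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(xx, dtype=float)
    for px, py in points:
        d2 = (xx - px) ** 2 + (yy - py) ** 2
        density += np.exp(-d2 / (2.0 * bandwidth ** 2))
    return density / (2.0 * np.pi * bandwidth ** 2 * len(points))

def combined_probability_map(maps, weights):
    """Weighted linear combination of per-dataset density maps, renormalized
    so the cell probabilities sum to one (a discrete vent-opening map)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    combined = sum(wi * m for wi, m in zip(w, maps))
    return combined / combined.sum()
```

In the paper the weights themselves carry epistemic uncertainty from the expert elicitation; sampling them repeatedly and recombining (the doubly stochastic step) would yield an ensemble of such maps.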

  9. STAR FORMATION IN TURBULENT MOLECULAR CLOUDS WITH COLLIDING FLOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsumoto, Tomoaki; Dobashi, Kazuhito; Shimoikura, Tomomi, E-mail: matsu@hosei.ac.jp

    2015-03-10

    Using self-gravitational hydrodynamical numerical simulations, we investigated the evolution of high-density turbulent molecular clouds swept by a colliding flow. The interaction of shock waves due to turbulence produces networks of thin filamentary clouds with a sub-parsec width. The colliding flow accumulates the filamentary clouds into a sheet cloud and promotes active star formation for initially high-density clouds. Clouds with a colliding flow exhibit a finer filamentary network than clouds without a colliding flow. The probability distribution functions (PDFs) for the density and column density can be fitted by lognormal functions for clouds without colliding flow. When the initial turbulence is weak, the column density PDF has a power-law wing at high column densities. The colliding flow considerably deforms the PDF, such that the PDF exhibits a double peak. The stellar mass distributions reproduced here are consistent with the classical initial mass function with a power-law index of –1.35 when the initial clouds have a high density. The distribution of stellar velocities agrees with the gas velocity distribution, which can be fitted by Gaussian functions for clouds without colliding flow. For clouds with colliding flow, the velocity dispersion of gas tends to be larger than the stellar velocity dispersion. The signatures of colliding flows and turbulence appear in channel maps reconstructed from the simulation data. Clouds without colliding flow exhibit a cloud-scale velocity shear due to the turbulence. In contrast, clouds with colliding flow show a prominent anti-correlated distribution of thin filaments between the different velocity channels, suggesting collisions between the filamentary clouds.

  10. Statistics of intensity in adaptive-optics images and their usefulness for detection and photometry of exoplanets.

    PubMed

    Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C

    2010-11-01

    This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
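
The off-axis speckle model named in this abstract, the modified Rician distribution, can be written down and sampled directly: the intensity is that of a constant phasor (coherent intensity Ic) plus circular complex Gaussian speckle (mean speckle intensity Is). The sketch below illustrates that generic distribution only; it is not the authors' discriminator or deconvolution code, and the parameter values in the usage note are arbitrary.

```python
import numpy as np

def modified_rician_pdf(intensity, Ic, Is):
    """Modified Rician PDF for off-axis adaptive-optics speckle.
    Ic: coherent (deterministic) intensity; Is: mean speckle intensity."""
    return (np.exp(-(intensity + Ic) / Is) / Is) * np.i0(2.0 * np.sqrt(intensity * Ic) / Is)

def sample_modified_rician(Ic, Is, n, rng):
    """Sample intensities as |constant phasor + circular complex Gaussian|^2,
    the physical picture behind the modified Rician distribution."""
    re = rng.normal(np.sqrt(Ic), np.sqrt(Is / 2.0), n)
    im = rng.normal(0.0, np.sqrt(Is / 2.0), n)
    return re ** 2 + im ** 2
```

A quick sanity check is that the sample mean approaches Ic + Is, the sum of the coherent and speckle intensities.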

  11. Dendritic brushes under theta and poor solvent conditions

    NASA Astrophysics Data System (ADS)

    Gergidis, Leonidas N.; Kalogirou, Andreas; Charalambopoulos, Antonios; Vlahos, Costas

    2013-07-01

    The effects of solvent quality on the internal stratification of polymer brushes formed by dendron polymers up to third generation were studied by means of molecular dynamics simulations with Langevin thermostat. The distributions of polymer units, of the free ends, the radii of gyration, and the back folding probabilities of the dendritic spacers were studied at the macroscopic states of theta and poor solvent. For high grafting densities we observed a small decrease in the height of the brush as the solvent quality decreases. The internal stratification in theta solvent was similar to the one we found in good solvent, with two and in some cases three kinds of populations containing short dendrons with weakly extended spacers, intermediate-height dendrons, and tall dendrons with highly stretched spacers. The differences increase as the grafting density decreases and single dendron populations were evident in theta and poor solvent. In poor solvent at low grafting densities, solvent micelles, polymeric pinned lamellae, spherical and single chain collapsed micelles were observed. The scaling dependence of the height of the dendritic brush at high density brushes for both solvents was found to be in agreement with existing analytical results.

  12. The formation process of the flood type lamina in the Lake Mokoto, Hokkaido, Japan.

    NASA Astrophysics Data System (ADS)

    Seto, K.; Katsuki, K.; Takeshi, S.

    2017-12-01

    Many brackish-water lakes are distributed along the Sea of Okhotsk coast in eastern Hokkaido, in the subarctic zone. The sediment of Lake Mokoto consists of organic mud with lamination. The 09Mk-1C core was collected in 2009. In soft X-ray photographs, a cyclic lamina set is observed, consisting of low- and high-density laminae. According to meteorological data for the Abashiri region, annual precipitation is highest from August to September, so the cyclic lamina set was probably formed by seasonal changes in precipitation. August 2016 brought 425 mm of precipitation, about four times the average. In February 2017, a 10 cm class short core (17Mk-4SC) was collected, and flood laminae were observed. Six layers of different color were observed in the top 6.5 cm. The first, third, and fifth layers from the top are relatively light in color (L* value = 17); the second, fourth, and sixth layers are relatively dark (L* value = 8). The first to fourth layers are each about 5 mm thick, but the fifth layer reaches 4 cm. Observation of the soft X-ray photographs shows that the third and fifth layers are high-density laminae and the others low-density laminae. Because these layers were not observed in the 15Mk-3C core collected in March 2015, they were deposited after that date. The third layer, a high-density lamina, is interpreted as the sediment of the August 2016 flood event. This is supported by the fact that the total organic carbon (TOC) and total sulfur (TS) contents are diluted while the C/N ratio is relatively high. Because this lamina is distinctive, it can serve as a key bed in future studies. The fifth layer, also high density, is very thick; in it the TS content is diluted and the C/N ratio is high, but the TOC content shows its highest value, suggesting that sediments rich in organic carbon flowed out of the Mokoto River basin. 
A large-scale artificial sediment discharge has been reported from agricultural lands in the basin, and this sediment was probably deposited in downstream Lake Mokoto. It is suggested that a remarkable event layer can be formed by sediment discharge due to slope collapse in the basin.

  13. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on wp and wa for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  14. [Risk analysis of naphthalene pollution in soils of Tianjin].

    PubMed

    Yang, Yu; Shi, Xuan; Xu, Fu-liu; Tao, Shu

    2004-03-01

    Three approaches were applied and evaluated for probabilistic risk assessment of naphthalene in soils of Tianjin, China, based on the observed naphthalene concentrations of 188 top soil samples from the area and literature LC50 values of naphthalene for ten typical soil fauna species. The overlapping area of the two probability density functions of concentration and LC50 was 6.4%, the joint probability curve bent toward and lay very close to the bottom and left axes, and the calculated probability that exposure concentration exceeds the LC50 of the various species was as low as 1.67%, all indicating a very acceptable risk of naphthalene to the soil fauna ecosystem, with only some very sensitive species or individual animals threatened by localized, extremely high concentrations. The three approaches revealed similar results from different viewpoints.
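
The two headline quantities in this abstract, the overlapping area of the exposure and LC50 probability densities and the probability that exposure exceeds LC50, have simple numerical or closed-form sketches if both distributions are taken as lognormal (a common choice in such assessments). The parameters below are illustrative placeholders, not the Tianjin values.

```python
import numpy as np
from math import erf, sqrt

def exceedance_probability(mu_c, s_c, mu_l, s_l):
    """P(exposure C > LC50 L) when ln C ~ N(mu_c, s_c^2) and ln L ~ N(mu_l, s_l^2).
    ln C - ln L is then normal, so the answer is a single Gaussian tail."""
    z = (mu_c - mu_l) / sqrt(s_c ** 2 + s_l ** 2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def pdf_overlap(mu_c, s_c, mu_l, s_l, grid):
    """Overlapping area of the two lognormal PDFs: the integral of min(f, g),
    evaluated by the trapezoid rule on the supplied grid."""
    def lognorm_pdf(x, mu, s):
        return np.exp(-(np.log(x) - mu) ** 2 / (2.0 * s ** 2)) / (x * s * np.sqrt(2.0 * np.pi))
    h = np.minimum(lognorm_pdf(grid, mu_c, s_c), lognorm_pdf(grid, mu_l, s_l))
    return float(np.sum((h[1:] + h[:-1]) * np.diff(grid)) / 2.0)
```

When the exposure distribution sits far below the species-sensitivity distribution, both the overlap and the exceedance probability become small, which is the situation the abstract reports.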

  15. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method requires only two assumptions: the copy number of every molecular species may be treated as continuous, and the probability density functions (PDFs) are well approximated by multivariate skew normal distributions (MSNDs). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
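
The core idea, expressing moments in terms of the skew normal's intrinsic parameters and inverting, has a closed form in the univariate case, sketched below as a stand-in for the multivariate optimization the authors perform in Mathematica. The formulas are the standard method-of-moments relations for the skew normal with location xi, scale omega, and shape alpha; this is not the authors' code.

```python
import numpy as np

def skew_normal_moment_fit(samples):
    """Method-of-moments fit of a univariate skew normal.
    Inverts the mean/variance/skewness relations to recover (xi, omega, alpha)."""
    m = np.mean(samples)
    v = np.var(samples)
    g1 = np.mean((samples - m) ** 3) / v ** 1.5
    # the skew normal only supports |skewness| < ~0.995
    g1 = float(np.clip(g1, -0.99, 0.99))
    b = np.sqrt(2.0 / np.pi)
    r = abs(g1) ** (2.0 / 3.0)
    delta = np.sign(g1) * np.sqrt((np.pi / 2.0) * r / (r + ((4.0 - np.pi) / 2.0) ** (2.0 / 3.0)))
    omega = np.sqrt(v / (1.0 - b ** 2 * delta ** 2))
    xi = m - omega * delta * b
    alpha = delta / np.sqrt(1.0 - delta ** 2)
    return xi, omega, alpha
```

In the multivariate setting no such closed form exists, which is why the paper instead minimizes the squared mismatch between the moment equations' two sides.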

  16. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position and predict its terminal location. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location, quantifying the algorithm's success.

  17. Effects of environmental covariates and density on the catchability of fish populations and interpretation of catch per unit effort trends

    USGS Publications Warehouse

    Korman, Josh; Yard, Mike

    2017-01-01

    Article for outlet: Fisheries Research. Abstract: Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance, led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.
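
The bias mechanism described here, catchability that declines as density rises so that a constant-q conversion of catch per unit effort compresses the true range of abundance, can be demonstrated with a toy calculation. The exponent and scale below are invented for illustration and are not estimates from the Colorado River study.

```python
import numpy as np

def cpue_from_abundance(n_fish, q0=0.05, density_exponent=-0.3):
    """Catch per unit effort when catchability falls with density
    (hyperstability): q(N) = q0 * N**density_exponent, CPUE = q(N) * N."""
    return q0 * n_fish ** density_exponent * n_fish

def naive_abundance(cpue, q_const):
    """Constant-catchability reconstruction N_hat = CPUE / q."""
    return cpue / q_const

# a 10-fold true range of abundance...
true_n = np.array([1000.0, 2000.0, 5000.0, 10000.0])
cpue = cpue_from_abundance(true_n)
# ...reconstructed assuming q is fixed at its low-density value
q_const = cpue_from_abundance(1000.0) / 1000.0
est_n = naive_abundance(cpue, q_const)
```

With the exponent of -0.3 assumed here, a true 10-fold range of abundance appears as only about a 5-fold range in the naive reconstruction, the same kind of underestimation the abstract reports.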

  18. Wavefronts, actions and caustics determined by the probability density of an Airy beam

    NASA Astrophysics Data System (ADS)

    Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón

    2018-07-01

    The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of: caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maxima of the probability density of an Airy beam determines a Hamiltonian system.

  19. Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.

    PubMed

    Guo, Lian; Radisic, Aleksandar; Searson, Peter C

    2005-12-22

    Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
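
A one-dimensional toy version of this approach makes the role of the two attachment probabilities concrete: ions land at random sites and stick with probability p_sub on bare substrate or p_metal next to existing metal, and island density emerges rather than being assumed. This is a sketch in the spirit of the abstract, not the authors' Brownian-dynamics KMC; all parameters are placeholders.

```python
import numpy as np

def count_islands(occ):
    """Number of connected occupied clusters on a ring (periodic 1-D lattice)."""
    if bool(occ.all()):
        return 1
    # count empty -> occupied transitions around the ring
    return int(np.sum(~occ & np.roll(occ, -1)))

def kmc_deposition(n_sites=200, n_attempts=2000, p_sub=0.01, p_metal=0.8, seed=0):
    """Toy 1-D deposition: each attempt drops an ion on a random site, which
    sticks with p_metal if a neighbor is metal, else with p_sub.
    Returns the final occupancy and the island count after every attempt."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(n_sites, dtype=bool)
    islands = []
    for _ in range(n_attempts):
        s = int(rng.integers(n_sites))
        if not occ[s]:
            neighbor_metal = occ[(s - 1) % n_sites] or occ[(s + 1) % n_sites]
            p = p_metal if neighbor_metal else p_sub
            if rng.random() < p:
                occ[s] = True
        islands.append(count_islands(occ))
    return occ, np.array(islands)
```

Raising p_metal relative to p_sub favors growth of existing islands over new nucleation, which is the competition the paper probes via current transients.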

  20. Cell survival fraction estimation based on the probability densities of domain and cell nucleus specific energies using improved microdosimetric kinetic models.

    PubMed

    Sato, Tatsuhiko; Furusawa, Yoshiya

    2012-10-01

    Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly due to intrinsic ignorance of the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.

  1. A computationally efficient ductile damage model accounting for nucleation and micro-inertia at high triaxialities

    DOE PAGES

    Versino, Daniele; Bronkhorst, Curt Allan

    2018-01-31

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second-order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution, and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce the accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high-density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. The results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.

  2. Nonocclusal dental microwear analysis of 300,000-year-old Homo heidelbergensis teeth from Sima de los Huesos (Sierra de Atapuerca, Spain).

    PubMed

    Pérez-Pérez, A; Bermúdez De Castro, J M; Arsuaga, J L

    1999-04-01

    Casts of nonocclusal enamel surfaces of 190 teeth from the Middle Pleistocene site of Sima de los Huesos have been micrographed by scanning electron microscopy. Microscopic analyses of striation density and length by orientation show distinct patterns of intrapopulation variability. Significant differences in the number and length of the striations by orientation are found between maxillary and mandibular teeth. This probably reflects differences in the mechanical forces involved in the process of chewing food. Significant differences are present between isolated and in situ teeth that could be caused by postdepositional processes differentially affecting the isolated teeth. In addition, a distinct and very unusual striation pattern is observed in a sample of teeth that can be explained only by a strong nondietary, most probably postmortem abrasion of the enamel surfaces. These teeth have a very high density of scratches, shorter in length than those found on other teeth, that are not indicative of dietary habits. No known depositional process may account for the presence of such postmortem wear since heavy transportation of materials within the clayish sediments has been discarded for the site. Despite this, a characteristic dietary striation pattern can be observed in most of the teeth analyzed. Most likely the diet of the Homo heidelbergensis hominids from Sima de los Huesos was highly abrasive, probably with a large dependence on hard, poorly processed plant foods, such as roots, stems, and seeds. A highly significant sex-related difference in the striation pattern can also be observed in the teeth analyzed, suggesting a differential consistency in the foods eaten by females and males.

  3. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
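
The chain described in this abstract, drawing source parameters, applying the passive sonar equation to get an SNR, and pushing that SNR through a detector characterization, reduces to a short Monte Carlo average. Every distribution and constant below (source level, spherical-spreading transmission loss, noise level, logistic detector curve) is an illustrative placeholder, not a value from the Blainville's beaked whale analysis.

```python
import numpy as np

def mean_detection_probability(noise_db=130.0, n=100000, seed=0):
    """Monte Carlo average of P(detect a click) at a single fixed sensor.
    SNR = SL - TL - NL (passive sonar equation); the detector is
    characterized by a logistic curve in SNR. All inputs are assumed."""
    rng = np.random.default_rng(seed)
    source_level = rng.normal(200.0, 5.0, n)          # dB re 1 uPa @ 1 m (assumed)
    slant_range = rng.uniform(100.0, 4000.0, n)       # metres (assumed)
    transmission_loss = 20.0 * np.log10(slant_range)  # spherical spreading stand-in
    snr = source_level - transmission_loss - noise_db
    p_detect = 1.0 / (1.0 + np.exp(-(snr - 10.0) / 2.0))
    return float(p_detect.mean())
```

The resulting average detection probability is the quantity that, combined with call rate and false positive rate, converts a click count into a density estimate.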

  4. Forcing of the Coupled Ionosphere-Thermosphere (IT) System During Magnetic Storms

    NASA Technical Reports Server (NTRS)

    Huang, Cheryl; Huang, Yanshi; Su, Yi-Jiun; Sutton, Eric; Hairston, Marc; Coley, W. Robin; Doornbos, Eelco; Zhang, Yongliang

    2014-01-01

    Poynting flux shows peaks around the auroral zone and inside the polar cap, and energy enters the IT system at all local times in the polar cap. Track-integrated flux at DMSP often peaks at polar latitudes, probably due to the increased area of the polar cap during storm main phases. Ion temperatures at DMSP show large increases in the polar region at all local times; the cusp and auroral zones do not show distinctively high Ti. Ion temperatures in the polar cap are higher than in the auroral zones during quiet times. Neutral densities at GRACE and GOCE show maxima at polar latitudes without clear auroral signatures; the response is fast, minutes from onset to density peaks. GUVI observations of the O/N2 ratio during storms show a response similar to direct measurements of ion and neutral densities, i.e., high temperatures in the polar cap during the prestorm quiet period, with heating proceeding from the polar cap to lower latitudes during the storm main phase. The discrepancy between maps of Poynting flux and of ion temperatures/neutral densities suggests that the connection between Poynting flux and Joule heating is not simple.

  5. Oak regeneration and overstory density in the Missouri Ozarks

    Treesearch

    David R. Larsen; Monte A. Metzger

    1997-01-01

    Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...
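
A logistic regression model of the kind this paper presents relates the probability that oak reproduction reaches a given size class to the residual overstory density. The sketch below shows the functional form only; the coefficients are invented for illustration and are not fitted values from the Missouri Ozarks data.

```python
import math

def regeneration_probability(overstory_basal_area, b0=1.5, b1=-0.15):
    """Illustrative logistic model: probability that oak reproduction reaches a
    target size class declines with residual overstory basal area (m^2/ha).
    b0 and b1 are hypothetical coefficients, not fitted values."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * overstory_basal_area)))
```

Under these placeholder coefficients, reducing residual basal area raises the predicted regeneration probability, the qualitative pattern the recommendations rely on.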

  6. Reconstructing the deadly eruptive events of 1790 CE at Kīlauea Volcano, Hawai‘i

    USGS Publications Warehouse

    Swanson, Don; Weaver, Samantha J; Houghton, Bruce F.

    2014-01-01

    A large number of people died during an explosive eruption of Kīlauea Volcano in 1790 CE. Detailed study of the upper part of the Keanakāko‘i Tephra has identified the deposits that may have been responsible for the deaths. Three successive units record shifts in eruption style that agree well with accounts of the eruption based on survivor interviews 46 yr later. First, a wet fall of very fine, accretionary-lapilli–bearing ash created a “cloud of darkness.” People walked across the soft deposit, leaving footprints as evidence. While the ash was still unconsolidated, lithic lapilli fell into it from a high eruption column that was seen from 90 km away. Either just after this tephra fall or during its latest stage, pulsing dilute pyroclastic density currents, probably products of a phreatic eruption, swept across the western flank of Kīlauea, embedding lapilli in the muddy ash and crossing the trail along which the footprints occur. The pyroclastic density currents were most likely responsible for the fatalities, as judged from the reported condition and probable location of the bodies. This reconstruction is relevant today, as similar eruptions will probably occur in the future at Kīlauea and represent its most dangerous and least predictable hazard.

  7. Topology of two-dimensional turbulent flows of dust and gas

    NASA Astrophysics Data System (ADS)

    Mitra, Dhrubaditya; Perlekar, Prasad

    2018-04-01

    We perform direct numerical simulations (DNS) of passive heavy inertial particles (dust) in homogeneous and isotropic two-dimensional turbulent flows (gas) for a range of Stokes number, St<1 . We solve for the particles using both a Lagrangian and an Eulerian approach (with a shock-capturing scheme). In the latter, the particles are described by a dust-density field and a dust-velocity field. We find the following: the dust-density field in our Eulerian simulations has the same correlation dimension d2 as obtained from the clustering of particles in the Lagrangian simulations for St<1 ; the cumulative probability distribution function of the dust density coarse grained over a scale r , in the inertial range, has a left tail with a power-law falloff indicating the presence of voids; the energy spectrum of the dust velocity has a power-law range with an exponent that is the same as the gas-velocity spectrum except at very high Fourier modes; the compressibility of the dust-velocity field is proportional to St2. We quantify the topological properties of the dust velocity and the gas velocity through their gradient matrices, called A and B , respectively. Our DNS confirms that the statistics of topological properties of B are the same in Eulerian and Lagrangian frames only if the Eulerian data are weighed by the dust density. We use this correspondence to study the statistics of topological properties of A in the Lagrangian frame from our Eulerian simulations by calculating density-weighted probability distribution functions. We further find that in the Lagrangian frame, the mean value of the trace of A is negative and its magnitude increases with St approximately as exp(-C /St) with a constant C ≈0.1 . The statistical distribution of different topological structures that appear in the dust flow is different in Eulerian and Lagrangian (density-weighted Eulerian) cases, particularly for St close to unity. 
In both of these cases, for small St the topological structures have close to zero divergence and are either vortical (elliptic) or strain dominated (hyperbolic, saddle). As St increases, the contribution to negative divergence comes mostly from saddles and the contribution to positive divergence comes from both vortices and saddles. Compared to the Eulerian case, the Lagrangian (density-weighted Eulerian) case has less outward spirals and more converging saddles. Inward spirals are the least probable topological structures in both cases.
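
The density-weighted statistics described above can be sketched as a weighted histogram. A minimal sketch with synthetic stand-ins: `div_A` and `rho_d` below are hypothetical 1D arrays playing the roles of the divergence tr A and the dust density, not the paper's DNS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for an Eulerian snapshot: the divergence tr A of the
# dust-velocity gradient and the dust density, flattened to 1D arrays.
div_A = rng.normal(loc=-0.05, scale=0.3, size=100_000)
rho_d = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)

bins = np.linspace(-1.5, 1.5, 61)
pdf_euler, _ = np.histogram(div_A, bins=bins, density=True)
pdf_lagr, _ = np.histogram(div_A, bins=bins, weights=rho_d, density=True)

# Density weighting shifts statistics of tr A toward particle-laden regions,
# mimicking Lagrangian-frame averages.
print(div_A.mean(), np.average(div_A, weights=rho_d))
```

Both histograms are normalized probability densities; the only difference is that the second weights each grid cell by its dust density.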

  8. Quantifying volcanic hazard at Campi Flegrei caldera (Italy) with uncertainty assessment: 2. Pyroclastic density current invasion maps

    NASA Astrophysics Data System (ADS)

    Neri, Augusto; Bevilacqua, Andrea; Esposti Ongaro, Tomaso; Isaia, Roberto; Aspinall, Willy P.; Bisson, Marina; Flandoli, Franco; Baxter, Peter J.; Bertagnini, Antonella; Iannuzzi, Enrico; Orsucci, Simone; Pistolesi, Marco; Rosi, Mauro; Vitale, Stefano

    2015-04-01

    Campi Flegrei (CF) is an example of an active caldera containing densely populated settlements at very high risk of pyroclastic density currents (PDCs). We present here an innovative method for assessing background spatial PDC hazard in a caldera setting with probabilistic invasion maps conditional on the occurrence of an explosive event. The method encompasses the probabilistic assessment of potential vent opening positions, derived in the companion paper, combined with inferences about the spatial density distribution of PDC invasion areas from a simplified flow model, informed by reconstruction of deposits from eruptions in the last 15 ka. The flow model describes the PDC kinematics and accounts for main effects of topography on flow propagation. Structured expert elicitation is used to incorporate certain sources of epistemic uncertainty, and a Monte Carlo approach is adopted to produce a set of probabilistic hazard maps for the whole CF area. Our findings show that, in case of eruption, almost the entire caldera is exposed to invasion with a mean probability of at least 5%, with peaks greater than 50% in some central areas. Some areas outside the caldera are also exposed to this danger, with mean probabilities of invasion of the order of 5-10%. Our analysis suggests that these probability estimates have location-specific uncertainties which can be substantial. The results prove to be robust with respect to alternative elicitation models and allow the influence on hazard mapping of different sources of uncertainty, and of theoretical and numerical assumptions, to be quantified.
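
A toy Monte Carlo sketch of conditional invasion-probability mapping of this kind. All inputs are hypothetical: the grid size, the Gaussian vent-opening map, and the uniform 5-15-cell runout range stand in for the elicited vent-opening distribution and the simplified flow model of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                   # hypothetical caldera grid, n x n cells
yy, xx = np.mgrid[0:n, 0:n]

# Hypothetical vent-opening probability map (peaked near the caldera centre).
p_vent = np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / (2 * (n / 6) ** 2))
p_vent /= p_vent.sum()

trials = 2000
vents = rng.choice(n * n, size=trials, p=p_vent.ravel())
invaded = np.zeros((n, n))
for k in vents:
    vy, vx = divmod(k, n)
    runout = rng.uniform(5, 15)          # hypothetical PDC runout, in cells
    invaded += ((xx - vx) ** 2 + (yy - vy) ** 2) <= runout ** 2

p_invasion = invaded / trials            # invasion probability given an eruption
print(p_invasion.max())
```

Each trial samples one vent location and one runout; averaging the invaded footprints over trials yields the conditional invasion-probability map.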

  9. Aging ballistic Lévy walks

    NASA Astrophysics Data System (ADS)

    Magdziarz, Marcin; Zorawik, Tomasz

    2017-02-01

    Aging can be observed for numerous physical systems. In such systems statistical properties [like the probability distribution, mean square displacement (MSD), and first-passage time] depend on the time span t_a between the initialization of the system and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate their probability distributions and MSDs explicitly. It turns out that, despite similarities, these models react very differently to the delay t_a. Aging only weakly affects the shape of the probability density function and the MSD of standard Lévy walks, whereas for the jump models the shape of the probability density function changes drastically. Moreover, for the wait-first jump model we observe different behavior of the MSD when t_a ≪ t and when t_a ≫ t.
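
A minimal simulation sketch of an aged ballistic Lévy walk, assuming a Pareto flight-time density ψ(τ) ∝ τ^(−1−α) with α < 1 and constant speed v; the parameter values are illustrative, and the wait-first/jump-first variants are not implemented.

```python
import numpy as np

rng = np.random.default_rng(5)

def aged_displacement(t_obs, t_age, alpha=0.7, v=1.0):
    """Displacement gathered between t_age and t_age + t_obs for one walker.

    Ballistic Levy walk: flights of random duration tau with density
    ~ tau**(-1 - alpha) (alpha < 1, so the mean flight time diverges),
    traversed at constant speed v in a random direction.
    """
    t, x, x_age = 0.0, 0.0, None
    while True:
        tau = rng.random() ** (-1.0 / alpha)      # Pareto flight time >= 1
        sgn = v if rng.random() < 0.5 else -v
        if x_age is None and t + tau >= t_age:
            x_age = x + sgn * (t_age - t)         # position when the clock starts
        if t + tau >= t_age + t_obs:
            return x + sgn * (t_age + t_obs - t) - x_age
        x += sgn * tau
        t += tau

t_obs = 100.0
fresh = np.array([aged_displacement(t_obs, t_age=0.0) for _ in range(200)])
aged = np.array([aged_displacement(t_obs, t_age=1000.0) for _ in range(200)])
print(np.mean(fresh**2), np.mean(aged**2))   # ensemble MSDs react to the delay
```

Comparing the two ensemble MSDs (fresh versus delayed start) illustrates the aging dependence on t_a discussed above.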

  10. Extracting the distribution of laser damage precursors on fused silica surfaces for 351 nm, 3 ns laser pulses at high fluences (20-150 J/cm²).

    PubMed

    Laurence, Ted A; Bude, Jeff D; Ly, Sonny; Shen, Nan; Feit, Michael D

    2012-05-07

    Surface laser damage limits the lifetime of optics for systems guiding high fluence pulses, particularly damage in silica optics used for inertial confinement fusion-class lasers (nanosecond-scale high energy pulses at 355 nm/3.5 eV). The density of damage precursors at low fluence has been measured using large beams (1-3 cm); higher fluences cannot be measured easily since the high density of resulting damage initiation sites results in clustering. We developed automated experiments and analysis that allow us to damage test thousands of sites with small beams (10-30 µm), and to automatically image the test sites to determine if laser damage occurred. We developed an analysis method that provides a rigorous connection between these small-beam damage test results of damage probability versus laser pulse energy and the large-beam damage results of damage precursor densities versus fluence. We find that for uncoated and coated fused silica samples, the distribution of precursors nearly flattens at very high fluences, up to 150 J/cm², providing important constraints on the physical distribution and nature of these precursors.
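
Connections of this kind are commonly based on Poisson statistics of precursors within the beam area. A minimal sketch under that assumption, with hypothetical numbers (the fluences, probabilities, and spot size below are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical small-beam test results: fraction of sites damaged at each fluence.
fluence = np.array([40.0, 60.0, 80.0, 100.0, 120.0])     # J/cm^2
p_damage = np.array([0.05, 0.20, 0.45, 0.60, 0.65])
beam_area_cm2 = np.pi * (15e-4 / 2) ** 2                  # ~15 um diameter spot

# Poisson assumption: P(damage) = 1 - exp(-rho * A), so rho = -ln(1 - P) / A.
rho = -np.log1p(-p_damage) / beam_area_cm2                # active precursors per cm^2
print(rho)
```

Inverting the Poisson relation converts small-beam damage probabilities into a cumulative precursor density versus fluence, the quantity that large-beam tests measure directly.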

  11. Benchmarks for detecting 'breakthroughs' in clinical trials: empirical assessment of the probability of large treatment effects using kernel density estimation.

    PubMed

    Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin

    2014-10-21

    Our objective was to understand how often 'breakthroughs', that is, treatments that significantly improve health outcomes, can be developed. We applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups, and 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they had discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited.
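
A sketch of the approach using scipy's weighted Gaussian KDE on synthetic trial data; the effect sizes, inverse-variance weights, and the hazard-ratio cutoff for a "large" effect are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Illustrative stand-ins for observed treatment effects (log hazard ratios)
# and inverse-variance weights across many trial comparisons.
effects = rng.normal(loc=-0.05, scale=0.25, size=500)
weights = 1.0 / rng.uniform(0.01, 0.1, size=500)

kde = gaussian_kde(effects, weights=weights / weights.sum())

# Probability of a large beneficial effect, taken here as hazard ratio < 0.7.
p_large = kde.integrate_box_1d(-np.inf, np.log(0.7))
print(p_large)
```

Integrating the estimated density below the chosen cutoff gives the benchmark probability of observing a "large" treatment effect.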

  12. Precipitation Cluster Distributions: Current Climate Storm Statistics and Projected Changes Under Global Warming

    NASA Astrophysics Data System (ADS)

    Quinn, Kevin Martin

    The total amount of precipitation integrated across a precipitation cluster (contiguous precipitating grid cells exceeding a minimum rain rate) is a useful measure of the aggregate size of the disturbance, expressed as the rate of water mass lost or latent heat released, i.e. the power of the disturbance. Probability distributions of cluster power are examined during boreal summer (May-September) and winter (January-March) using satellite-retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) 3B42 and Special Sensor Microwave Imager and Sounder (SSM/I and SSMIS) programs, model output from the High Resolution Atmospheric Model (HIRAM, roughly 0.25-0.5° resolution), seven 1-2° resolution members of the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiment, and National Center for Atmospheric Research Large Ensemble (NCAR LENS). Spatial distributions of precipitation-weighted centroids are also investigated in observations (TRMM-3B42) and climate models during winter as a metric for changes in mid-latitude storm tracks. Observed probability distributions for both seasons are scale-free from the smallest clusters up to a cutoff scale at high cluster power, after which the probability density drops rapidly. When low rain rates are excluded by choosing a minimum rain rate threshold in defining clusters, the models accurately reproduce observed cluster power statistics and winter storm tracks. Changes in behavior in the tail of the distribution, above the cutoff, are important for impacts since these quantify the frequency of the most powerful storms. End-of-century cluster power distributions and storm track locations are investigated in these models under a "business as usual" global warming scenario. The probability of high cluster power events increases by end-of-century across all models, by up to an order of magnitude for the highest-power events for which statistics can be computed. 
For the three models in the suite with continuous time series of high resolution output, there is substantial variability on when these probability increases for the most powerful precipitation clusters become detectable, ranging from detectable within the observational period to statistically significant trends emerging only after 2050. A similar analysis of National Centers for Environmental Prediction (NCEP) Reanalysis 2 and SSM/I-SSMIS rain rate retrievals in the recent observational record does not yield reliable evidence of trends in high-power cluster probabilities at this time. Large impacts to mid-latitude storm tracks are projected over the West Coast and eastern North America, with no less than 8 of the 9 models examined showing large increases by end-of-century in the probability density of the most powerful storms, ranging up to a factor of 6.5 in the highest range bin for which historical statistics are computed. However, within these regional domains, there is considerable variation among models in pinpointing exactly where the largest increases will occur.
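
The cluster definition above (contiguous grid cells exceeding a minimum rain rate, integrated to a cluster power) can be sketched with `scipy.ndimage` on a synthetic rain-rate field; the gamma-distributed field and 1 mm/hr threshold are illustrative stand-ins for the satellite and model data.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

# Synthetic gridded rain-rate field (mm/hr) standing in for TRMM-3B42 or
# model output; rates below the minimum rain-rate threshold are zeroed.
rain = rng.gamma(shape=0.3, scale=4.0, size=(200, 200))
rain[rain < 1.0] = 0.0

# Contiguous precipitating cells form one cluster; label and integrate them.
labels, n_clusters = ndimage.label(rain > 0)
power = ndimage.sum(rain, labels, index=np.arange(1, n_clusters + 1))
print(n_clusters, power.max())           # inputs to the cluster-power PDF
```

A histogram of `power` over many fields would give the cluster-power probability distribution whose scale-free range and high-power cutoff are discussed above.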

  13. Ceres and the terrestrial planets impact cratering record

    NASA Astrophysics Data System (ADS)

    Strom, R. G.; Marchi, S.; Malhotra, R.

    2018-03-01

    Dwarf planet Ceres, the largest object in the Main Asteroid Belt, has a surface that exhibits a range of crater densities for a crater diameter range of 5-300 km. In all areas the shape of the craters' size-frequency distribution is very similar to those of the most ancient heavily cratered surfaces on the terrestrial planets. The most heavily cratered terrain on Ceres covers ∼15% of its surface and has a crater density similar to the highest crater density on <1% of the lunar highlands. This region of higher crater density on Ceres probably records the high impact rate at early times and indicates that the other 85% of Ceres was partly resurfaced after the Late Heavy Bombardment (LHB) at ∼4 Ga. The Ceres cratering record strongly indicates that the period of Late Heavy Bombardment originated from an impactor population whose size-frequency distribution resembles that of the Main Belt Asteroids.

  14. Intermittent turbulence and turbulent structures in LAPD and ET

    NASA Astrophysics Data System (ADS)

    Carter, T. A.; Pace, D. C.; White, A. E.; Gauvreau, J.-L.; Gourdain, P.-A.; Schmitz, L.; Taylor, R. J.

    2006-12-01

    Strongly intermittent turbulence is observed in the shadow of a limiter in the Large Plasma Device (LAPD) and in both the inboard and outboard scrape-off-layer (SOL) in the Electric Tokamak (ET) at UCLA. In LAPD, the amplitude probability distribution function (PDF) of the turbulence is strongly skewed, with density depletion events (or "holes") dominant in the high density region and density enhancement events (or "blobs") dominant in the low density region. Two-dimensional cross-conditional averaging shows that the blobs are detached, outward-propagating filamentary structures with a clear dipolar potential while the holes appear to be part of a more extended turbulent structure. A statistical study of the blobs reveals a typical size of ten times the ion sound gyroradius and a typical velocity of one tenth the sound speed. In ET, intermittent turbulence is observed on both the inboard and outboard midplane.
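
The skewness of the amplitude PDF is the basic diagnostic here. A toy sketch, assuming a hypothetical probe time series built from a Gaussian background plus sparse positive bursts standing in for "blobs":

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Toy probe time series: Gaussian background plus sparse positive bursts
# ("blobs"), yielding a positively skewed amplitude PDF.
n = 20000
isat = rng.normal(size=n)
bursts = rng.random(n) < 0.01
isat[bursts] += rng.exponential(scale=4.0, size=bursts.sum())

print(stats.skew(isat))   # > 0: blob-dominated; < 0 would indicate holes
```

The sign of the skewness separates blob-dominated from hole-dominated regions, as in the LAPD measurements described above.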

  15. On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21

    NASA Technical Reports Server (NTRS)

    Aalfs, David D.

    1995-01-01

    For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighing of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
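
For a concrete flavor of joint frequency weighting of a pair of sequences, a Welch-averaged cross power spectral density (a standard estimator, not the dissertation's joint spectral density itself) can be computed as follows; the sequences and their shared 0.1-cycle/sample component are hypothetical.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

# Two hypothetical random sequences sharing a component at 0.1 cycles/sample.
n = 4096
common = np.sin(2 * np.pi * 0.1 * np.arange(n))
x = common + 0.5 * rng.normal(size=n)
y = 0.8 * common + 0.5 * rng.normal(size=n)

# Welch-averaged cross power spectral density S_xy(f); its magnitude acts as
# a joint frequency weighting of the pair.
f, s_xy = signal.csd(x, y, fs=1.0, nperseg=512)
f_peak = f[np.argmax(np.abs(s_xy))]
print(f_peak)
```

The cross spectrum peaks at the frequency the two sequences share, which is the sense in which it weights frequency content jointly.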

  16. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  17. The non-Gaussian joint probability density function of slope and elevation for a nonlinear gravity wave field. [in ocean surface

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.

    1984-01-01

    On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.

  18. Effect of Non-speckle Echo Signals on Tissue Characteristics for Liver Fibrosis using Probability Density Function of Ultrasonic B-mode image

    NASA Astrophysics Data System (ADS)

    Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki

    To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses a probability density function of echo signals from liver fibrosis, has been proposed. In this paper, an effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
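
A minimal sketch of a multi-Rayleigh amplitude model as a mixture of Rayleigh densities; the two-component weights and scale parameters below are hypothetical, not fitted tissue values.

```python
import numpy as np

def multi_rayleigh_pdf(x, weights, sigmas):
    """Mixture-of-Rayleigh PDF: sum_i w_i * (x / s_i**2) * exp(-x**2 / (2 s_i**2))."""
    w = np.asarray(weights, float)
    s = np.asarray(sigmas, float)
    x = np.asarray(x, float)[:, None]
    comps = (x / s**2) * np.exp(-(x**2) / (2 * s**2))
    return comps @ (w / w.sum())

# Two hypothetical components: normal liver tissue plus brighter fibrotic tissue.
x = np.linspace(0.0, 10.0, 2001)
pdf = multi_rayleigh_pdf(x, weights=[0.7, 0.3], sigmas=[0.8, 2.0])
print(np.trapz(pdf, x))
```

Fitting such a mixture to echo amplitudes, and flagging samples the model explains poorly, is the spirit of the non-speckle removal described above.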

  19. Novel Flexible Plastic-Based Solar Cells

    DTIC Science & Technology

    2009-11-30

    the high mobility of charge carriers in pentacene probably due to conducting domains provided by it. 2. Multi-Exciton Generation (MEG) in Devices...with simulating the model including recombination rate, trap density and trapped charge induced electric field. ...to charge extraction and transport in hybrid nanoparticle:polymer photovoltaic devices. In particular, we demonstrated (i) enhancement of charge

  20. Laboratory-Tutorial Activities for Teaching Probability

    ERIC Educational Resources Information Center

    Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.

    2006-01-01

    We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…

  1. Derivation of an eigenvalue probability density function relating to the Poincaré disk

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Krishnapur, Manjunath

    2009-09-01

    A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A-1B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.

  2. A very efficient approach to compute the first-passage probability density function in a time-changed Brownian model: Applications in finance

    NASA Astrophysics Data System (ADS)

    Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide

    2016-12-01

    We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
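
A minimal sketch of the ingredients: discretizing a first-kind Volterra system by quadrature and regularizing the resulting ill-conditioned linear system. The kernel, data, and Tikhonov regularization below form a hypothetical test case, not the authors' ad hoc procedure.

```python
import numpy as np

# Hypothetical test case: k(t, s) = 1 and g(t) = 1 - cos(t), whose exact
# solution of  int_0^t k(t, s) f(s) ds = g(t)  is f(s) = sin(s).
n, T = 400, 3.0
t = np.linspace(0.0, T, n + 1)
h = t[1] - t[0]

# Lower-triangular trapezoidal quadrature matrix: A[i, j] ~ h * k(t_i, s_j).
A = np.tril(np.full((n + 1, n + 1), h))
A[:, 0] *= 0.5
A[np.arange(n + 1), np.arange(n + 1)] *= 0.5    # end-point weights h/2
g = 1.0 - np.cos(t)

# Tikhonov-regularized normal equations: (A^T A + lam I) f = A^T g.
lam = 1e-8
f = np.linalg.solve(A.T @ A + lam * np.eye(n + 1), A.T @ g)
print(np.max(np.abs(f - np.sin(t))))
```

First-kind Volterra equations are ill-posed, so some regularization is needed even for exact data; here a small Tikhonov term stabilizes the solve.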

  3. Committor of elementary reactions on multistate systems

    NASA Astrophysics Data System (ADS)

    Király, Péter; Kiss, Dóra Judit; Tóth, Gergely

    2018-04-01

    In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation needs projection onto partial reactions. The elementary-reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize several elementary-reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal-energy-path and the average-energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary-reaction committor, the probability densities, the potential-energy/committor, and the free-energy/committor curves are presented.

  4. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.

  5. Using areas of known occupancy to identify sources of variation in detection probability of raptors: taking time lowers replication effort for surveys.

    PubMed

    Murn, Campbell; Holloway, Graham J

    2016-10-01

    Species occurring at low density can be difficult to detect and if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture Trigonoceps occipitalis ) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033) and based on this the mean number of visits required to determine within 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that low detection probabilities of some species means that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. 
Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
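
The visit requirement quoted above follows directly from the averaged detection probability:

```python
import math

# Visits n needed for 95% confidence of absence when each visit detects a
# present bird with probability p: smallest n with (1 - p)**n <= 0.05.
def visits_for_absence(p, confidence=0.95):
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

p_hat = 0.207                      # averaged detection probability reported above
print(visits_for_absence(p_hat))   # -> 13, matching the study's figure
```

The reported 95% interval for p (via its standard error) is what widens this to the 9-20 visit range quoted in the abstract.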

  6. IGM CONSTRAINTS FROM THE SDSS-III/BOSS DR9 Lyα FOREST TRANSMISSION PROBABILITY DISTRIBUTION FUNCTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Khee-Gan; Hennawi, Joseph F.; Spergel, David N.

    2015-02-01

    The Lyα forest transmission probability distribution function (PDF) is an established probe of the intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the transmission PDF from 3393 Baryon Oscillations Spectroscopic Survey (BOSS) quasars from Sloan Digital Sky Survey Data Release 9, and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS transmission PDFs, measured at ⟨z⟩ = [2.3, 2.6, 3.0], are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, γ, and temperature at mean density, T₀, where T(Δ) = T₀Δ^(γ−1). We find that a significant population of partial Lyman-limit systems (LLSs) with a column-density distribution slope of β_pLLS ≈ −2 is required to explain the data at the low-transmission end of the transmission PDF, while uncertainties in the mean Lyα forest transmission affect the high-transmission end. After modeling the LLSs and marginalizing over mean-transmission uncertainties, we find that γ = 1.6 best describes the data over our entire redshift range, although constraints on T₀ are affected by systematic uncertainties. Within our model framework, isothermal or inverted temperature-density relationships (γ ≤ 1) are disfavored at a significance of over 4σ, although this could be somewhat weakened by cosmological and astrophysical uncertainties that we did not model.

  7. Multiple Streaming and the Probability Distribution of Density in Redshift Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui, Lam; Kofman, Lev; Shandarin, Sergei F.

    2000-07-01

    We examine several aspects of redshift distortions by expressing the redshift-space density in terms of the eigenvalues and orientation of the local Lagrangian deformation tensor. We explore the importance of multiple streaming using the Zeldovich approximation (ZA), and compute the average number of streams in both real and redshift space. We find that multiple streaming can be significant in redshift space but negligible in real space, even at moderate values of the linear fluctuation amplitude (σ_l ≲ 1). Moreover, unlike their real-space counterparts, redshift-space multiple streams can flow past each other with minimal interactions. Such nonlinear redshift-space effects, which are physically distinct from the fingers-of-God due to small-scale virialized motions, might in part explain the well-known departure of redshift distortions from the classic linear prediction by Kaiser, even at relatively large scales where the corresponding density field in real space is well described by linear perturbation theory. We also compute, using the ZA, the probability distribution function (PDF) of the density, as well as S₃, in real and redshift space, and compare it with the PDF measured from N-body simulations. The role of caustics in defining the character of the high-density tail is examined. We find that (non-Lagrangian) smoothing, due to both finite resolution or discreteness and small-scale velocity dispersions, is very effective in erasing caustic structures, unless the initial power spectrum is sufficiently truncated. © 2000 The American Astronomical Society.

  8. How many tigers Panthera tigris are there in Huai Kha Khaeng Wildlife Sanctuary, Thailand? An estimate using photographic capture-recapture sampling

    USGS Publications Warehouse

    Simcharoen, S.; Pattanavibool, A.; Karanth, K.U.; Nichols, J.D.; Kumar, N.S.

    2007-01-01

    We used capture-recapture analyses to estimate the density of a tiger Panthera tigris population in the tropical forests of Huai Kha Khaeng Wildlife Sanctuary, Thailand, from photographic capture histories of 15 distinct individuals. The closure test results (z = 0.39, P = 0.65) provided some evidence in support of the demographic closure assumption. Fit of eight plausible closed models to the data indicated more support for model Mh, which incorporates individual heterogeneity in capture probabilities. This model generated an average capture probability p̂ = 0.42 and an abundance estimate N̂ (SE[N̂]) = 19 (9.65) tigers. The sampled area Â(W) (SE[Â(W)]) = 477.2 (58.24) km² yielded a density estimate D̂ (SE[D̂]) = 3.98 (0.51) tigers per 100 km². Huai Kha Khaeng Wildlife Sanctuary could therefore hold 113 tigers and the entire Western Forest Complex c. 720 tigers. Although based on field protocols that constrained us to use sub-optimal analyses, this estimated tiger density is comparable to tiger densities in Indian reserves that support moderate prey abundances. However, tiger densities in well-protected Indian reserves with high prey abundances are three times higher. If given adequate protection we believe that the Western Forest Complex of Thailand could potentially harbour >2,000 wild tigers, highlighting its importance for global tiger conservation. The monitoring approaches we recommend here would be useful for managing this tiger population.
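
The reported density follows from the abundance and sampled-area estimates:

```python
# Density from the reported abundance and effectively sampled area.
N_hat = 19.0        # estimated abundance (tigers)
A_hat = 477.2       # effectively sampled area (km^2)

D_hat = N_hat / A_hat * 100.0    # tigers per 100 km^2
print(round(D_hat, 2))           # -> 3.98, as reported
```

The reported standard error of 0.51 additionally accounts for the uncertainty in both N̂ and Â and is not reproduced by this point calculation.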

  9. Uncertainty quantification in LES of channel flow

    DOE PAGES

    Safta, Cosmin; Blaylock, Myra; Templeton, Jeremy; ...

    2016-07-12

    Here, in this paper, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that model parameters have an explicit filter width dependence andmore » are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model form inadequacies that need to be accounted for.« less

  10. Optimal estimation for the satellite attitude using star tracker measurements

    NASA Technical Reports Server (NTRS)

    Lo, J. T.-H.

    1986-01-01

    An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.

  11. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.

  12. The Havriliak-Negami relaxation and its relatives: the response, relaxation and probability density functions

    NASA Astrophysics Data System (ADS)

    Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.

    2018-04-01

    We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ0)^α]^(-β) with τ0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ0 = (q-1)^(1/α) (1 < q < 2), we show that for 0 < α < 1 the response functions fα,β(t/τ0) go to the one-sided Lévy stable distributions as q tends to one. Moreover, applying the self-similarity property of the probability densities gα,β(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
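    The frequency-domain pattern above is straightforward to evaluate numerically. The sketch below (an illustration, not the paper's code; the chosen α and β are arbitrary) also checks that the Havriliak-Negami susceptibility reduces to the classical Debye form 1/(1 + iωτ0) when α = β = 1:

```python
import numpy as np

def hn_susceptibility(omega, tau0, alpha, beta):
    """Havriliak-Negami pattern chi(omega) = 1 / (1 + (i*omega*tau0)**alpha)**beta."""
    return 1.0 / (1.0 + (1j * omega * tau0) ** alpha) ** beta

omega = np.logspace(-3, 3, 61)
chi = hn_susceptibility(omega, tau0=1.0, alpha=0.8, beta=0.6)   # illustrative alpha, beta
# Debye limit: alpha = beta = 1 recovers the classical 1 / (1 + i*omega*tau0)
chi_debye_limit = hn_susceptibility(omega, tau0=1.0, alpha=1.0, beta=1.0)
```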

  13. Protein-Protein Interaction Site Predictions with Three-Dimensional Probability Distributions of Interacting Atoms on Protein Surfaces

    PubMed Central

    Chen, Ching-Tai; Peng, Hung-Pin; Jian, Jhih-Wei; Tsai, Keng-Chang; Chang, Jeng-Yih; Yang, Ei-Wen; Chen, Jun-Bo; Ho, Shinn-Ying; Hsu, Wen-Lian; Yang, An-Suei

    2012-01-01

    Protein-protein interactions are key to many biological processes. Computational methodologies devised to predict protein-protein interaction (PPI) sites on protein surfaces are important tools in providing insights into the biological functions of proteins and in developing therapeutics targeting the protein-protein interaction sites. One of the general features of PPI sites is that the core regions from the two interacting protein surfaces are complementary to each other, similar to the interior of proteins in packing density and in the physicochemical nature of the amino acid composition. In this work, we simulated the physicochemical complementarities by constructing three-dimensional probability density maps of non-covalent interacting atoms on the protein surfaces. The interacting probabilities were derived from the interior of known structures. Machine learning algorithms were applied to learn the characteristic patterns of the probability density maps specific to the PPI sites. The trained predictors for PPI sites were cross-validated with the training cases (consisting of 432 proteins) and were tested on an independent dataset (consisting of 142 proteins). The residue-based Matthews correlation coefficient for the independent test set was 0.423; the accuracy, precision, sensitivity, and specificity were 0.753, 0.519, 0.677, and 0.779, respectively. The benchmark results indicate that the optimized machine learning models are among the best predictors in identifying PPI sites on protein surfaces. In particular, the PPI site prediction accuracy increases with increasing size of the PPI site and with increasing hydrophobicity in the amino acid composition of the PPI interface; the core interface regions are more likely to be recognized with high prediction confidence. The results indicate that the physicochemical complementarity patterns on protein surfaces are important determinants in PPIs, and a substantial portion of the PPI sites can be predicted correctly with the physicochemical complementarity features based on the non-covalent interaction data derived from protein interiors. PMID:22701576
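    The reported figures can all be derived from a single confusion matrix. A minimal sketch, using hypothetical residue counts (1,000 interface and 2,848 non-interface residues, chosen here only so that the rates come out close to those quoted above; not counts from the paper):

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, sensitivity, specificity and MCC from a confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # also called recall
    specificity = tn / (tn + fp)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return accuracy, precision, sensitivity, specificity, mcc

# Hypothetical residue counts for illustration only
acc, prec, sens, spec, mcc = classification_metrics(tp=677, fp=627, tn=2221, fn=323)
```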

  14. Effects of carbon dioxide, Nd:YAG and carbon dioxide-Nd:YAG combination lasers at high energy densities on synthetic hydroxyapatite.

    PubMed

    Meurman, J H; Voegel, J C; Rauhamaa-Mäkinen, R; Gasser, P; Thomann, J M; Hemmerle, J; Luomanen, M; Paunio, I; Frank, R M

    1992-01-01

    The aim of this study was to determine the crystalline structure and chemical alterations of synthetic hydroxyapatite after irradiation with either CO2, Nd:YAG or CO2-Nd:YAG combination lasers at high energy densities of 500-3,230 J/cm2. Further, the dissolution kinetics of the lased material were analysed and compared with those of unlased apatite. Electron microscopy showed that the lased material consisted of two kinds of crystals; from the micrographs, their diameters varied from 600 to 1,200 Å and from 3,000 to 6,000 Å, respectively. The larger crystals showed 6.9-Å periodic lattice fringes in the transmission electron microscope. alpha-Tricalcium phosphate (TCP) was identified by X-ray diffraction. Selective-area electron diffraction showed that the large crystals consisted of tricalcium phosphate, while the smaller crystals were probably hydroxyapatite. Assays of dissolution kinetics showed that at these high energy densities the lased material dissolved more rapidly than unlased synthetic hydroxyapatite, owing to the higher solubility of TCP.

  15. Carbonic fluid inclusions in amphibolite-facies pelitic schists from Bodonch area, western Mongolian Altai

    NASA Astrophysics Data System (ADS)

    Zorigtkhuu, Oyun-Erdene; Tsunogae, Toshiaki; Dash, Batulzii

    We report the first fluid inclusion data on amphibolite-facies pelitic schists from the Bodonch area of the western Mongolian Altai in the Central Asian Orogenic Belt. Three categories of fluid inclusions have been observed in quartz: dominant primary and secondary inclusions, and less abundant pseudosecondary inclusions. The melting temperatures of all categories of inclusions lie in the narrow range of -57.5 °C to -56.6 °C, close to the triple point of pure CO2. Homogenization of the fluids into the liquid phase occurs at temperatures between -33.3 °C and +19.4 °C, which correspond to densities in the range of 0.78 g/cm3 to 1.09 g/cm3. The estimated CO2 isochores for the primary and pseudosecondary high-density inclusions are broadly consistent with the peak metamorphic conditions of the studied area (6.3-7.3 kbar at 655 °C). The results of this study, together with the primary and pseudosecondary nature of the inclusions, indicate that CO2 was the dominant fluid component during the peak amphibolite-facies metamorphism of the study area. The examined quartz grains are texturally associated with biotite, kyanite and staurolite, which are regarded as high-grade minerals formed during prograde to peak metamorphism. Therefore, the quartz probably formed by high-grade metamorphism, and the primary fluid inclusions trapped in the minerals probably preserve fluids from around peak metamorphism.

  16. Non-Linear Interactions between Consumers and Flow Determine the Probability of Plant Community Dominance on Maine Rocky Shores

    PubMed Central

    Silliman, Brian R.; McCoy, Michael W.; Trussell, Geoffrey C.; Crain, Caitlin M.; Ewanchuk, Patrick J.; Bertness, Mark D.

    2013-01-01

    Although consumers can strongly influence community recovery from disturbance, few studies have explored the effects of consumer identity and density and how they may vary across abiotic gradients. On rocky shores in Maine, recent experiments suggest that recovery of plant- or animal-dominated community states is governed by rates of water movement and consumer pressure. To further elucidate the mechanisms of consumer control, we examined the species-specific and density-dependent effects of rocky shore consumers (crabs and snails) on community recovery under both high (mussel dominated) and low flow (plant dominated) conditions. By partitioning the direct impacts of predators (crabs) and grazers (snails) on community recovery across a flow gradient, we found that grazers, but not predators, are likely the primary agent of consumer control and that their impact is highly non-linear. Manipulating snail densities revealed that herbivorous, bulldozing snails (Littorina littorea) alone can control recovery of high and low flow communities. After ∼1.5 years of recovery, snail density explained a significant amount of the variation in macroalgal coverage at low flow sites and also mussel recovery at high flow sites. These density-dependent grazer effects were both non-linear and flow-dependent, with low abundance thresholds needed to suppress plant community recovery, and much higher levels needed to control mussel bed development. Our study suggests that consumer density and identity are key in regulating both plant and animal community recovery and that physical conditions can determine the functional forms of these consumer effects. PMID:23940510

  17. Determination of the mass of globular cluster X-ray sources

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Hertz, P.; Steiner, J. E.; Murray, S. S.; Lightman, A. P.

    1984-01-01

    The precise positions of the luminous X-ray sources in eight globular clusters have been measured with the Einstein X-Ray Observatory. When combined with similarly precise measurements of the dynamical centers and core radii of the globular clusters, the distribution of the X-ray source mass is determined to be in the range 0.9-1.9 solar mass. The X-ray source positions and the detailed optical studies indicate that (1) the sources are probably all of similar mass, (2) the gravitational potentials in these high-central density clusters are relatively smooth and isothermal, and (3) the X-ray sources are compact binaries and are probably formed by tidal capture.

  18. Attributes of seasonal home range influence choice of migratory strategy in white-tailed deer

    USGS Publications Warehouse

    Henderson, Charles R.; Mitchell, Michael S.; Myers, Woodrow L.; Lukacs, Paul M.; Nelson, Gerald P.

    2018-01-01

    Partial migration is a common life-history strategy among ungulates living in seasonal environments. The decision to migrate or remain on a seasonal range may be influenced strongly by access to high-quality habitat. We evaluated the influence of access to winter habitat of high quality on the probability of a female white-tailed deer (Odocoileus virginianus) migrating to a separate summer range and the effects of this decision on survival. We hypothesized that deer with home ranges of low quality in winter would have a high probability of migrating, and that survival of an individual in winter would be influenced by the quality of their home range in winter. We radiocollared 67 female white-tailed deer in 2012 and 2013 in eastern Washington, United States. We estimated home range size in winter using a kernel density estimator; we assumed the size of the home range was inversely proportional to its quality and the proportion of crop land within the home range was proportional to its quality. Odds of migrating from winter ranges increased by 3.1 per unit increase in home range size and decreased by 0.29 per unit increase in the proportion of crop land within a home range. Annual survival rate for migrants was 0.85 (SD = 0.05) and 0.84 (SD = 0.09) for residents. Our finding that an individual with a low-quality home range in winter is likely to migrate to a separate summer range accords with the hypothesis that competition for a limited amount of home ranges of high quality should result in residents having home ranges of higher quality than migrants in populations experiencing density dependence. We hypothesize that density-dependent competition for high-quality home ranges in winter may play a leading role in the selection of migration strategy by female white-tailed deer.
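    The kernel-density home-range estimate used above can be sketched as follows. The relocation data, bandwidth and grid are all hypothetical, and the area of the 95% density isopleth stands in for home-range size:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical winter relocations of one deer (km east/north of an origin)
pts = rng.normal(0.0, 1.0, size=(200, 2))

# Fixed-bandwidth Gaussian kernel density evaluated on a grid
h = 0.4                                    # bandwidth in km (an assumption)
xs = np.linspace(-4.0, 4.0, 81)
cell = (xs[1] - xs[0]) ** 2                # area of one grid cell (km^2)
gx, gy = np.meshgrid(xs, xs)
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
dens = np.exp(-0.5 * d2 / h ** 2).sum(axis=1) / (2.0 * np.pi * h ** 2 * len(pts))

# 95% kernel home range: smallest set of cells containing 95% of the density mass
order = np.argsort(dens)[::-1]
mass = np.cumsum(dens[order]) * cell
area95 = cell * (np.searchsorted(mass, 0.95) + 1)   # home-range size in km^2
```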

  19. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, limiting real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small sample size effects and more accurate in sEMG PDF shape screening applications.
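    The small-sample problem with moment-based HOS parameters is easy to reproduce by Monte Carlo. A minimal sketch (the sample sizes and Gaussian test data are illustrative, not the paper's protocol):

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x):
    """Moment-based sample kurtosis (Fisher convention: zero for a Gaussian)."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

# Spread of the estimator across 2000 Gaussian samples at two sample sizes
small = np.array([excess_kurtosis(rng.normal(size=50)) for _ in range(2000)])
large = np.array([excess_kurtosis(rng.normal(size=5000)) for _ in range(2000)])
# The small-sample estimates scatter far more widely around the true value 0,
# which is exactly the error that limits real-time PDF shape monitoring.
```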

  20. Pleurochrysis pseudoroscoffensis (Prymnesiophyceae) blooms on the surface of the Salton Sea, California

    USGS Publications Warehouse

    Reifel, K.M.; McCoy, M.P.; Tiffany, M.A.; Rocke, T.E.; Trees, C.C.; Barlow, S.B.; Faulkner, D.J.; Hurlbert, S.H.

    2001-01-01

    Dense populations of the coccolithophore Pleurochrysis pseudoroscoffensis were found in surface films at several locations around the Salton Sea in February–August 1999. An unidentified coccolithophorid was also found in low densities in earlier studies of the lake (1955–1956). To our knowledge, this is the first record of this widespread marine species in any lake. Samples taken from surface films typically contained high densities of one or two other phytoplankton species as well as high densities of the coccolithophore. Presence or absence of specific algal pigments was used to validate direct cell counts. In a preliminary screen using a brine shrimp lethality assay, samples showed moderate activity. Extracts were then submitted to a mouse bioassay, and no toxic activity was observed. These results indicate that blooms of P. pseudoroscoffensis are probably not toxic to vertebrates and do not contribute to the various mortality events of birds and fish that occur in the Salton Sea.

  1. An improved probabilistic approach for linking progenitor and descendant galaxy populations using comoving number density

    NASA Astrophysics Data System (ADS)

    Wellons, Sarah; Torrey, Paul

    2017-06-01

    Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift zf probabilities of being progenitors/descendants of a galaxy population at another redshift z0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
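    A minimal sketch of the probabilistic weighting idea, assuming a Gaussian probability distribution function in log cumulative number density; the evolved median and dispersion below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def progenitor_weights(log_n, median, sigma):
    """Gaussian PDF in log10 cumulative number density, normalized to unit sum.

    median and sigma describe where the tracked z0 population has drifted to
    and how far it has spread by redshift zf (illustrative inputs only)."""
    w = np.exp(-0.5 * ((log_n - median) / sigma) ** 2)
    return w / w.sum()

log_n = np.linspace(-5.0, -2.0, 300)        # mock zf galaxy number densities
w = progenitor_weights(log_n, median=-3.4, sigma=0.3)
# Any physical property can now be averaged with these probability weights:
mean_log_n = float((w * log_n).sum())
```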

  2. Radiative transition of hydrogen-like ions in quantum plasma

    NASA Astrophysics Data System (ADS)

    Hu, Hongwei; Chen, Zhanbin; Chen, Wencong

    2016-12-01

    At fusion plasma electron temperatures of 1 × 10^3 to 1 × 10^7 K and number densities of 1 × 10^28 to 1 × 10^31 m^-3, the excited states and radiative transitions of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable for describing the fusion plasma than the Debye screening model. The relativistic correction to the bound-state energies of low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude in the same number density regime.

  3. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. Existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
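    The Monte Carlo side of such a comparison can be sketched with a linear surrogate generator driven by an Ornstein-Uhlenbeck (time-correlated) power input; the model and all parameters below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)
dt, n_steps, n_paths = 0.01, 2000, 5000
tau, sigma = 0.5, 1.0        # correlation time and strength of the power input

x = np.zeros(n_paths)        # generator state (linear surrogate model)
eta = np.zeros(n_paths)      # time-correlated power input (Ornstein-Uhlenbeck)
for _ in range(n_steps):
    eta += (-eta / tau) * dt + sigma * np.sqrt(2.0 * dt / tau) * rng.normal(size=n_paths)
    x += (-x + eta) * dt     # Euler step of dx/dt = -x + eta(t)

# Stationary statistics of the joint density, e.g. var(eta) -> sigma^2 and
# var(x) -> sigma^2 * tau / (1 + tau), are what a PDF-method PDE solution
# would be checked against.
```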

  4. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  5. Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems

    NASA Technical Reports Server (NTRS)

    Holmes, J. K.; Woo, K. T.

    1978-01-01

    The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
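    The benefit of exploiting a non-uniform prior can be illustrated by comparing the expected number of code-phase cells examined under an end-to-end sweep versus a sweep that visits high-probability cells first. The discretization and prior width below are assumptions, so the resulting saving differs from the 41% quoted for the paper's sample case:

```python
import numpy as np

N = 1000                                     # number of code-phase cells
cells = np.arange(N)
# Discretized Gaussian a priori PDF of the code-phase uncertainty (illustrative)
p = np.exp(-0.5 * ((cells - N / 2) / (N / 8)) ** 2)
p /= p.sum()

# Expected number of cells examined before hitting the true phase:
uniform_sweep = float((p * (cells + 1)).sum())              # sweep end to end
best_first = float((np.sort(p)[::-1] * (cells + 1)).sum())  # most probable cells first
savings = 1.0 - best_first / uniform_sweep
```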

  6. RADC Multi-Dimensional Signal-Processing Research Program.

    DTIC Science & Technology

    1980-09-30

    Table-of-contents fragments: Formulation; 3.2.2 Methods of Accelerating Convergence; 3.2.3 Application to Image Deblurring; 3.2.4 Extensions; 3.3 Convergence of Iterative Signal... Text excerpt: ...noise-driven linear filters permit development of the joint probability density function or likelihood function for the image. With an expression... spatial linear filter driven by white noise (see Fig. 1). If the probability density function for the white noise is known... [Figure caption: Model for image]

  7. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

    Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
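    The identifiability question can be illustrated with a drastically simplified two-state channel (not one of the paper's models): if the cost function depends on the rates only through the stationary open probability, then only the ratio of the rates is identifiable:

```python
# Two-state channel C <-> O with opening rate k_co and closing rate k_oc;
# the stationary open probability is k_co / (k_co + k_oc).
def open_probability(k_co, k_oc):
    return k_co / (k_co + k_oc)

# "Inversion" via a cost function against a pseudo-data statistic:
target = 0.25                          # open probability from (pseudo) data

def cost(k_co, k_oc):
    return (open_probability(k_co, k_oc) - target) ** 2

# Only the ratio k_co / k_oc is identifiable from this statistic alone:
# cost(1.0, 3.0) and cost(10.0, 30.0) are both zero, so time-resolved
# information (as in the paper's PDE-based densities) is needed to pin
# down the absolute rates.
```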

  8. Probability density function of non-reactive solute concentration in heterogeneous porous formations

    Treesearch

    Alberto Bellin; Daniele Tonina

    2007-01-01

    Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...

  9. Probability modeling of high flow extremes in Yingluoxia watershed, the upper reaches of Heihe River basin

    NASA Astrophysics Data System (ADS)

    Li, Zhanling; Li, Zhanjie; Li, Chengcheng

    2014-05-01

    Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Most such studies in China concern basins in the humid and semi-humid south and east, while for the inland river basins, which occupy about 35% of the country's area, such analyses remain scarce, partly owing to limited data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high flow extremes in the upper reach of Heihe River basin, the second largest inland river basin in China, by using the peaks over threshold (POT) method and the Generalized Pareto Distribution (GPD), in which the selection of the threshold and the inherent assumptions for POT series are elaborated in detail. For comparison, other widely used probability distributions, including the generalized extreme value (GEV), Lognormal, Log-logistic and Gamma distributions, are employed as well. Maximum likelihood is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, stability features of model parameters, the return level plot and the inherent independence assumption of POT series, an optimum threshold of 340 m3/s is finally determined for high flow extremes in Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high flow extremes in the study area. The estimated high flows for long return periods demonstrate that, as the return period increases, the return level estimates become more uncertain. The frequency of high flow extremes exhibits a very slight but not significant decreasing trend from 1978 to 2008, while the intensity of such extremes is comparatively increasing, especially for the higher return levels.
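    A minimal POT/GPD sketch using synthetic excesses over the 340 m3/s threshold. The excess distribution, the method-of-moments fit and the exceedance rate are illustrative assumptions (the paper fits by maximum likelihood to real daily flows):

```python
import numpy as np

rng = np.random.default_rng(7)
u = 340.0                        # threshold from the abstract (m^3/s)
scale_true = 60.0                # synthetic excess scale, an assumption
excess = rng.exponential(scale_true, size=500)   # synthetic POT excesses

# Method-of-moments GPD fit: mean^2/var = 1 - 2*xi  =>  shape xi, scale sigma
m, v = excess.mean(), excess.var()
xi = 0.5 * (1.0 - m * m / v)
sigma = m * (1.0 - xi)

def return_level(T, lam=5.0):
    """T-year return level, with lam threshold exceedances per year on average."""
    if abs(xi) < 1e-6:           # xi -> 0 limit (exponential tail)
        return u + sigma * np.log(lam * T)
    return u + sigma / xi * ((lam * T) ** xi - 1.0)

q100 = return_level(100.0)       # e.g. a 100-year high-flow estimate
```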

  10. Using Prediction Markets to Generate Probability Density Functions for Climate Change Risk Assessment

    NASA Astrophysics Data System (ADS)

    Boslough, M.

    2011-12-01

    Climate-related uncertainty is traditionally presented as an error bar, but it is becoming increasingly common to express it in terms of a probability density function (PDF). PDFs are a necessary component of probabilistic risk assessments, for which simple "best estimate" values are insufficient. Many groups have generated PDFs for climate sensitivity using a variety of methods. These PDFs are broadly consistent, but vary significantly in their details. One axiom of the verification and validation community is, "codes don't make predictions, people make predictions." This is a statement of the fact that subject domain experts generate results using assumptions within a range of epistemic uncertainty and interpret them according to their expert opinion. Different experts with different methods will arrive at different PDFs. For effective decision support, a single consensus PDF would be useful. We suggest that market methods can be used to aggregate an ensemble of opinions into a single distribution that expresses the consensus. Prediction markets have been shown to be highly successful at forecasting the outcome of events ranging from elections to box office returns. In prediction markets, traders can take a position on whether some future event will or will not occur. These positions are expressed as contracts that are traded in a double-auction market; the aggregated price can be interpreted as a consensus probability that the event will take place. Since climate sensitivity cannot directly be measured, it cannot be predicted. However, the changes in global mean surface temperature are a direct consequence of climate sensitivity, changes in forcing, and internal variability. Viable prediction markets require an undisputed event outcome on a specific date. Climate-related markets exist on Intrade.com, an online trading exchange. One such contract is titled "Global Temperature Anomaly for Dec 2011 to be greater than 0.65 Degrees C." 
Settlement is based on global temperature anomaly data published by NASA GISS. Typical climate contracts predict the probability of a specified future temperature, but not the probability density or best estimate. One way to generate a probability distribution would be to create a family of contracts over a range of specified temperatures and interpret the price of each contract as its exceedance probability. The resulting plot of probability vs. anomaly is the market-based cumulative distribution function. The best estimate can be determined by interpolation, and the market-based uncertainty estimate can be based on the spread. One requirement for an effective prediction market is liquidity. Climate contracts are currently considered somewhat of a novelty and often lack sufficient liquidity, but climate change has the potential to generate both tremendous losses for some (e.g., agricultural collapse and extreme weather events) and wealth for others (access to natural resources and trading routes). Use of climate markets by large stakeholders has the potential to generate the liquidity necessary to make them viable. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DoE's NNSA under contract DE-AC04-94AL85000.
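    The contract-family construction can be sketched directly; the thresholds and prices below are hypothetical, not quotes from any exchange:

```python
import numpy as np

# Hypothetical family of contracts "anomaly > x": price = exceedance probability
thresholds = np.array([0.45, 0.55, 0.65, 0.75, 0.85])   # deg C, illustrative
prices = np.array([0.95, 0.80, 0.50, 0.20, 0.05])       # consensus P(T > x)

cdf = 1.0 - prices                      # market-based cumulative distribution
# Best estimate: interpolate the anomaly where the CDF crosses 0.5
best = np.interp(0.5, cdf, thresholds)
# Spread: e.g. a central 80% interval read off the same curve
lo, hi = np.interp([0.1, 0.9], cdf, thresholds)
```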

  11. Using occupancy models to investigate the prevalence of ectoparasitic vectors on hosts: an example with fleas on prairie dogs

    USGS Publications Warehouse

    Eads, David A.; Biggins, Dean E.; Doherty, Paul F.; Gage, Kenneth L.; Huyvaert, Kathryn P.; Long, Dustin H.; Antolin, Michael F.

    2013-01-01

    Ectoparasites are often difficult to detect in the field. We developed a method that can be used with occupancy models to estimate the prevalence of ectoparasites on hosts, and to investigate factors that influence rates of ectoparasite occupancy while accounting for imperfect detection. We describe the approach using a study of fleas (Siphonaptera) on black-tailed prairie dogs (Cynomys ludovicianus). During each primary occasion (monthly trapping events), we combed a prairie dog three consecutive times to detect fleas (15 s/combing). We used robust design occupancy modeling to evaluate hypotheses for factors that might correlate with the occurrence of fleas on prairie dogs, and factors that might influence the rate at which prairie dogs are colonized by fleas. Our combing method was highly effective; dislodged fleas fell into a tub of water and could not escape, and there was an estimated 99.3% probability of detecting a flea on an occupied host when using three combings. While overall detection was high, the probability of detection was always <1.00 during each primary combing occasion, highlighting the importance of considering imperfect detection. The combing method (removal of fleas) caused a decline in detection during primary occasions, and we accounted for that decline to avoid inflated estimates of occupancy. Regarding prairie dogs, flea occupancy was heightened in old/natural colonies of prairie dogs, and on hosts that were in poor condition. Occupancy was initially low in plots with high densities of prairie dogs, but, as the study progressed, the rate of flea colonization increased in plots with high densities of prairie dogs in particular. Our methodology can be used to improve studies of ectoparasites, especially when the probability of detection is low. Moreover, the method can be modified to investigate the co-occurrence of ectoparasite species, and community level factors such as species richness and interspecific interactions.
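    The quoted 99.3% three-combing detection probability implies a per-combing probability of roughly 0.81 under the simplifying assumption of equal, independent combings (the abstract notes detection actually declines across combings as fleas are removed, so this is only a back-of-the-envelope check):

```python
# Solve 1 - (1 - p)**3 = 0.993 for the per-combing detection probability p
p_single = 1.0 - 0.007 ** (1.0 / 3.0)        # ~0.81 per combing
p_three = 1.0 - (1.0 - p_single) ** 3        # recovers the quoted 0.993
```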

  12. On-line prognosis of fatigue crack propagation based on Gaussian weight-mixture proposal particle filter.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo

    2018-01-01

    Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity. It is a challenging task because of uncertainties that arise from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has proved to be a powerful tool for prognostic problems affected by uncertainties. However, most studies adopt the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided-wave-based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on fatigue tests of attachment lugs, which are important joint components in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
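    The mixture-proposal idea can be sketched in a few lines. This is an illustrative toy (the crack-growth transition model, noise levels, and mixing weight `alpha` are all assumed stand-ins, not the paper's model): each particle is drawn either from the transition pdf or from a Gaussian centred on the new measurement, and the importance weight divides by the mixture density.

```python
import math
import random

# Toy particle-filter step with a mixture importance density: with weight
# alpha draw from the transition pdf, otherwise draw near the measurement z.
def gauss_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def pf_step(particles, weights, z, alpha=0.5, q=0.05, r=0.1, growth=0.02):
    new_p, new_w = [], []
    for a, w in zip(particles, weights):
        mean_trans = a * (1.0 + growth)          # assumed toy transition model
        if random.random() < alpha:
            a_new = random.gauss(mean_trans, q)  # draw from transition pdf
        else:
            a_new = random.gauss(z, r)           # draw near the measurement
        lik = gauss_pdf(z, a_new, r)             # measurement likelihood
        trans = gauss_pdf(a_new, mean_trans, q)  # transition density
        prop = alpha * trans + (1 - alpha) * gauss_pdf(a_new, z, r)
        new_p.append(a_new)
        new_w.append(w * lik * trans / prop)     # importance weight
    s = sum(new_w)
    return new_p, [w / s for w in new_w]

random.seed(0)
parts = [1.0 + 0.01 * i for i in range(100)]     # hypothetical crack lengths
wts = [1.0 / 100] * 100
parts, wts = pf_step(parts, wts, z=1.6)          # measured crack length
est = sum(p * w for p, w in zip(parts, wts))     # posterior-mean estimate
print(round(est, 1))
```

Because part of the proposal mass sits on the measurement, particles are steered toward the informative region, which is the degeneracy-mitigation mechanism the abstract describes.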

  13. Cellular automata models for diffusion of information and highway traffic flow

    NASA Astrophysics Data System (ADS)

    Fuks, Henryk

    In the first part of this work we study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parameterized by the speed limit m and by a second parameter k that represents the degree of 'anticipatory driving'. We compare two driving strategies with identical maximum throughput: 'conservative' driving with a high speed limit and 'anticipatory' driving with a low speed limit. These two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of generalized traffic rules and examine limitations of the maximum achievable throughput. Possible modifications of the model are considered. For rule 184, we present exact calculations of the order parameter in the transition from the moving phase to the jammed phase using the method of preimage counting, and use this result to construct a solution to the density classification problem. In the second part we propose a probabilistic cellular automaton model for the spread of innovations, rumors, news, etc., in a social system. We start from simple deterministic models, for which exact expressions for the density of adopters are derived. For a more realistic model, based on probabilistic cellular automata, we study the influence of the range of interaction R on the shape of the adoption curve. When the probability of adoption is proportional to the local density of adopters, and individuals can drop the innovation with some probability p, the system exhibits a second-order phase transition. The critical line separating the region of parameter space in which the asymptotic density of adopters is positive from the region where it is zero converges toward the mean-field line as the range of interaction increases. In the region between the R=1 critical line and the mean-field line, the asymptotic density of adopters depends on R, becoming zero if R is smaller than some critical value. This result demonstrates the importance of connectivity in the diffusion of information. We also define a new class of automata networks which incorporates non-local interactions, and discuss its applicability in modeling the diffusion of innovations.
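    The base model of the first part, cellular automaton rule 184, takes only a few lines to state; a minimal sketch (velocity-1 version on a periodic road):

```python
# Rule 184: each car (1) moves one cell to the right iff the cell ahead is
# empty; all cars update in parallel.  Car number is conserved.
def rule184_step(cells):
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        if cells[i] == 1 and cells[(i + 1) % n] == 0:
            nxt[(i + 1) % n] = 1          # car advances
        elif cells[i] == 1 and cells[(i + 1) % n] == 1:
            nxt[i] = 1                    # blocked car stays put
    return nxt

road = [1, 1, 0, 0, 1, 0, 0, 0]
print(rule184_step(road))                 # [1, 0, 1, 0, 0, 1, 0, 0]
```

Below density 1/2 every car eventually moves freely, while above 1/2 jams persist; this is the moving-to-jammed transition whose order parameter the text computes exactly.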

  14. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
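    As a concrete reminder of the first estimator compared, here is a hedged sketch of the kernel estimator with its scaling factor: a Silverman-style rule of thumb stands in here for the paper's automatic data-driven choice, which it does not reproduce.

```python
import math

# Gaussian kernel density estimate with scaling factor (bandwidth) h,
# plus a simple rule-of-thumb h chosen from the sample alone.
def kde(x, data, h):
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - xi) / h) for xi in data) / (len(data) * h)

def rule_of_thumb_h(data):
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 1.06 * sd * n ** (-0.2)       # Silverman-style heuristic

data = [0.1, 0.3, 0.35, 0.5, 0.9, 1.1, 1.15, 1.3]   # toy sample
h = rule_of_thumb_h(data)
density_at_half = kde(0.5, data, h)
print(round(h, 3), round(density_at_half, 3))
```

The estimate integrates to one by construction; the open problem the abstract addresses is exactly how to pick h from the random sample alone.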

  15. Analysis of Muon Induced Neutrons in Detecting High Z Nuclear Materials

    DTIC Science & Technology

    2015-03-01

    mass distributions, delayed fission probabilities, and prompt-to-delayed fission ratios [16]. ... Muon Catalyzed Fusion: fusion occurs when two light ... proton number; A is the atomic mass; ρ is the material density; β = v/c, where v is the velocity of the particle and c is the speed of light; ...

  16. New Concept Study for Repair of Bomb-Damaged Runways. Volume I. Concept Identification.

    DTIC Science & Technology

    1979-09-01

    Expanded polystyrene beads would be pneumatically mixed with the cement to form a low-density material. Initially, the ratio of foam to cement would ... the combinations are presented with this concept. PRIMARY MATERIALS: expanded polystyrene foam beads; graded aggregate; quick-setting cement ... probability of success: high. ALTERNATE MATERIALS: expanded polystyrene foam beads; organic binders (furan, methyl methacrylate, epoxy, aminos); graded ...

  17. Is There a Risk of Yellow Fever Virus Transmission in South Asian Countries with Hyperendemic Dengue?

    PubMed Central

    Agampodi, Suneth B.; Wickramage, Kolitha

    2013-01-01

    The fact that yellow fever (YF) has never occurred in Asia remains an "unsolved mystery" in global health. Most countries in Asia with high Aedes aegypti mosquito density are considered "receptive" to YF transmission. Recently, health officials in Sri Lanka issued a public health alert on the potential spread of YF from a migrant group from West Africa. We performed an extensive review of the literature pertaining to the risk of YF in Sri Lanka and the South Asian region to understand the probability of actual risk and to assist health authorities in forming evidence-informed public health policies and practices. Published data from epidemiological, historical, biological, molecular, and mathematical models were harnessed to assess the risk of YF in Asia. Using these data, we examine a number of theories proposed to explain the lack of YF in Asia. Considering the evidence available, we conclude that the probable risk of local transmission of YF is extremely low in Sri Lanka and other South Asian countries despite a high Aedes aegypti density and the associated dengue burden. This does not, however, exclude the future possibility of transmission in Asia, especially considering the rapid influx of travelers from endemic areas arriving in Sri Lanka, as we report. PMID:24367789

  18. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process

    PubMed Central

    Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.

    2016-01-01

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on geomorphological analysis of bathymetric data from the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures. PMID:27265878
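    The core idea behind diffusion-based KDE can be illustrated in one dimension: evolving point masses under the heat equation for time t is equivalent to a Gaussian KDE with bandwidth sqrt(2t). The sketch below is a toy with hypothetical vent coordinates; the published method adds boundary handling and automatic selection of t, which are omitted here.

```python
import math

# Toy diffusion-style KDE: heat-equation evolution of point masses for time
# t equals a Gaussian KDE with bandwidth h = sqrt(2 t).
def diffusion_kde(x, events, t):
    h = math.sqrt(2.0 * t)
    norm = 1.0 / (len(events) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-((x - e) ** 2) / (2.0 * h * h)) for e in events)

vents = [0.0, 0.2, 0.25, 1.4]          # hypothetical 1-D vent coordinates
print(round(diffusion_kde(0.2, vents, t=0.02), 3))
```

Susceptibility is then read off as relative density: locations near the vent cluster score higher than isolated ones.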

  19. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process.

    PubMed

    Galindo, I; Romero, M C; Sánchez, N; Morales, J M

    2016-06-06

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on geomorphological analysis of bathymetric data from the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.

  20. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process

    NASA Astrophysics Data System (ADS)

    Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.

    2016-06-01

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on geomorphological analysis of bathymetric data from the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.

  1. Landscape configurational heterogeneity by small-scale agriculture, not crop diversity, maintains pollinators and plant reproduction in western Europe.

    PubMed

    Hass, Annika L; Kormann, Urs G; Tscharntke, Teja; Clough, Yann; Baillod, Aliette Bosem; Sirami, Clélia; Fahrig, Lenore; Martin, Jean-Louis; Baudry, Jacques; Bertrand, Colette; Bosch, Jordi; Brotons, Lluís; Burel, Françoise; Georges, Romain; Giralt, David; Marcos-García, María Á; Ricarte, Antonio; Siriwardena, Gavin; Batáry, Péter

    2018-02-14

    Agricultural intensification is one of the main causes of the current biodiversity crisis. While reversing habitat loss on agricultural land is challenging, increasing farmland configurational heterogeneity (higher field border density) and farmland compositional heterogeneity (higher crop diversity) has been proposed to counteract some habitat loss. Here, we tested whether increased farmland configurational and compositional heterogeneity promote wild pollinators and plant reproduction in 229 landscapes located in four major western European agricultural regions. High field border density consistently increased wild bee abundance and seed set of radish (Raphanus sativus), probably through enhanced connectivity. In particular, we demonstrate the importance of crop-crop borders for pollinator movement, as an additional experiment showed higher transfer of a pollen analogue along crop-crop borders than across fields or along semi-natural crop borders. By contrast, high crop diversity reduced bee abundance, probably due to an increase of crop types with particularly intensive management. This highlights the importance of crop identity when higher crop diversity is promoted. Our results show that small-scale agricultural systems can boost pollinators and plant reproduction. Agri-environmental policies should therefore aim to halt and reverse the current trend of increasing field sizes and to reduce the amount of crop types with particularly intensive management. © 2018 The Author(s).

  2. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function of the tension parameter using the Bayes method and experimental data. Three parameters of the voice production mechanical model are considered uncertain: the tension parameter, the neutral glottal area, and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changes in the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function of the tension parameter is taken to be uniform, and the probability density functions of the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated, and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons: first, it allows updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; second, the likelihood function, which in general is predefined using a known pdf, is here constructed in a new and different manner, using the considered system itself.
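    A grid-based toy version of the Bayesian update can make the mechanics concrete. Everything below is a hypothetical stand-in (the linear `f0_model`, noise level `sigma`, and the measurements), not the paper's stochastic voice model:

```python
import math

# Toy Bayes update: uniform prior on the tension parameter q, a model
# mapping q to fundamental frequency f0(q), and Gaussian measurement noise.
def f0_model(q):
    return 100.0 + 60.0 * q            # hypothetical q -> f0 map (Hz)

def posterior(q_grid, measurements, sigma=5.0):
    def lik(q):
        return math.prod(
            math.exp(-0.5 * ((m - f0_model(q)) / sigma) ** 2)
            for m in measurements
        )
    w = [lik(q) for q in q_grid]       # uniform prior: posterior ∝ likelihood
    s = sum(w)
    return [x / s for x in w]

grid = [i / 100 for i in range(101)]   # q in [0, 1]
post = posterior(grid, [130.0, 128.5, 131.2])   # measured f0 values (Hz)
q_map = grid[post.index(max(post))]
print(q_map)
```

In the paper the likelihood is instead built by running the mechanical model itself (Monte Carlo realizations of f0), which is the novelty the last sentence of the abstract points to.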

  3. Colored noise and a stochastic fractional model for correlated inputs and adaptation in neuronal firing.

    PubMed

    Pirozzi, Enrica

    2018-04-01

    High variability in the neuronal response to stimulation and the adaptation phenomenon cannot be explained by the standard stochastic leaky integrate-and-fire model. The main reason is that the uncorrelated inputs involved in the model are not realistic: there exists some form of dependency between the inputs, which can be interpreted as memory effects. In order to include these physiological features in the standard model, we reconsider it with time-dependent coefficients and correlated inputs. Because the resulting model is mathematically intractable, we perform simulations for a wide investigation of its output. A Gauss-Markov process is constructed to approximate its non-Markovian dynamics. The first-passage-time probability density of such a process can be numerically evaluated, and it can be used to fit the histograms of simulated firing times. Some estimates of the moments of firing times are also provided. The effect of the correlation time of the inputs on firing densities and on firing rates is shown. An exponential probability density of the first firing time is estimated for low values of input current and high values of correlation time. For comparison, a simulation-based investigation is also carried out for a fractional stochastic model that preserves the memory of the time evolution of the neuronal membrane potential. In this case, the memory parameter that affects the firing activity is the fractional derivative order. In both models an adaptation level of spike frequency is attained, though along different modalities. Comparisons and discussion of the obtained results are provided.
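    A minimal simulation of the correlated-input idea: a leaky integrate-and-fire neuron driven by an Ornstein-Uhlenbeck (colored-noise) process with correlation time tau_c. All parameter values below are assumed for illustration, not taken from the paper:

```python
import math
import random

# LIF neuron with colored (OU) input noise; Euler-Maruyama integration.
def simulate_lif(t_max=2.0, dt=1e-4, tau_m=0.02, tau_c=0.01,
                 mu=1.2, sigma=0.5, v_th=1.0, seed=1):
    random.seed(seed)
    v, eta, t = 0.0, 0.0, 0.0
    spikes = []
    while t < t_max:
        # OU colored noise: d_eta = -(eta/tau_c) dt + sigma*sqrt(2/tau_c) dW
        eta += (-eta / tau_c) * dt \
               + sigma * math.sqrt(2.0 * dt / tau_c) * random.gauss(0.0, 1.0)
        # membrane: dv = (-v + mu + eta) / tau_m dt
        v += (-v + mu + eta) / tau_m * dt
        if v >= v_th:
            spikes.append(t)           # spike and reset
            v = 0.0
        t += dt
    return spikes

spikes = simulate_lif()
print(len(spikes))
```

Histogramming the interspike intervals of many such runs gives the first-passage-time density that the Gauss-Markov approximation in the paper is fitted against; increasing tau_c stretches the tail toward the exponential regime the abstract reports.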

  4. Modulation Based on Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
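    The underlying observation is easy to reproduce: uniform-in-time samples of a sinusoid, sorted into amplitude bins, approximate the arcsine-shaped PDF 1/(π·sqrt(1−x²)), densest near the extremes. A sketch (bin count and sample count are arbitrary choices):

```python
import math

# Histogram of one full cycle of a sinusoid, binned by amplitude.
def sine_histogram(n_samples=10000, n_bins=10):
    counts = [0] * n_bins
    for i in range(n_samples):
        s = math.sin(2 * math.pi * i / n_samples)      # one full cycle
        b = min(int((s + 1.0) / 2.0 * n_bins), n_bins - 1)
        counts[b] += 1
    return counts

h = sine_histogram()
print(h)   # edge bins (near +/-1) collect the most samples
```

Modulating the carrier reshapes this histogram, and the receiver recovers the digital symbol by classifying the observed PDF rather than individual sample values.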

  5. A partial differential equation for pseudocontact shift.

    PubMed

    Charnock, G T P; Kuprov, Ilya

    2014-10-07

    It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.

  6. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    PubMed

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  7. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
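    The hot-spot effect on multiple hits follows from Poisson counting: if hits per cell nucleus have mean m, then P(multiple hits) = 1 − e^(−m)(1 + m), and a local deposition enhancement factor simply scales m. A back-of-envelope sketch (the mean-hit values are hypothetical, chosen only to illustrate the contrast):

```python
import math

# Poisson model: probability of two or more hits given mean hit number m.
def p_multiple_hits(mean_hits):
    return 1.0 - math.exp(-mean_hits) * (1.0 + mean_hits)

uniform_m = 0.01              # hypothetical mean hits, uniform deposition
hot_spot_m = 0.01 * 100       # same exposure with 100x local enhancement
print(p_multiple_hits(uniform_m))   # negligible: single hits dominate
print(p_multiple_hits(hot_spot_m))  # substantial: multiple hits common
```

This is why the dose-response stays linear under the uniform-deposition assumption but becomes non-linear once realistic deposition hot spots are included.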

  8. Intermittent burst of a super rogue wave in the breathing multi-soliton regime of an anomalous fiber ring cavity.

    PubMed

    Lee, Seungjong; Park, Kyoungyoon; Kim, Hyuntai; Vazquez-Zuniga, Luis Alonso; Kim, Jinseob; Jeong, Yoonchan

    2018-04-30

    We report the intermittent burst of a super rogue wave in the multi-soliton (MS) regime of an anomalous-dispersion fiber ring cavity. We exploit the spatio-temporal measurement technique to log and capture the shot-to-shot wave dynamics of various pulse events in the cavity, and obtain the corresponding intensity probability density function, which eventually unveils the inherent nature of the extreme events encompassed therein. In the breathing MS regime, a specific MS regime with heavy soliton population, the natural probability of pulse interaction among solitons and dispersive waves increases exponentially owing to the extraordinarily high soliton population density. The combination of the probabilistically initiated soliton interactions and the dispersive waves subsequently accompanying them triggers an avalanche of extreme events with even higher intensities, culminating in a burst of a super rogue wave nearly ten times stronger than the average solitons observed in the cavity. Without any cavity modification or control, the process naturally and intermittently recurs on a time scale of the order of ten seconds.

  9. Adjustments for the display of quantized ion channel dwell times in histograms with logarithmic bins.

    PubMed

    Stark, J A; Hladky, S B

    2000-02-01

    Dwell-time histograms are often plotted as part of patch-clamp investigations of ion channel currents. The advantages of plotting these histograms with a logarithmic time axis were demonstrated in several earlier studies (J. Physiol. (Lond.) 378:141-174; Pflügers Arch. 410:530-553; Sigworth and Sine, Biophys. J. 52:1047-1054). Sigworth and Sine argued that the interpretation of such histograms is simplified if the counts are presented in a manner similar to that of a probability density function. However, when ion channel records are recorded as a discrete time series, the dwell times are quantized. As a result, the mapping of dwell times to logarithmically spaced bins is highly irregular; bins may be empty, and significant irregularities may extend beyond the duration of 100 samples. Using simple approximations based on the nature of the binning process and the transformation rules for probability density functions, we develop adjustments for the display of the counts to compensate for this effect. Tests with simulated data suggest that this procedure provides a faithful representation of the data.
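    The irregularity is easy to demonstrate: dwell times come in integer multiples of the sample period, so mapping them onto logarithmically spaced bins leaves some bins unreachable. A sketch (bin widths chosen for illustration):

```python
import math

# Map quantized dwell times (integer sample counts >= 1) onto log-spaced
# bins; with 10 bins per decade, several bins can receive no value at all.
def log_bin_counts(dwells_in_samples, bins_per_decade=10, max_decades=2):
    n_bins = bins_per_decade * max_decades
    counts = [0] * n_bins
    for d in dwells_in_samples:
        b = int(math.log10(d) * bins_per_decade)
        counts[min(b, n_bins - 1)] += 1
    return counts

counts = log_bin_counts(list(range(1, 101)))  # one event per quantized value
empty = sum(1 for c in counts if c == 0)
print(empty)   # 3: bins that no quantized dwell time can ever reach
```

In the first decade, bins covering the intervals that fall strictly between consecutive integers (here bins 1, 2, and 5) stay empty however much data is collected, which is exactly the display artifact the proposed adjustments compensate for.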

  10. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

    It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that considers the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability, and pedestrian generation probability, the system flow shows an "increasing-saturating-decreasing" trend as vehicle density increases. When the vehicle braking probability is low, an emergency brake is easily triggered, producing great fluctuation of the saturated flow; the saturated flow decreases slightly with increasing pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting pedestrians' hesitation when deciding whether to back up; the maximum flow is sensitive to the pedestrian generation probability and decreases rapidly as that probability increases, becoming approximately zero when it exceeds 0.5. The simulations prove that the influence of frequent crossing behavior on vehicle flow is immense: as the pedestrian generation probability increases, the vehicle flow decreases and rapidly enters a seriously congested state.
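    The NaSch update rules that the proposed model extends fit in a short sketch (parameter values are illustrative; the paper's pedestrian-conflict rules are not reproduced here):

```python
import random

# One parallel NaSch update: accelerate, brake to the gap ahead, randomize
# with braking probability p_brake, then move (periodic road).
def nasch_step(pos, vel, road_len, v_max=5, p_brake=0.3):
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % road_len
        v = min(vel[i] + 1, v_max)       # acceleration
        v = min(v, gap)                  # braking to the available gap
        if v > 0 and random.random() < p_brake:
            v -= 1                       # random slowdown
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len
    return new_pos, new_vel

random.seed(2)
pos, vel = [0, 10, 20, 30], [0, 0, 0, 0]
for _ in range(5):
    pos, vel = nasch_step(pos, vel, road_len=40)
print(pos, vel)
```

Scanning density while measuring mean velocity reproduces the "increasing-saturating-decreasing" fundamental diagram the abstract describes; the crosswalk model adds pedestrian-triggered braking on top of these rules.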

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, Paul B.

    Paralleling our recent computationally intensive (quasi-Monte Carlo) work for the case N=4 (e-print quant-ph/0308037), we undertake the task for N=6 of computing to high numerical accuracy the formulas of Sommers and Zyczkowski (e-print quant-ph/0304041) for the (N^2-1)-dimensional volume and (N^2-2)-dimensional hyperarea of the (separable and nonseparable) NxN density matrices, based on the Bures (minimal monotone) metric, and also their analogous formulas (e-print quant-ph/0302197) for the (nonmonotone) flat Hilbert-Schmidt metric. With the same seven 10^9 well-distributed ('low-discrepancy') sample points, we estimate the unknown volumes and hyperareas based on five additional (monotone) metrics of interest, including the Kubo-Mori and Wigner-Yanase. Further, we estimate all of these seven volume and seven hyperarea (unknown) quantities when restricted to the separable density matrices. The ratios of separable volumes (hyperareas) to separable plus nonseparable volumes (hyperareas) yield estimates of the separability probabilities of generically rank-6 (rank-5) density matrices. The (rank-6) separability probabilities obtained based on the 35-dimensional volumes appear to be, independently of the metric (each of the seven inducing Haar measure) employed, twice as large as those (rank-5 ones) based on the 34-dimensional hyperareas. (An additional estimate, 33.9982, of the ratio of the rank-6 Hilbert-Schmidt separability probability to the rank-4 one is quite clearly close to integral too.) The doubling relationship also appears to hold for the N=4 case for the Hilbert-Schmidt metric, but not the others. We fit simple exact formulas to our estimates of the Hilbert-Schmidt separable volumes and hyperareas in both the N=4 and N=6 cases.
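    The Monte Carlo idea can be shown for the simplest case, N=4 (two qubits), where the partial-transpose (PPT) criterion exactly characterizes separability; this sketch samples Hilbert-Schmidt-uniform density matrices via the Ginibre ensemble and estimates the separability probability as the fraction passing PPT. It is an illustration of the ratio-of-volumes idea only, not the paper's quasi-Monte Carlo N=6 computation.

```python
import numpy as np

# Hilbert-Schmidt-uniform random density matrix via the Ginibre ensemble.
def random_density_matrix(n, rng):
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

# PPT test: partial transpose over the second qubit of a 4x4 state must
# have no negative eigenvalues for the state to be separable (exact at 2x2).
def is_ppt(rho):
    r = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.linalg.eigvalsh(r).min() >= -1e-12

rng = np.random.default_rng(0)
n_trials = 2000
sep = sum(is_ppt(random_density_matrix(4, rng)) for _ in range(n_trials))
print(sep / n_trials)   # typically ~0.24 (conjectured exact value 8/33)
```

For N=6 (the qubit-qutrit case studied in the paper) PPT is still exact, but the high-accuracy volume and hyperarea estimates require the low-discrepancy sampling and multiple metrics described above.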

  12. A summary of transition probabilities for atomic absorption lines formed in low-density clouds

    NASA Technical Reports Server (NTRS)

    Morton, D. C.; Smith, W. H.

    1973-01-01

    A table of wavelengths, statistical weights, and excitation energies is given for 944 atomic spectral lines in 221 multiplets whose lower energy levels lie below 0.275 eV. Oscillator strengths were adopted for 635 lines in 155 multiplets from the available experimental and theoretical determinations. Radiation damping constants also were derived for most of these lines. This table contains the lines most likely to be observed in absorption in interstellar clouds, circumstellar shells, and the clouds in the direction of quasars where neither the particle density nor the radiation density is high enough to populate the higher levels. All ions of all elements from hydrogen to zinc are included which have resonance lines longward of 912 A, although a number of weaker lines of neutrals and first ions have been omitted.

  13. Correlated vortex pinning in Si-nanoparticle doped MgB2

    NASA Astrophysics Data System (ADS)

    Kušević, I.; Babić, E.; Husnjak, O.; Soltanian, S.; Wang, X. L.; Dou, S. X.

    2004-12-01

    The magnetoresistivity and critical current density of well characterized Si-nanoparticle doped and undoped Cu-sheathed MgB2 tapes have been measured at temperatures T≥28 K in magnetic fields B≤0.9 T. The irreversibility line Birr(T) for the doped tape shows a stepwise variation with a kink around 0.3 T. Such Birr(T) variation is typical for high-temperature superconductors with columnar defects (a kink occurs near the matching field Bϕ) and is very different from the smooth Birr(T) variation in undoped MgB2 samples. Microstructure studies of the nanoparticle doped MgB2 samples show uniformly dispersed nanoprecipitates, which probably act as correlated disorder. The observed difference between the field variations of the critical current density and pinning force density of the doped and undoped tapes supports the above findings.

  14. High fracture probability predicts fractures in a 4-year follow-up in women from the RAC-OST-POL study.

    PubMed

    Pluskiewicz, W; Adamczyk, P; Czekajło, A; Grzeszczak, W; Drozdzowska, B

    2015-12-01

    In 770 postmenopausal women, the fracture incidence during a 4-year follow-up was analyzed in relation to the fracture probability (FRAX risk assessment tool) and risk (Garvan risk calculator) predicted at baseline. Incident fractures occurred in 62 subjects with a higher prevalence in high-risk subgroups. Prior fracture, rheumatoid arthritis, femoral neck T-score and falls increased independent of fracture incidence. The aim of the study was to analyze the incidence of fractures during a 4-year follow-up in relation to the baseline fracture probability and risk. Enrolled in the study were 770 postmenopausal women with a mean age of 65.7 ± 7.3 years. Bone mineral density (BMD) at the proximal femur, clinical data, and fracture probability using the FRAX tool and risk using the Garvan calculator were determined. Each subject was asked yearly by phone call about the incidence of fracture during the follow-up period. Of the 770 women, 62 had a fracture during follow-up, and 46 had a major fracture. At baseline, BMD was significantly lower, and fracture probability and fracture risk were significantly higher in women who had a fracture. Among women with a major fracture, the percentage with a high baseline fracture probability (>10 %) was significantly higher than among those without a fracture (p < 0.01). Fracture incidence during follow-up was significantly higher among women with a high baseline fracture probability (12.7 % vs. 5.2 %) and a high fracture risk (9.2 vs. 5.3 %) so that the "fracture-free survival" curves were significantly different (p < 0.05). The number of clinical risk factors noted at baseline was significantly associated with fracture incidence (chi-squared = 20.82, p < 0.01). Prior fracture, rheumatoid arthritis, and femoral neck T-score were identified as significant risk factors for major fractures (for any fractures, the influence of falls was also significant). 
During follow-up, fracture incidence was predicted by the baseline fracture probability (FRAX risk assessment tool) and risk (Garvan risk calculator). The number of clinical risk factors, as well as prior fracture, rheumatoid arthritis, femoral neck T-score, and falls, was independently associated with an increased incidence of fractures.

  15. Density dependent interactions between VA mycorrhizal fungi and even-aged seedlings of two perennial Fabaceae species.

    PubMed

    Allsopp, N; Stock, W D

    1992-08-01

    The interaction of density and mycorrhizal effects on the growth, mineral nutrition and size distribution of seedlings of two perennial members of the Fabaceae was investigated in pot culture. Seedlings of Otholobium hirtum and Aspalathus linearis were grown at densities of 1, 4, 8 and 16 plants per 13-cm pot with or without vesicular-arbuscular (VA) mycorrhizal inoculum for 120 days. Plant mass, relative growth rates, height and leaf number all decreased with increasing plant density. This was ascribed to the decreasing availability of phosphorus per plant as density increased. O. hirtum was highly dependent on mycorrhizas for P uptake, but both mycorrhizal and non-mycorrhizal A. linearis seedlings were able to extract soil P with equal ease. Plant size distribution as measured by the coefficient of variation (CV) of shoot mass was greater at higher densities. CVs of mycorrhizal O. hirtum plants were higher than those of non-mycorrhizal plants. CVs of the facultatively mycorrhizal A. linearis were similar for both mycorrhizal and non-mycorrhizal plants. Higher CVs are attributed to resource preemption by larger individuals. Individuals in populations with high CVs will probably survive stress that would result in the extinction of populations with low CVs. Mass of mycorrhizal plants of both species decreased more rapidly with increasing density than did non-mycorrhizal plant mass. It is concluded that the cost of being mycorrhizal increases as plant density increases, while the benefit decreases. The results suggest that mycorrhizas will influence density-dependent population processes of facultative and obligate mycorrhizal species.
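
    The size-inequality measure used in the abstract is the coefficient of variation of shoot mass. As a minimal sketch (hypothetical values, not the study's data):

```python
import math

def coefficient_of_variation(masses):
    """Sample coefficient of variation (sd / mean) of shoot masses.
    Higher values indicate greater size inequality within a population."""
    n = len(masses)
    mean = sum(masses) / n
    sd = math.sqrt(sum((m - mean) ** 2 for m in masses) / (n - 1))
    return sd / mean
```

    For shoot masses of 1, 2, and 3 g the CV is 0.5; a population dominated by a few large individuals would score higher.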

  16. The role of demographic compensation theory in incidental take assessments for endangered species

    USGS Publications Warehouse

    McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts

    2011-01-01

    Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions as long as take is the result of unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not reduced appreciably. Demographic compensation requires some density-dependent limits on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rates. We note, however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental allowances because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. With many endangered species there is probably little known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take and implementing follow-up monitoring to assess species response may be valuable in increasing knowledge and improving future decision making.

  17. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Given the Gaussian assumption for the probability-density functions, it appears hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to, e.g., the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration lies between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is distributed equally over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. A problem with this method, however, is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
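
    The proposed representation can be sketched for a one-dimensional state with a non-negativity constraint. The code below is my own illustration (not the author's implementation): the Gaussian mass on the non-allowed region is lumped into a delta at the boundary, so that e.g. exactly-zero sea-ice concentration gets nonzero probability.

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of a Gaussian N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def truncated_gaussian_with_delta(mu, sigma, lower=0.0):
    """Represent a Gaussian state pdf constrained to x >= lower as a
    truncated Gaussian plus a delta distribution at the truncation point.

    Returns (delta_mass, mean): the probability lumped at `lower`, and the
    mean of the combined (truncated-Gaussian-plus-delta) distribution.
    """
    # Mass the unconstrained Gaussian puts on the non-allowed region x < lower;
    # instead of redistributing it over the allowed values, lump it at `lower`.
    delta_mass = norm_cdf(lower, mu, sigma)
    # Mean of the Gaussian restricted to x >= lower (truncated-normal formula).
    alpha = (lower - mu) / sigma
    phi = math.exp(-0.5 * alpha ** 2) / math.sqrt(2.0 * math.pi)
    trunc_mean = mu + sigma * phi / (1.0 - delta_mass)
    mean = delta_mass * lower + (1.0 - delta_mass) * trunc_mean
    return delta_mass, mean
```

    For a forecast of sea-ice concentration N(0.1, 0.2^2), about 31% of the mass sits below zero; here that mass becomes P(x = 0) ≈ 0.31 rather than the zero probability implied by equal redistribution.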

  18. Effects of shifts in the rate of repetitive stimulation on sustained attention

    NASA Technical Reports Server (NTRS)

    Krulewitz, J. E.; Warm, J. S.; Wohl, T. H.

    1975-01-01

    The effects of shifts in the rate of presentation of repetitive neutral events (background event rate) were studied in a visual vigilance task. Four groups of subjects experienced either a high (21 events/min) or a low (6 events/min) event rate for 20 min and then experienced either the same or the alternate event rate for an additional 40 min. The temporal occurrence of critical target signals was identical for all groups, irrespective of event rate. The density of critical signals was 12 signals/20 min. By the end of the session, shifts in event rate were associated with changes in performance which resembled contrast effects found in other experimental situations in which shift paradigms were used. Relative to constant event rate control conditions, a shift from a low to a high event rate depressed the probability of signal detections, while a shift in the opposite direction enhanced the probability of signal detections.

  19. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  20. The magma ocean as an impediment to lunar plate tectonics

    NASA Technical Reports Server (NTRS)

    Warren, Paul H.

    1993-01-01

    The primary impediment to plate tectonics on the moon was probably the great thickness of its crust and particularly its high crust/lithosphere thickness ratio. This in turn can be attributed to the preponderance of low-density feldspar over all other Al-compatible phases in the lunar interior. During the magma ocean epoch, the moon's crust/lithosphere thickness ratio was at the maximum theoretical value, approximately 1, and it remained high for a long time afterwards. A few large regions of thin crust were produced by basin-scale cratering approximately contemporaneous with the demise of the magma ocean. However, these regions probably also tend to have uncommonly thin lithosphere, since they were directly heated and indirectly enriched in K, Th, and U by the same cratering process. Thus, plate tectonics on the moon in the form of systematic lithosphere subduction was impeded by the magma ocean.

  1. Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps

    DOE Data Explorer

    Adam Brandt

    2015-11-15

    This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.

  2. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form c·x^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  3. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
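
    The linearity result follows from beta-binomial conjugacy: with a Beta(a, b) prior on the rate and binomially distributed jump counts, the posterior mean is an affine function of the observed count. A minimal sketch (notation mine, not the authors'):

```python
def mmse_rate_estimate(a, b, n_trials, n_jumps):
    """MMSE (posterior-mean) estimate of a jump rate p with a Beta(a, b)
    prior, after observing n_jumps jumps in n_trials opportunities.

    With a binomial likelihood the posterior is
    Beta(a + n_jumps, b + n_trials - n_jumps), so the estimate
    (a + n_jumps) / (a + b + n_trials) is linear in the observed count.
    """
    return (a + n_jumps) / (a + b + n_trials)
```

    With a uniform Beta(1, 1) prior and 3 jumps in 10 trials the estimate is 4/12 ≈ 0.333, pulled between the prior mean 0.5 and the sample fraction 0.3.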

  4. PFOS induced lipid metabolism disturbances in BALB/c mice through inhibition of low density lipoproteins excretion

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Wang, Yu; Liang, Yong; Li, Jia; Liu, Yuchen; Zhang, Jie; Zhang, Aiqian; Fu, Jianjie; Jiang, Guibin

    2014-04-01

    Male BALB/c mice fed either a regular or a high-fat diet were exposed to 0, 5, or 20 mg/kg perfluorooctane sulfonate (PFOS) for 14 days. Increased body weight, serum glucose, cholesterol, and lipoprotein levels were observed in mice given a high-fat diet. However, all PFOS-treated mice showed reduced levels of serum lipids and lipoproteins. Decreased liver glycogen content was also observed, accompanied by reduced serum glucose levels. Histological and ultrastructural examination detected more lipid droplets accumulated in hepatocytes after PFOS exposure. Moreover, the transcriptional activity of lipid metabolism related genes suggests that PFOS toxicity is probably unrelated to PPARα transcription. The present study demonstrates a lipid disturbance caused by PFOS and thus points to its role in inhibiting the secretion and normal function of low density lipoproteins.

  5. Materials for high-temperature thermoelectric conversion

    NASA Technical Reports Server (NTRS)

    Feigelson, R. S.; Elwell, D.

    1983-01-01

    High-boron materials with high efficiency for thermoelectric power generation, capable of prolonged operation at temperatures over 1200 C, are discussed. Background theoretical studies indicated that the low carrier mobility of materials with beta-boron and related structures is probably associated with a high density of traps. Experimental work was mainly concerned with silicon borides in view of promising data from European laboratories. A systematic study using structure determination and lattice constant measurements failed to confirm the existence of an SiBn phase. Only SiB6 and a solid solution of silicon in beta boron with a maximum solid solubility of 5.5-6 at % at 1650 C were found.

  6. Permeability structure and its influence on microbial activity at off-Shimokita basin, Japan

    NASA Astrophysics Data System (ADS)

    Tanikawa, W.; Yamada, Y.; Sanada, Y.; Kubo, Y.; Inagaki, F.

    2016-12-01

    The microbial population size and the limits of microbial life are probably controlled by chemical, physical, and geological conditions, such as temperature, pore-water chemistry, pH, and water activity; however, the key parameters affecting growth in deep subseafloor sediments remain unclarified (Hinrichs and Inagaki 2012). IODP Expedition 337 was conducted near a continental margin basin off Shimokita Peninsula, Japan, to investigate microbial activity in deep marine coalbed sediments down to 2500 mbsf. Inagaki et al. (2015) discovered that microbial abundance decreased markedly with depth (the lowest cell density of <1 cell/cm3 was recorded below 2000 mbsf), and that the coalbed layers had relatively higher cell densities. In this study, permeability was measured on core samples from IODP Expedition 337 and Expedition CK06-06 of the D/V Chikyu shakedown cruise. Permeability was measured at in-situ effective pressure conditions and calculated by the steady-state flow method, keeping the differential pore pressure between 0.1 and 0.8 MPa. Our results show that the permeability of the core samples decreases with depth, from 10^-16 m^2 at the seafloor to 10^-20 m^2 at the bottom of the hole. However, permeability is highly scattered within the coalbed unit (1900 to 2000 mbsf). Permeabilities for sandstone and coal are higher than those for siltstone and shale; the scatter of permeabilities within the same unit is therefore due to the high variation in lithology. The highest permeability was observed in coal samples, probably due to the formation of micro-cracks (cleats). Permeability estimated from NMR logging using empirical parameters is around two orders of magnitude higher than the permeability of core samples, even though the relative vertical variation of permeability is quite similar between core and logging data. Higher cell densities are observed in the relatively permeable formations. On the other hand, the correlation between cell density, water activity, and porosity is not clear. On the assumption that the pressure gradient is constant with depth, the flow rate is proportional to the permeability of the sediments. The flow rate probably restricts the availability of energy and nutrients for microorganisms; therefore, permeability might have influenced microbial activity in the coalbed basin.
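
    The steady-state flow method reduces to Darcy's law, k = Q·μ·L / (A·ΔP). A minimal sketch with illustrative values (not measurements from the expedition):

```python
def permeability_darcy(flow_rate, viscosity, length, area, dp):
    """Steady-state (Darcy) permeability in m^2 from a constant-head
    core-flooding measurement: k = Q * mu * L / (A * dP).

    flow_rate : volumetric flow rate Q (m^3/s)
    viscosity : fluid dynamic viscosity mu (Pa*s)
    length    : core length L (m)
    area      : core cross-sectional area A (m^2)
    dp        : differential pore pressure dP (Pa)
    """
    return flow_rate * viscosity * length / (area * dp)

# Illustrative: 1e-10 m^3/s of water (mu = 1e-3 Pa*s) through a 2-cm core
# of 10 cm^2 cross-section under 0.5 MPa differential pressure.
k = permeability_darcy(1e-10, 1e-3, 0.02, 1e-3, 0.5e6)  # -> 4e-18 m^2
```

    The result, 4e-18 m^2, falls inside the 10^-16 to 10^-20 m^2 range reported for the cores.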

  7. Anurans in a Subarctic Tundra Landscape Near Cape Churchill, Manitoba

    USGS Publications Warehouse

    Reiter, M.E.; Boal, C.W.; Andersen, D.E.

    2008-01-01

    Distribution, abundance, and habitat relationships of anurans inhabiting subarctic regions are poorly understood, and anuran monitoring protocols developed for temperate regions may not be applicable across large roadless areas of northern landscapes. In addition, arctic and subarctic regions of North America are predicted to experience changes in climate and, in some areas, are experiencing habitat alteration due to high rates of herbivory by breeding and migrating waterfowl. To better understand subarctic anuran abundance, distribution, and habitat associations, we conducted anuran calling surveys in the Cape Churchill region of Wapusk National Park, Manitoba, Canada, in 2004 and 2005. We conducted surveys along ~1-km transects distributed across three landscape types (coastal tundra, interior sedge meadow-tundra, and boreal forest-tundra interface) to estimate densities and probabilities of detection of Boreal Chorus Frogs (Pseudacris maculata) and Wood Frogs (Lithobates sylvaticus). We detected a Wood Frog or Boreal Chorus Frog on 22 (87%) of 26 transects surveyed, but probability of detection varied between years and species and among landscape types. Estimated densities of both species increased from the coastal zone inland toward the boreal forest edge. Our results suggest anurans occur across all three landscape types in our study area, but that species-specific spatial patterns exist in their abundances. Considerations for both spatial and temporal variation in abundance and detection probability need to be incorporated into surveys and monitoring programs for subarctic anurans.

  8. Optimization of Systems with Uncertainty: Initial Developments for Performance, Robustness and Reliability Based Designs

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of the probability density functions and the moments, optimization problems that pursue performance, robustness and reliability based designs are studied. By specifying the desired outputs in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of the design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.

  9. Probability Density Functions of the Solar Wind Driver of the Magnetosphere-Ionosphere System

    NASA Astrophysics Data System (ADS)

    Horton, W.; Mays, M. L.

    2007-12-01

    The solar-wind driven magnetosphere-ionosphere system is a complex dynamical system in that it exhibits (1) sensitivity to initial conditions; (2) multiple space-time scales; (3) bifurcation sequences with hysteresis in transitions between attractors; and (4) noncompositionality. This system is modeled by WINDMI--a network of eight coupled ordinary differential equations which describe the transfer of power from the solar wind through the geomagnetic tail, the ionosphere, and the ring current. The model captures both storm activity from the plasma ring current energy, which yields a model Dst index, and substorm activity from the region 1 field-aligned current, yielding model AL and AU indices. The input to the model is the solar wind driving voltage calculated from ACE solar wind parameter data, which has a regular coherent component and a broad-band turbulent component. Cross-correlation functions of the input-output time series are computed, and the conditional probability density function for the occurrence of substorms given earlier IMF conditions is derived. The model shows a high probability of substorms for solar activity that contains a coherent, rotating IMF with magnetic cloud features. For a theoretical model of the imprint of solar convection on the solar wind we have used the Lorenz attractor (Horton et al., PoP, 1999, doi:10.1063/1.873683) as a solar wind driver. The work is supported by NSF grant ATM-0638480.

  10. Stochastic population dynamics of a montane ground-dwelling squirrel.

    PubMed

    Hostetler, Jeffrey A; Kneip, Eva; Van Vuren, Dirk H; Oli, Madan K

    2012-01-01

    Understanding the causes and consequences of population fluctuations is a central goal of ecology. We used demographic data from a long-term (1990-2008) study and matrix population models to investigate factors and processes influencing the dynamics and persistence of a golden-mantled ground squirrel (Callospermophilus lateralis) population, inhabiting a dynamic subalpine habitat in Colorado, USA. The overall deterministic population growth rate λ was 0.94±SE 0.05 but it varied widely over time, ranging from 0.45±0.09 in 2006 to 1.50±0.12 in 2003, and was below replacement (λ<1) for 9 out of 18 years. The stochastic population growth rate λ(s) was 0.92, suggesting a declining population; however, the 95% CI on λ(s) included 1.0 (0.52-1.60). Stochastic elasticity analysis showed that survival of adult females, followed by survival of juvenile females and litter size, were potentially the most influential vital rates; analysis of life table response experiments revealed that the same three life history variables made the largest contributions to year-to-year changes in λ. Population viability analysis revealed that, when the influences of density dependence and immigration were not considered, the population had a high (close to 1.0 in 50 years) probability of extinction. However, probability of extinction declined to as low as zero when density dependence and immigration were considered. Destabilizing effects of stochastic forces were counteracted by regulating effects of density dependence and rescue effects of immigration, which allowed our study population to bounce back from low densities and prevented extinction. These results suggest that dynamics and persistence of our study population are determined synergistically by density dependence, stochastic forces, and immigration.
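
    A stochastic growth rate of this kind can be estimated by iterating randomly drawn yearly projection matrices and averaging the log of the one-step growth. The sketch below uses made-up two-stage (juvenile/adult) matrices, not the study's vital rates:

```python
import math
import random

def stochastic_growth_rate(matrices, n_years=20000, seed=1):
    """Estimate the stochastic growth rate lambda_s of a two-stage
    population by iterating projection matrices drawn at random
    (i.i.d. environments) and averaging log one-step growth."""
    rng = random.Random(seed)
    n = [1.0, 1.0]                      # juveniles, adults
    log_sum = 0.0
    for _ in range(n_years):
        A = rng.choice(matrices)        # this year's environment
        new = [A[0][0] * n[0] + A[0][1] * n[1],
               A[1][0] * n[0] + A[1][1] * n[1]]
        total = new[0] + new[1]
        log_sum += math.log(total / (n[0] + n[1]))
        n = [x / total for x in new]    # renormalize to avoid overflow
    return math.exp(log_sum / n_years)

# Illustrative "good year" and "bad year" matrices (rows: juvenile, adult).
A_good = [[0.0, 2.0], [0.5, 0.8]]
A_bad = [[0.0, 0.8], [0.2, 0.4]]
lambda_s = stochastic_growth_rate([A_good, A_bad])
```

    As in the study, lambda_s lies below the growth rate of the best year; environmental variance alone can push it below replacement even when the average year looks viable.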

  11. Stochastic Population Dynamics of a Montane Ground-Dwelling Squirrel

    PubMed Central

    Hostetler, Jeffrey A.; Kneip, Eva; Van Vuren, Dirk H.; Oli, Madan K.

    2012-01-01

    Understanding the causes and consequences of population fluctuations is a central goal of ecology. We used demographic data from a long-term (1990–2008) study and matrix population models to investigate factors and processes influencing the dynamics and persistence of a golden-mantled ground squirrel (Callospermophilus lateralis) population, inhabiting a dynamic subalpine habitat in Colorado, USA. The overall deterministic population growth rate λ was 0.94±SE 0.05 but it varied widely over time, ranging from 0.45±0.09 in 2006 to 1.50±0.12 in 2003, and was below replacement (λ<1) for 9 out of 18 years. The stochastic population growth rate λs was 0.92, suggesting a declining population; however, the 95% CI on λs included 1.0 (0.52–1.60). Stochastic elasticity analysis showed that survival of adult females, followed by survival of juvenile females and litter size, were potentially the most influential vital rates; analysis of life table response experiments revealed that the same three life history variables made the largest contributions to year-to-year changes in λ. Population viability analysis revealed that, when the influences of density dependence and immigration were not considered, the population had a high (close to 1.0 in 50 years) probability of extinction. However, probability of extinction declined to as low as zero when density dependence and immigration were considered. Destabilizing effects of stochastic forces were counteracted by regulating effects of density dependence and rescue effects of immigration, which allowed our study population to bounce back from low densities and prevented extinction. These results suggest that dynamics and persistence of our study population are determined synergistically by density dependence, stochastic forces, and immigration. PMID:22479616

  12. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  13. A matrix-based approach to solving the inverse Frobenius-Perron problem using sequences of density functions of stochastically perturbed dynamical systems.

    PubMed

    Nie, Xiaokai; Coca, Daniel

    2018-01-01

    The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.

  14. Simulated effect of pig-population density on epidemic size and choice of control strategy for classical swine fever epidemics in The Netherlands.

    PubMed

    Mangen, M-J J; Nielen, M; Burrell, A M

    2002-12-18

    We examined the importance of pig-population density in the area of an outbreak of classical swine fever (CSF) for the spread of the infection and the choice of control measures. A spatial, stochastic, dynamic epidemiological simulation model linked to a sector-level market-and-trade model for The Netherlands was used. Outbreaks in sparsely and densely populated areas were compared under four different control strategies and with two alternative trade assumptions. The obligatory control strategy required by current EU legislation was predicted to be enough to eradicate an epidemic starting in an area with a sparse pig population. By contrast, additional control measures would be necessary if the outbreak began in an area with high pig density. The economic consequences of using preventive slaughter rather than emergency vaccination as an additional control measure depended strongly on the reactions of trading partners. Reducing the number of animal movements significantly reduced the size and length of epidemics in areas with high pig density. The phenomenon of carrier piglets was included in the model with realistic probabilities of infection by this route, but it made a negligible contribution to the spread of the infection.

  15. Plasma Properties of Microwave Produced Plasma in a Toroidal Device

    NASA Astrophysics Data System (ADS)

    Singh, Ajay; Edwards, W. F.; Held, Eric

    2011-10-01

    We have modified a small tokamak, STOR-1M, on loan from the University of Saskatchewan, to operate as a low-temperature (~5 eV) toroidal plasma machine with externally induced toroidal magnetic fields ranging from zero to ~50 G. The plasma is produced using microwave discharges at relatively high pressures. Microwaves are produced by a kitchen microwave-oven magnetron operating at 2.45 GHz in continuous mode, resulting in pulses ~0.5 s in duration. Initial measurements of plasma formation in this device with and without applied magnetic fields are presented. Plasma density and temperature profiles have been measured using Langmuir probes, and the magnetic field profile inside the plasma has been obtained using Hall probes. When the discharge is created with no applied toroidal magnetic field, the plasma does not fill the entire torus due to the high background pressure. However, when a toroidal magnetic field is applied, the plasma flows along the applied field, filling the torus. Increasing the applied magnetic field seems to aid plasma formation: the peak density increases and the density gradient becomes steeper. Above a threshold magnetic field, the plasma develops low-frequency density oscillations, probably due to excitation of flute modes in the plasma.

  16. The risks and returns of stock investment in a financial market

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Mei, Dong-Cheng

    2013-03-01

    The risks and returns of stock investment are discussed via numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Through analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time matching minimal risk of stock investment, maximal average stock price returns and strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), but the opposite holds for weak EDS; (ii) increasing the initial position reduces the risk of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with those in other studies, and good agreement is found.
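
    A mean escape time of this kind can be estimated by Monte Carlo. The sketch below uses plain geometric Brownian motion as a constant-volatility stand-in for the paper's delayed Heston model; all parameter values are illustrative. Longer escape times from a price band correspond to more stable, less risky price behaviour.

```python
import math
import random

def mean_escape_time(s0=1.0, mu=0.05, sigma=0.3, low=0.7, high=1.5,
                     dt=1 / 252, n_paths=200, max_steps=20000, seed=7):
    """Monte Carlo mean escape time (in years) of geometric Brownian
    motion started at s0 from the price band (low, high)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s, steps = s0, 0
        while low < s < high and steps < max_steps:
            z = rng.gauss(0.0, 1.0)     # daily Gaussian shock
            s *= math.exp((mu - 0.5 * sigma * sigma) * dt
                          + sigma * math.sqrt(dt) * z)
            steps += 1
        total += steps * dt
    return total / n_paths
```

    A wider band yields a longer mean escape time, the qualitative behaviour the paper uses to quantify investment risk.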

  17. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of one-step replica symmetry breaking. The two random variables (exchange interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams, and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low-temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
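
    Drawing correlated pairs (Jij, hi) from such a joint Gaussian is a one-line Cholesky construction. A minimal sketch with illustrative (hypothetical) standard deviations:

```python
import math
import random

def sample_couplings(n_pairs, rho, sd_j=1.0, sd_h=1.0, seed=0):
    """Draw (J, h) pairs from a zero-mean bivariate Gaussian with
    correlation coefficient rho, as assumed for the exchange
    interactions and random fields. Standard deviations illustrative."""
    if not -1.0 <= rho <= 1.0:
        raise ValueError("rho must lie in [-1, 1]")
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        j = sd_j * z1
        # Cholesky factor of the 2x2 correlation matrix mixes z1 into h.
        h = sd_h * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
        pairs.append((j, h))
    return pairs
```

    The empirical correlation of a large sample converges to the requested ρ, positive or negative.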

  18. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
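    The simultaneous-diagonalization step for the two-class case can be carried out with a generalized symmetric eigendecomposition. A minimal sketch with synthetic covariance matrices (illustrative stand-ins, not real spectral data):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Two synthetic class covariance matrices (illustrative stand-ins for the
# covariances of the two classes' spectral vectors).
A = rng.standard_normal((4, 4))
Sigma1 = A @ A.T + 4 * np.eye(4)
B = rng.standard_normal((4, 4))
Sigma2 = B @ B.T + 4 * np.eye(4)

# Solving the generalized eigenproblem Sigma1 w = lam * Sigma2 w gives a
# matrix W with W.T @ Sigma2 @ W = I and W.T @ Sigma1 @ W = diag(lam):
# one linear transform diagonalizes both covariances simultaneously.
lam, W = eigh(Sigma1, Sigma2)
print(np.diag(W.T @ Sigma1 @ W), np.diag(W.T @ Sigma2 @ W))
```

    In the transformed coordinates both classes have diagonal covariances, so the per-band computations decouple, which is the computational reduction the abstract refers to.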

  19. Characterization of non-Gaussian atmospheric turbulence for prediction of aircraft response statistics

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1977-01-01

    Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance-modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time-fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationarity assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria were shown to depend on the relative time scales of the fluctuations in the variance and the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationarity assumption were developed.

  20. An H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighting mean value is given as an integral function of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  1. An H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighting mean value is given as an integral function of the square-root PDF along the space direction, which yields a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
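    The square-root B-spline parameterization of the output PDF, and the weighting mean value used to build the residual, can be sketched as follows. The knots and weights below are illustrative placeholders, not the paper's model; in the paper the weights are state variables of a dynamic model.

```python
import numpy as np
from scipy.interpolate import BSpline

# Square-root PDF modeled as a clamped cubic B-spline expansion:
# sqrt(pdf(y)) = sum_i w_i B_i(y). Weights are illustrative placeholders.
degree = 3
knots = np.concatenate((np.zeros(3), np.linspace(0.0, 1.0, 7), np.ones(3)))
weights = np.array([0.1, 0.4, 0.9, 1.2, 1.0, 0.7, 0.5, 0.3, 0.1])  # 9 basis functions
sqrt_pdf = BSpline(knots, weights, degree)

y = np.linspace(0.0, 1.0, 2001)
dy = y[1] - y[0]

# Normalize so that pdf = sqrt_pdf**2 integrates to one over the output range.
norm = np.sum(sqrt_pdf(y) ** 2) * dy
pdf = sqrt_pdf(y) ** 2 / norm
total = np.sum(pdf) * dy

# "Weighting mean value": integrating along the space direction leaves a
# scalar (a function of time once the weights evolve) usable as a residual.
mean_value = np.sum(y * pdf) * dy
print(total, mean_value)
```

    Squaring the spline guarantees a non-negative PDF for any weight vector, which is the reason for modeling the square root rather than the PDF itself.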

  2. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct-classification percentages of the nonparametric sign test, Wilcoxon's signed rank test, and the K-class classifier with the performance of the Bayes classifier. The performance is determined for data that have Gaussian, Laplacian, and Rayleigh probability density functions. The correct-classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight, and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data are Gaussian even when they are not. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without requiring the probability density function of the data to be determined. It should be noted that the data in this experiment were always unimodal.

  3. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.

  4. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
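    For a deterministic harmonic component in Gaussian background noise, an ensemble-averaged spectral estimate follows a scaled noncentral chi-square law, so confidence limits can be read off the distribution directly. The record count and signal-to-noise ratio below are assumptions for illustration, not values from the study.

```python
import numpy as np
from scipy.stats import ncx2

# A spectral estimate averaged over K independent records of a tone in
# Gaussian noise satisfies (2K / sigma2) * S_avg ~ ncx2(df=2K, nc=2K*snr),
# where snr is the per-record signal-to-noise ratio in the analysis bin.
# K and snr are illustrative assumptions.
K, snr = 16, 4.0
df, nc = 2 * K, 2 * K * snr

# 95% confidence limits for the normalized averaged spectral estimate.
lo, hi = ncx2.ppf([0.025, 0.975], df, nc)
print(lo, hi, ncx2.mean(df, nc))  # mean = df + nc = 2K(1 + snr)
```

    Numerical evaluation of `ncx2.ppf` replaces the series summations that made such confidence limits laborious to compute at the time of the original report.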

  5. The effect of hubs and shortcuts on fixation time in evolutionary graphs

    NASA Astrophysics Data System (ADS)

    Askari, Marziyeh; Moradi Miraghaei, Zeinab; Aghababaei Samani, Keivan

    2017-07-01

    How can a new species (like a gene, an idea, or a strategy) take over the whole of a population? This process, which is called fixation, is considerably affected by the structure of the population. There are two key quantities for describing the fixation process, namely fixation probability and fixation time. Fixation probability has been studied extensively in recent years, but fixation time has not yet been fully explored, because uncovering the relationship between fixation time and network structure is quite challenging. In this paper we investigate this relationship for a number of well-known complex networks. We show that the existence of a few high-degree nodes (hubs) in the network results in a longer fixation time, while the existence of a few shortcuts decreases the fixation time. Furthermore, we investigate the effect of network parameters, such as connection probability, on fixation time. We show that by increasing the density of edges, fixation time decreases for all types of studied networks. Finally, we survey the effect of the rewiring probability of a Watts-Strogatz network on fixation time.
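    A minimal Moran birth-death simulation conveys how fixation probability and fixation time are estimated on a structured population. The graphs (a star as an extreme hub case, a cycle with no hubs), the fitness value, and the trial count are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def moran_fixation(adj, r=1.5, trials=100):
    """Birth-death Moran process on a graph given as adjacency lists.
    Returns the mean absorption time (in elementary steps) and the fixation
    probability of a single mutant of relative fitness r at a random node."""
    n = len(adj)
    times, fixed = [], 0
    for _ in range(trials):
        mutant = np.zeros(n, dtype=bool)
        mutant[rng.integers(n)] = True
        steps = 0
        while 0 < mutant.sum() < n:
            fitness = np.where(mutant, r, 1.0)
            birth = rng.choice(n, p=fitness / fitness.sum())  # fitness-proportional birth
            death = rng.choice(adj[birth])                    # offspring replaces a neighbor
            mutant[death] = mutant[birth]
            steps += 1
        times.append(steps)
        fixed += int(mutant.all())
    return np.mean(times), fixed / trials

n = 20
star = [list(range(1, n))] + [[0]] * (n - 1)            # one hub linked to all leaves
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]  # no hubs, for comparison

t_star, p_star = moran_fixation(star)
t_cycle, p_cycle = moran_fixation(cycle)
print(t_star, p_star, t_cycle, p_cycle)
```

    With enough trials, comparing the mean absorption times across graph families is exactly the kind of hub-versus-shortcut comparison the abstract describes.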

  6. Bayesian anomaly detection in monitoring data applying relevance vector machine

    NASA Astrophysics Data System (ADS)

    Saito, Tomoo

    2011-04-01

    A method for automatically classifying monitoring data into two categories, normal and anomalous, is developed in order to remove anomalous data from the enormous amount of monitoring data. The relevance vector machine (RVM) is applied to a probabilistic discriminative model with basis functions and weight parameters whose posterior PDF (probability density function), conditional on the learning data set, is given by Bayes' theorem. The proposed framework is applied to actual monitoring data sets containing some anomalous data collected at two buildings in Tokyo, Japan. The trained models discriminate anomalous data from normal data very clearly, giving high probabilities of being normal to normal data and low probabilities of being normal to anomalous data.

  7. An investigation of student understanding of classical ideas related to quantum mechanics: Potential energy diagrams and spatial probability density

    NASA Astrophysics Data System (ADS)

    Stephanik, Brian Michael

    This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.

  8. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
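    A direct numerical counterpart to such a model is an Euler-Maruyama simulation of a leaky integrate-and-fire neuron with a spike-triggered adaptation current, from which the stationary membrane-potential density is histogrammed. All parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama simulation of a leaky integrate-and-fire neuron with a
# spike-triggered adaptation current a(t); parameters are illustrative.
mu, D = 2.0, 0.05            # suprathreshold drive (tonic firing) and noise intensity
tau_a, delta_a = 10.0, 0.1   # adaptation time constant and per-spike increment
v_th, v_reset = 1.0, 0.0
dt, n_steps = 1e-3, 500_000

noise = np.sqrt(2 * D * dt) * rng.standard_normal(n_steps)
v, a = 0.0, 0.0
v_trace = np.empty(n_steps)
spikes = 0
for i in range(n_steps):
    v += (mu - v - a) * dt + noise[i]
    a += -a / tau_a * dt
    if v >= v_th:            # threshold crossing: reset and adapt
        v = v_reset
        a += delta_a
        spikes += 1
    v_trace[i] = v

# Stationary marginal probability density of the membrane potential.
pv, edges = np.histogram(v_trace, bins=100, density=True)
print(spikes, v_trace.mean())
```

    The histogram is the marginal density the analytical theory predicts; the slow adaptation variable (tau_a much longer than the interspike interval) is what reshapes it away from the adaptation-free case.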

  9. Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.

    PubMed

    Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor

    2015-11-01

    Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of the nitrate concentration exceeding regulatory thresholds. However, these maps are obtained with only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) finally, there are modelling advantages, because the variograms and cross-variograms are those of real variables, which do not have the restrictions of indicator variograms and cross-variograms. The methodology was applied to the Vega de Granada aquifer in southern Spain and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
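    The compositional treatment can be illustrated with the centered log-ratio (clr) transform, one standard choice for mapping a discrete probability vector to unconstrained real space and back; the paper's cokriging machinery is not reproduced here and the numbers are illustrative.

```python
import numpy as np

# Centered log-ratio transform of a discrete probability density. Estimating
# in clr space and back-transforming automatically yields values in (0, 1)
# that sum to one, avoiding the out-of-range and order-relation problems of
# indicator cokriging.
def clr(p):
    g = np.exp(np.mean(np.log(p)))  # geometric mean of the composition
    return np.log(p / g)

def clr_inv(z):
    e = np.exp(z)
    return e / e.sum()

# Discrete PDF of nitrate-concentration classes at a site (illustrative).
p = np.array([0.6, 0.25, 0.1, 0.05])
z = clr(p)

# Any real-valued perturbation of z (e.g. a kriging estimate at an unsampled
# location) maps back to a valid probability vector.
z_est = z + np.array([0.2, -0.1, 0.05, -0.15])
p_est = clr_inv(z_est)
print(p_est, p_est.sum())
```

    In practice an isometric log-ratio (ilr) basis is often preferred for variogram modelling, but the closure property shown here is the same.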

  10. Estimating The Probability Of Achieving Shortleaf Pine Regeneration At Variable Specified Levels

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2002-01-01

    A model was developed that can be used to estimate the probability of achieving regeneration at a variety of specified stem density levels. The model was fitted to shortleaf pine (Pinus echinata Mill.) regeneration data, and can be used to estimate the probability of achieving desired levels of regeneration between 300 and 700 stems per acre 9-10...

  11. Properties of Traffic Risk Coefficient

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu

    2009-10-01

    We use the model that incorporates the traffic interruption probability (Physica A 387 (2008) 6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability reduces the traffic risk coefficient and that the reduction is related to the density, which shows that this model can improve traffic safety.

  12. Probability mass first flush evaluation for combined sewer discharges.

    PubMed

    Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong

    2010-01-01

    The Korean government has put a lot of effort into constructing sanitation facilities for controlling non-point-source pollution, of which the first flush phenomenon is a prime example. However, several serious problems have arisen to date in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass given both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm-runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the storms gauged during the last two years, using a probability density function of rainfall volumes to test representativeness, found all gauged storms to be valid representative precipitation events. Both the observed MFFn and the probability MFFn in BE-1 indicated similarly large first-flush magnitudes, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.
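    The mass-first-flush computation itself reduces to comparing normalized cumulative pollutant mass against normalized cumulative runoff volume. A sketch with a synthetic hydrograph and pollutograph (not monitored data):

```python
import numpy as np

# Mass first flush ratio MFF_n: the fraction of total pollutant mass delivered
# in the first n% of runoff volume, from paired flow and concentration series.
# The series below are synthetic illustrations, not monitored storms.
t = np.linspace(0.0, 2.0, 121)                 # time, hours
q = np.exp(-((t - 0.5) ** 2) / 0.18)           # runoff hydrograph
c = 40.0 * np.exp(-2.0 * t) + 5.0              # pollutograph: high early concentration

vol = np.cumsum(q)
vol /= vol[-1]                                 # normalized cumulative volume
mass = np.cumsum(q * c)
mass /= mass[-1]                               # normalized cumulative mass

frac_mass_at_20 = np.interp(0.20, vol, mass)   # mass fraction at 20% of volume
mff20 = frac_mass_at_20 / 0.20                 # MFF ratio at n = 20
print(frac_mass_at_20, mff20)
```

    An MFF20 above 1 indicates a first flush; the "40% of mass in the first 20% of runoff" result quoted above corresponds to MFF20 of about 2.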

  13. Comparison of sticking probabilities of metal atoms in magnetron sputtering deposition of CuZnSnS films

    NASA Astrophysics Data System (ADS)

    Sasaki, K.; Kikuchi, S.

    2014-10-01

    In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2l0(2 - α)/(vα), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
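    Inverting Chantry's expression tau0 = 2*l0*(2 - alpha)/(v*alpha) gives the sticking probability in closed form, alpha = 4*l0/(v*tau0 + 2*l0). A small sketch; the chamber geometry, temperature, and lifetime values are assumptions, not the experiment's numbers.

```python
import numpy as np

# Inverting Chantry's relation tau0 = 2*l0*(2 - alpha)/(v*alpha):
# alpha = 4*l0 / (v*tau0 + 2*l0). All input values are illustrative.
def sticking_probability(tau0, l0, v):
    """tau0: zero-pressure extrapolated lifetime (s); l0: chamber volume-to-
    surface-area ratio (m); v: mean thermal speed of the atom (m/s)."""
    return 4.0 * l0 / (v * tau0 + 2.0 * l0)

def mean_speed(T, m_amu):
    """Mean thermal speed v = sqrt(8*kB*T/(pi*m))."""
    kB, amu = 1.380649e-23, 1.66053906660e-27
    return np.sqrt(8.0 * kB * T / (np.pi * m_amu * amu))

v_Cu = mean_speed(300.0, 63.55)  # roughly 316 m/s at room temperature
alpha = sticking_probability(tau0=2.0e-4, l0=0.02, v=v_Cu)
print(v_Cu, alpha)
```

    Since alpha depends on tau0 only through the product v*tau0, equal extrapolated lifetimes scaled by 1/v (as observed here) imply roughly equal sticking probabilities.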

  14. Depletion of mesospheric sodium during extended period of pulsating aurora

    NASA Astrophysics Data System (ADS)

    Takahashi, T.; Hosokawa, K.; Nozawa, S.; Tsuda, T. T.; Ogawa, Y.; Tsutsumi, M.; Hiraki, Y.; Fujiwara, H.; Kawahara, T. D.; Saito, N.; Wada, S.; Kawabata, T.; Hall, C.

    2017-01-01

    We quantitatively evaluated the Na density depletion due to charge transfer reactions between Na atoms and molecular ions produced by high-energy electron precipitation during a pulsating aurora (PsA). An extended period of PsA was captured by an all-sky camera at the European Incoherent Scatter (EISCAT) radar Tromsø site (69.6°N, 19.2°E) during a 2 h interval from 00:00 to 02:00 UT on 25 January 2012. During this period, using the EISCAT very high frequency (VHF) radar, we detected three intervals of intense ionization below 100 km that were probably caused by precipitation of high-energy electrons during the PsA. In these intervals, the sodium lidar at Tromsø observed characteristic depletion of the Na density at altitudes between 97 and 100 km. These Na density depletions lasted for 8 min and represented 5-8% of the background Na layer. To examine the cause of this depletion, we modeled the depletion rate based on charge transfer reactions with NO+ and O2+, while changing the R value, defined as the ratio of NO+ to O2+ densities, from 1 to 10. The correlation coefficients between observed and modeled Na density depletion calculated with the typical value R = 3 for time intervals T1, T2, and T3 were 0.66, 0.80, and 0.67, respectively. The observed Na density depletion rates fall within the range of modeled depletion rates calculated with R from 1 to 10. This suggests that the charge transfer reactions triggered by the auroral impact ionization at low altitudes are the predominant process responsible for Na density depletion during PsA intervals.

  15. Statistical analysis of dislocations and dislocation boundaries from EBSD data.

    PubMed

    Moussa, C; Bernacki, M; Besnard, R; Bozzolo, N

    2017-08-01

    Electron BackScatter Diffraction (EBSD) is often used for semi-quantitative analysis of dislocations in metals. In general, disorientation is used to assess Geometrically Necessary Dislocation (GND) densities. In the present paper, we demonstrate that the use of disorientation can lead to inaccurate results. For example, using the disorientation leads to different GND densities in recrystallized grains, which cannot be physically justified. The use of disorientation gradients allows accounting for measurement noise and leads to more accurate results. The disorientation gradient is then used to analyze dislocation boundaries, following the same principle previously applied to TEM data. In previous papers, dislocation boundaries were classified as Geometrically Necessary Boundaries (GNBs) and Incidental Dislocation Boundaries (IDBs). It has been demonstrated in the past, through transmission electron microscopy data, that the probability density distribution of the disorientation of IDBs and GNBs can be described with a linear combination of two Rayleigh functions. Such a function can also describe the probability density of the disorientation gradient obtained through EBSD data, as reported in this paper. This opens the route for determining IDB and GNB probability density distribution functions separately from EBSD data, with increased statistical relevance as compared to TEM data. The method is applied to deformed tantalum, where grains exhibit dislocation boundaries, as observed using electron channeling contrast imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
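    Fitting a linear combination of two Rayleigh densities to a disorientation histogram can be sketched as follows, using synthetic data with known mixture parameters rather than measured EBSD values.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def rayleigh_pdf(x, sigma):
    return (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))

def two_rayleigh(x, w, s1, s2):
    """Linear combination of two Rayleigh densities (IDB and GNB populations)."""
    return w * rayleigh_pdf(x, s1) + (1 - w) * rayleigh_pdf(x, s2)

# Synthetic disorientation data from a known mixture (sigmas in degrees,
# purely illustrative, not measured EBSD values).
n = 20_000
data = np.concatenate([rng.rayleigh(0.5, int(0.7 * n)),   # IDB-like population
                       rng.rayleigh(2.0, int(0.3 * n))])  # GNB-like population

hist, edges = np.histogram(data, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
(w, s1, s2), _ = curve_fit(two_rayleigh, centers, hist, p0=[0.5, 0.3, 1.5],
                           bounds=([0.0, 0.01, 0.01], [1.0, 10.0, 10.0]))
print(w, s1, s2)
```

    The fitted weight w and the two sigma parameters separate the IDB-like and GNB-like populations, which is the decomposition the abstract proposes to carry out on EBSD disorientation gradients.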

  16. Tip-growing cells of the moss Ceratodon purpureus Are gravitropic in high-density media

    NASA Technical Reports Server (NTRS)

    Schwuchow, Jochen Michael; Kern, Volker Dieter; Sack, Fred David

    2002-01-01

    Gravity sensing in plants and algae is hypothesized to rely upon either the mass of the entire cell or that of sedimenting organelles (statoliths). Protonemata of the moss Ceratodon purpureus show upward gravitropism and contain amyloplasts that sediment. If moss sensing were whole-cell based, then media denser than the cell should prevent gravitropism or reverse its direction. Cells that were inverted or reoriented to the horizontal displayed distinct negative gravitropism in solutions of iodixanol with densities of 1.052 to 1.320 g cm(-3) as well as in bovine serum albumin solutions with densities of 1.037 to 1.184 g cm(-3). Studies using tagged molecules of different sizes and calculations of diffusion times suggest that both types of media penetrate through the apical cell wall. Estimates of the density of the apical cell range from 1.004 to 1.085 g cm(-3). Because protonemata grow upward when the cells have a density that is lower than that of the surrounding medium, gravitropic sensing probably utilizes an intracellular mass in moss protonemata. These data provide additional support for the idea that sedimenting amyloplasts function as statoliths in gravitropism.

  17. Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions.

    PubMed

    Lei, Youming; Zheng, Fan

    2016-12-01

    Stochastic chaos induced by diffusion processes with identical spectral density but different probability density functions (PDFs) is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of the diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results; the results obtained by numerical simulations are in accordance with them. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.

  18. Self-Supervised Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2003-01-01

    Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics that is, dynamics in the traditional sense, signifying Newton s equations of motion. The evolution of the probability densities represents mental dynamics or self-image. 
The interaction between the physical and mental aspects of a monad is then implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.

  19. Irradiation stratigraphy and depositional history of the Apollo 16 double drive tube 60009/10

    NASA Technical Reports Server (NTRS)

    Blanford, G. E.; Blanford, J.; Hawkins, J. A.

    1979-01-01

    We report track density frequency distributions, the fraction of high-density grains, and minimum track densities for 63 1-mm-wide locations in the Apollo 16 double drive tube 60009/10. From these data we conclude that there are seven irradiation strata in the core. Only one buried reworking zone, extending from 50-52 cm, was found; it was exposed near the surface for 4.5-9 × 10^6 yr, with a most probable exposure period of 6 × 10^6 yr. Conclusive evidence that this zone is a reworking zone is lacking, in which case the material below 52 cm was most probably exposed in situ for 4.5 × 10^6 yr and developed a reworking zone of less than about 0.5 cm. The present surface of the core has a reworking zone of 12-13 cm, which was exposed for 1.3 × 10^7 to 2.5 × 10^8 yr. The best estimate for this exposure period remains the value of less than about 1.25 × 10^8 yr determined by Bogard and Hirsch (1976). The other strata in the core appear to contain mixtures of various soil types and are not related to in situ depositional events.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Versino, Daniele; Bronkhorst, Curt Allan

    The computational formulation of a micro-mechanical material model for the dynamic failure of ductile metals is presented in this paper. The statistical nature of porosity initiation is accounted for by introducing an arbitrary probability density function which describes the pore nucleation pressures. Each micropore within the representative volume element is modeled as a thick spherical shell made of plastically incompressible material. The treatment of porosity by a distribution of thick-walled spheres also allows for the inclusion of micro-inertia effects under conditions of shock and dynamic loading. The second-order ordinary differential equation governing the microscopic porosity evolution is solved with a robust implicit procedure. A new Chebyshev collocation method is employed to approximate the porosity distribution, and remapping is used to optimize memory usage. The adaptive approximation of the porosity distribution leads to a reduction of computational time and memory usage of up to two orders of magnitude. Moreover, the proposed model affords consistent performance: changing the nucleation pressure probability density function and/or the applied strain rate does not reduce the accuracy or computational efficiency of the material model. The numerical performance of the model and algorithms presented is tested against three problems for high-density tantalum: single void, one-dimensional uniaxial strain, and two-dimensional plate impact. The results using the integration and algorithmic advances suggest a significant improvement in computational efficiency and accuracy over previous treatments for dynamic loading conditions.

  1. A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves

    NASA Astrophysics Data System (ADS)

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA(2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. To provide proper guidance for ship manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.

  2. Are there optimal densities for prairie birds?

    USGS Publications Warehouse

    Skagen, S.K.; Adams, A.A.Y.

    2010-01-01

    The major forces of food and predation shape fitness-enhancing decisions of birds at all stages of their life cycles. During the breeding season, birds can minimize nest loss due to predation by selecting sites with a lower probability of predation. To understand the environmental and social aspects and consequences of breeding-site selection in prairie birds, we explored variation in nest-survival patterns of the Lark Bunting (Calamospiza melanocorys) in the shortgrass prairie region of North America. Over four breeding seasons, we documented the survival of 405 nests, conducted 60 surveys to estimate bird densities, and measured several vegetative features to describe habitat structure in 24 randomly selected study plots. Nest survival varied with the buntings' density as described by a quadratic polynomial, increasing with density below 1.5 birds ha-1 and decreasing with density between 1.5 and 3 birds ha-1, suggesting that an optimal range of densities favors reproductive success of the Lark Bunting, which nests semi-colonially. Nest survival also increased with increasing vegetation structure of study plots and varied with age of the nest, increasing during early incubation and late in the nestling stage and declining slightly from mid-incubation to the middle of the nestling period. The existence of an optimal range of densities in this semi-colonial species can be elucidated by the "commodity-selection hypothesis" at low densities and density dependence at high densities. © The Cooper Ornithological Society 2010.

  3. The development, distribution and density of the PMCA2 calcium pump in rat cochlear hair cells

    PubMed Central

    Chen, Qingguo; Mahendrasingam, Shanthini; Tickle, Jacqueline A.; Hackney, Carole M.; Furness, David N.; Fettiplace, Robert

    2012-01-01

    Calcium is tightly regulated in cochlear outer hair cells (OHCs). It enters mainly via mechanotransducer (MT) channels and is extruded by the PMCA2 isoform of the plasma membrane calcium ATPase, mutations in which cause hearing loss. To assess how pump expression matches the demands of Ca2+ homeostasis, the distribution of PMCA2 at different cochlear locations during development was quantified using immunofluorescence and post-embedding immunogold labeling. The PMCA2 isoform was confined to stereociliary bundles, first appearing at the base of the cochlea around post-natal day 0 (P0), followed by the middle and then the apex by P3, and was unchanged after P8. The developmental appearance matches maturation of the MT channels in rat OHCs. High-resolution immunogold labeling in adult rats showed PMCA2 was distributed along the membranes of all three rows of OHC stereocilia at similar densities and at about a quarter of the density in inner hair cell (IHC) stereocilia. The difference between OHCs and IHCs is similar to the ratio of their MT channel resting open probabilities. Gold particle counts revealed no difference in PMCA2 density between low- and high-frequency OHC bundles despite larger MT currents in high-frequency OHCs. The PMCA2 density in OHC stereocilia was determined in low- and high-frequency regions from calibration of immunogold particle counts as 2200/μm2, from which an extrusion rate of ~200 ions·s−1 per pump was inferred. The limited ability of PMCA2 to extrude the Ca2+ load through MT channels may constitute a major cause of OHC vulnerability and high-frequency hearing loss. PMID:22672315

  4. Quantum mechanical probability current as electromagnetic 4-current from topological EM fields

    NASA Astrophysics Data System (ADS)

    van der Mark, Martin B.

    2015-09-01

    Starting from a complex 4-potential A = αdβ, we show that the 4-current density in electromagnetism and the probability current density in relativistic quantum mechanics are of identical form. With the Dirac-Clifford algebra Cl1,3 as mathematical basis, the given 4-potential allows topological solutions of the fields, quite similar to Bateman's construction, but with a double field solution that was overlooked previously. A more general null-vector condition is found, and wave functions of charged and neutral particles appear as topological configurations of the electromagnetic fields.

  5. First-passage problems: A probabilistic dynamic analysis for degraded structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1990-01-01

    Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.

  6. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  7. Estimating abundance of mountain lions from unstructured spatial sampling

    USGS Publications Warehouse

    Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.

    2012-01-01

    Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. 
    Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance-only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance × sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.

  8. Probability density function learning by unsupervised neurons.

    PubMed

    Fiori, S

    2001-10-01

    In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for this structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.

  9. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF.

    PubMed

    Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul

    2006-09-01

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF-based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between two PDFs varied from low (78%) to high (94.8%) when the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; phantom lung receiving 10%-20% prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to tissue surrounding the tumor can theoretically be reduced by PDF-based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF-based planning can be derived.
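    The PDF-reproducibility comparison described above can be illustrated with a toy computation: build histogram-based position PDFs from two traces and score their overlap. The 1-D traces and bin choices below are synthetic stand-ins for the dynamic-MRI trajectories, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D tumor position traces (cm), standing in for the
# long (300 s) and short (5 s) dynamic-MRI scans in the abstract.
trace_long = rng.normal(0.0, 0.5, size=3000)
trace_short = rng.normal(0.1, 0.5, size=50)

bins = np.linspace(-2.0, 2.0, 41)

def position_pdf(trace, bins):
    """Histogram-based probability density of tumor position."""
    pdf, _ = np.histogram(trace, bins=bins, density=True)
    return pdf

def reproducibility(pdf_a, pdf_b, bins):
    """Overlap coefficient of two binned PDFs (1.0 = identical)."""
    width = np.diff(bins)
    return float(np.sum(np.minimum(pdf_a, pdf_b) * width))

pdf_a = position_pdf(trace_long, bins)
pdf_b = position_pdf(trace_short, bins)
print(f"overlap = {reproducibility(pdf_a, pdf_b, bins):.2f}")
```

    The short trace yields a noisier PDF and hence a lower overlap score, mirroring the 78% vs. 94.8% reproducibility trend reported for 5 s vs. 300 s scans.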

  10. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Ke; Cai Jing; Brookeman, James

    2006-09-15

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF-based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between two PDFs varied from low (78%) to high (94.8%) when the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; phantom lung receiving 10%-20% prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to tissue surrounding the tumor can theoretically be reduced by PDF-based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error with the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF-based planning can be derived.

  11. Intervening O vi Quasar Absorption Systems at Low Redshift: A Significant Baryon Reservoir.

    PubMed

    Tripp; Savage; Jenkins

    2000-05-01

    Far-UV echelle spectroscopy of the radio-quiet QSO H1821+643 (zem=0.297), obtained with the Space Telescope Imaging Spectrograph (STIS) at approximately 7 km s-1 resolution, reveals four definite O vi absorption-line systems and one probable O vi absorber at 0.15

  12. Determination of the density of surface states at the semiconductor-insulator interface in a metal-insulator-semiconductor structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulyamov, G., E-mail: Gulyamov1949@rambler.ru; Sharibaev, N. U.

    2011-02-15

    The temporal dependence of thermal generation of electrons from occupied surface states at the semiconductor-insulator interface in a metal-insulator-semiconductor structure is studied. It is established that, at low temperatures, the derivative of the probability of depopulation of occupied surface states with respect to energy is represented by the Dirac {delta} function. It is shown that the density of states of a finite number of discrete energy levels under high-temperature measurements manifests itself as a continuous spectrum, whereas this spectrum appears discrete at low temperatures. A method for processing the continuous spectrum of the density of surface states is suggested that makes it possible to determine the discrete energy spectrum. The obtained results may be conducive to an increase in resolution of the method of non-stationary spectroscopy of surface states.

  13. Constraints on rapidity-dependent initial conditions from charged-particle pseudorapidity densities and two-particle correlations

    NASA Astrophysics Data System (ADS)

    Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.

    2017-10-01

    We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality and pseudorapidity-dependent measurements of the medium's charged-particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p+Pb and Pb+Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.

  14. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    PubMed Central

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity of two subbands is computed as the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine subbands of different frequencies: the low-frequency subbands are fused using two activity measures based on the regional standard deviation and Shannon entropy, and the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871

  15. Twin density of aragonite in molluscan shells characterized using X-ray diffraction and transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Kogure, Toshihiro; Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Checa, Antonio G.; Sasaki, Takenori; Nagasawa, Hiromichi

    2014-07-01

    {110} twin density in aragonites constituting various microstructures of molluscan shells has been characterized using X-ray diffraction (XRD) and transmission electron microscopy (TEM), to find the factors that determine the density in the shells. Several aragonite crystals of geological origin were also investigated for comparison. The twin density is strongly dependent on the microstructures and species of the shells. The nacreous structure has a very low twin density regardless of the shell classes. On the other hand, the twin density in the crossed-lamellar (CL) structure varies widely among classes or subclasses, which is mainly related to the crystallographic direction of the constituting aragonite fibers. TEM observation suggests two types of twin structures in aragonite crystals with dense {110} twins: rather regulated polysynthetic twins with parallel twin planes, and unregulated polycyclic ones with two or three directions for the twin planes. The former is probably characteristic of the CL structures of specific subclasses of Gastropoda. The latter type is probably related to the crystal boundaries dominated by (hk0) interfaces in the microstructures with preferred orientation of the c-axis, and the twin density is mainly correlated to the crystal size in the microstructures.

  16. Ultraviolet electroluminescence from hybrid inorganic/organic ZnO/GaN/poly(3-hexylthiophene) dual heterojunctions.

    PubMed

    Chen, Yungting; Shih, Hanyu; Wang, Chunhsiung; Hsieh, Chunyi; Chen, Chihwei; Chen, Yangfang; Lin, Taiyuan

    2011-05-09

    Based on hybrid inorganic/organic n-ZnO nanorods/p-GaN thin film/poly(3-hexylthiophene) (P3HT) dual heterojunctions, the light-emitting diode (LED) emits ultraviolet (UV) radiation (370-400 nm) and the whole visible range (400-700 nm) at low injection current density. Under high injection current density, the UV radiation overwhelmingly dominates the room-temperature electroluminescence spectra, increases exponentially with the injection current density, and possesses a narrow full width at half maximum of less than 16 nm. A comparison of the electroluminescence and photoluminescence spectra reveals an enormously enhanced transition probability for the UV luminescence in electroluminescence. The P3HT layer plays an essential role in enabling the UV emission from the p-GaN material because of its hole-conductive characteristic as well as its band alignment with respect to p-GaN. These results may pave a new route for the development of high-brightness LEDs based on hybrid inorganic/organic heterojunctions.

  17. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac, and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
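    The maximum-entropy assignment described above can be sketched for a discrete set of outcomes with a single mean constraint: the maximizing distribution has the exponential form p_i ∝ exp(-λE_i), with λ chosen to match the constraint. The outcome values and target mean below are illustrative, not from the paper.

```python
import numpy as np

# Maximum-entropy probability assignment over discrete outcomes subject
# to a mean constraint: p_i ∝ exp(-lam * E_i).  Values are illustrative.
energies = np.array([0.0, 1.0, 2.0, 3.0])
target_mean = 1.2

def maxent_weights(lam):
    w = np.exp(-lam * energies)
    return w / w.sum()

def mean_energy(lam):
    return float(maxent_weights(lam) @ energies)

# Bisection on lam: mean_energy is monotonically decreasing in lam.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > target_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p = maxent_weights(lam)
```

    With a single energy-like constraint the result is exactly the Boltzmann (Maxwell-Boltzmann) form the abstract refers to; fixed mean occupation numbers lead analogously toward Fermi-Dirac or Bose-Einstein statistics.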

  18. Understanding star formation in molecular clouds. II. Signatures of gravitational collapse of IRDCs

    NASA Astrophysics Data System (ADS)

    Schneider, N.; Csengeri, T.; Klessen, R. S.; Tremblin, P.; Ossenkopf, V.; Peretto, N.; Simon, R.; Bontemps, S.; Federrath, C.

    2015-06-01

    We analyse column density and temperature maps derived from Herschel dust continuum observations of a sample of prominent, massive infrared dark clouds (IRDCs), i.e. G11.11-0.12, G18.82-0.28, G28.37+0.07, and G28.53-0.25. We disentangle the velocity structure of the clouds using 13CO 1→0 and 12CO 3→2 data, showing that these IRDCs are the densest regions in massive giant molecular clouds (GMCs) and not isolated features. The probability distribution functions (PDFs) of column density for all clouds show a power-law distribution over all (high) column densities, regardless of the evolutionary stage of the cloud: G11.11-0.12, G18.82-0.28, and G28.37+0.07 contain (proto)-stars, while G28.53-0.25 shows no signs of star formation. This is in contrast to the purely log-normal PDFs reported for near- and/or mid-IR extinction maps. We only find a log-normal distribution for lower column densities if we construct PDFs from the column density maps of the whole GMC in which the IRDCs are embedded. By comparing the PDF slope and the radial column density profile of three of our clouds, we attribute the power law to the effect of large-scale gravitational collapse and to local free-fall collapse of pre- and protostellar cores for the highest column densities. A significant impact on the cloud properties from radiative feedback is unlikely because the clouds are mostly devoid of star formation. Independent from the PDF analysis, we find infall signatures in the spectral profiles of 12CO for G28.37+0.07 and G11.11-0.12, supporting the scenario of gravitational collapse. Our results are in line with earlier interpretations that see massive IRDCs as the densest regions within GMCs, which may be the progenitors of massive stars or clusters. At least some of the IRDCs are probably the same features as ridges (high column density regions with N > 10^23 cm-2 over small areas), which were defined for nearby IR-bright GMCs. 
Because IRDCs are confined to the densest (gravity-dominated) cloud regions, the PDF constructed from this kind of clipped image does not represent the (turbulence-dominated) low column density regime of the cloud. The column density maps (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/578/A29
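    A minimal version of the N-PDF analysis described above: histogram the logarithmic column density contrast η = ln(N/⟨N⟩) and fit the slope of the high-density tail, where a power law in N appears as a straight line in log p(η). The synthetic column densities (log-normal bulk plus power-law tail) are invented for illustration and are not the Herschel data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic column densities mimicking an N-PDF: log-normal "turbulent"
# bulk plus a power-law "collapse" tail.  All numbers are made up.
bulk = rng.lognormal(mean=np.log(1e21), sigma=0.5, size=20000)
tail = 1e22 * rng.pareto(a=2.0, size=2000) + 1e22
N = np.concatenate([bulk, tail])

# PDF of eta = ln(N/<N>), the standard normalization for N-PDFs.
eta = np.log(N / N.mean())
hist, edges = np.histogram(eta, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit the high-density tail: p(eta) ∝ exp(s*eta) is a line in log p(eta).
mask = (centers > 1.0) & (hist > 0)
slope, _ = np.polyfit(centers[mask], np.log(hist[mask]), 1)
print(f"tail slope s = {slope:.2f}")  # negative: power-law fall-off in N
```

    A shallower (less negative) tail slope corresponds to a stronger concentration of high column density gas, the signature attributed to gravitational collapse in the abstract.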

  19. Design and simulation of stratified probability digital receiver with application to the multipath communication

    NASA Technical Reports Server (NTRS)

    Deal, J. H.

    1975-01-01

    One approach to the problem of simplifying complex nonlinear filtering algorithms is to use stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
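    The discrete-mass idea above can be sketched by replacing a standard normal density with point masses at bin midpoints, each carrying the probability of its bin, and checking that low-order moments survive the approximation. The grid and bin count are arbitrary choices for illustration.

```python
import numpy as np
from math import erf, sqrt

# Stratified (discrete-mass) approximation of a continuous density:
# point masses at bin midpoints, with mass equal to each bin's
# probability under a standard normal.  Illustrative sketch only.
def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

edges = np.linspace(-4.0, 4.0, 17)          # 16 strata over +/- 4 sigma
mids = 0.5 * (edges[:-1] + edges[1:])
masses = np.array([normal_cdf(b) - normal_cdf(a)
                   for a, b in zip(edges[:-1], edges[1:])])
masses /= masses.sum()                      # renormalize away truncated tails

mean = float(mids @ masses)
var = float(((mids - mean) ** 2) @ masses)
print(f"mean ~ {mean:.3f}, var ~ {var:.3f}")  # close to 0 and 1
```

    In a filtering context, expectations over the continuous density are then replaced by finite sums over the mass points, which is what makes the nonlinear filter tractable.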

  20. Shock compression of a recrystallized anorthositic rock from Apollo 15

    NASA Technical Reports Server (NTRS)

    Ahrens, T. J.; Gibbons, R. V.; O'Keefe, J. D.

    1973-01-01

    Hugoniot measurements on 15,418, a recrystallized and brecciated gabbroic anorthosite, yield a value of the Hugoniot elastic limit (HEL) varying from 45 to 70 kbar as the final shock pressure is varied from 70 to 280 kbar. Above the HEL and up to 150 kbar, the pressure-density Hugoniot is closely described by a hydrostatic equation of state constructed from ultrasonic data for single-crystal plagioclase and pyroxene. Above 150 kbar, the Hugoniot states indicate that a series of one or more shock-induced phase changes are occurring in the plagioclase and pyroxene. From Hugoniot data for both the single-crystal minerals and the Frederick diabase, we infer that the shock-induced high-pressure phases in 15,418 probably consist of a high-pressure plagioclase structure with a density of 3.71 g/cu cm and a perovskite-type pyroxene structure with a density of 4.70 g/cu cm.

  1. Study of the enhancement-mode AlGaN/GaN high electron mobility transistor with split floating gates

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Wang, Ning; Jiang, Ling-Li; Zhao, Hai-Yue; Lin, Xin-Peng; Yu, Hong-Yu

    2017-11-01

    In this work, enhancement-mode (E-mode) AlGaN/GaN high electron mobility transistors (HEMTs) with charge-storage-based split floating gates (FGs) are studied. The simulation results reveal that, for a given two-dimensional electron gas density, how the threshold voltage (Vth) varies with the blocking-dielectric thickness depends on the FG charge density. It is found that, when the total FG length and the total isolating spacing are kept unchanged, Vth decreases as the number of FGs increases while the device remains in E-mode operation. It is also reported that, for the FG HEMT, failure of one FG lowers Vth and increases the drain current, and the impact of such a failure can be reduced significantly by increasing the number of FGs.

  2. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (∼90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
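    A toy version of the removal-model estimator described above: assume detection (first song) times follow an exponential distribution with rate λ, and maximize the multinomial likelihood of the counts first detected in the 3-, 2-, and 5-min intervals, conditional on detection within the 10-min count. The counts below are hypothetical, not the Great Smoky Mountains data.

```python
import numpy as np

# Removal model sketch: exponential detection times with rate lam; birds
# first recorded in the 0-3, 3-5, and 5-10 min intervals.  Counts are
# hypothetical.
counts = np.array([60, 15, 15])          # n1, n2, n3
edges = np.array([0.0, 3.0, 5.0, 10.0])  # interval boundaries (min)

def cell_probs(lam):
    """Multinomial cell probabilities conditional on detection by 10 min."""
    surv = np.exp(-lam * edges)
    return (surv[:-1] - surv[1:]) / (1.0 - surv[-1])

def log_lik(lam):
    return float(counts @ np.log(cell_probs(lam)))

# Grid-search MLE for the detection rate, then overall detectability.
grid = np.linspace(0.01, 2.0, 2000)
lam_hat = grid[np.argmax([log_lik(g) for g in grid])]
p_hat = 1.0 - np.exp(-10.0 * lam_hat)    # P(detected within the 10-min count)
print(f"lam = {lam_hat:.3f}/min, P(detect in 10 min) = {p_hat:.2f}")
```

    Frequent singers produce a steep decline in new detections across the intervals (large λ, detectability near 1), while infrequent callers like the Pileated Woodpecker yield a flat profile and a low estimated detection probability.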

  3. Multiple model cardinalized probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Georgescu, Ramona; Willett, Peter

    2011-09-01

    The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.

  4. Brownian Motion with Active Fluctuations

    NASA Astrophysics Data System (ADS)

    Romanczuk, Pawel; Schimansky-Geier, Lutz

    2011-06-01

    We study the effect of different types of fluctuation on the motion of self-propelled particles in two spatial dimensions. We distinguish between passive and active fluctuations. Passive fluctuations (e.g., thermal fluctuations) are independent of the orientation of the particle. In contrast, active ones point parallel or perpendicular to the time-dependent orientation of the particle. We derive analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles, which yields an increased probability of low speeds in the presence of active fluctuations in comparison to the case of purely passive fluctuations. As a consequence, we predict sharply peaked Cartesian velocity probability densities at the origin. Finally, we show that such a behavior may also occur for non-Gaussian active fluctuations, and we briefly discuss correlations of the fluctuating stochastic forces.
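    A minimal Euler-Maruyama sketch of the mechanism described above: the speed relaxes toward a preferred value and receives active noise along the heading, while the heading angle diffuses freely; the Cartesian velocity component is then reconstructed from speed and heading. The dynamics and all parameter values are an illustrative simplification, not the paper's generic model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Active Brownian particle sketch: speed s relaxes to v0 with *active*
# noise along the heading; heading angle phi diffuses freely.
# All parameter values are illustrative.
v0, gamma, D_par, D_phi = 1.0, 1.0, 0.8, 0.5
dt, n_steps = 1e-3, 200000

noise_s = np.sqrt(2 * D_par * dt) * rng.standard_normal(n_steps)
phi = np.cumsum(np.sqrt(2 * D_phi * dt) * rng.standard_normal(n_steps))

s = v0
speeds = np.empty(n_steps)
for i in range(n_steps):
    s = abs(s - gamma * (s - v0) * dt + noise_s[i])  # reflect speed at 0
    speeds[i] = s
vx = speeds * np.cos(phi)

# Per the abstract, active noise shifts weight toward low speeds, which
# sharpens the Cartesian velocity density near the origin.
pdf, edges = np.histogram(vx, bins=101, range=(-3.0, 3.0), density=True)
```

    Rerunning with `D_par = 0` (purely passive speed dynamics) removes the low-speed weight and flattens the central peak of the velocity histogram.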

  5. Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; 
Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; 
Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; 
Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2009-04-17

    We report a measurement of the top-quark mass M_t in the dilepton decay channel tt̄ → bℓ′⁺ν_ℓ′ b̄ℓ⁻ν̄_ℓ. Events are selected with a neural network which has been directly optimized for statistical precision in the top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading-order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb⁻¹ of pp̄ collisions collected with the CDF II detector, yielding a measurement of M_t = 171.2 ± 2.7(stat) ± 2.9(syst) GeV/c².
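The joint-likelihood construction described above can be sketched with a toy stand-in: a Gaussian per-event density replaces the matrix-element × resolution convolution, and the joint probability is maximized over a mass grid. All numbers below (resolution width, grid) are assumptions of this illustration, not the CDF analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
M_TRUE, SIGMA = 171.2, 15.0                 # toy resolution width (assumed)
masses = rng.normal(M_TRUE, SIGMA, 344)     # 344 candidate events, as in the abstract

def per_event_density(m_reco, m_top):
    # stand-in for the convolution of LO matrix elements with resolution functions
    return np.exp(-0.5 * ((m_reco - m_top) / SIGMA) ** 2) / (SIGMA * np.sqrt(2 * np.pi))

grid = np.linspace(150.0, 190.0, 401)
# joint probability = product over events, so sum the per-event log densities
loglike = np.array([np.log(per_event_density(masses, m)).sum() for m in grid])
M_hat = grid[np.argmax(loglike)]            # maximum-likelihood top-quark mass
```

In this toy version the statistical spread of the estimate is set by the assumed width, roughly SIGMA/√344 ≈ 0.8 GeV/c²; the real analysis obtains its precision from the full matrix-element densities.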

  6. Quasar probabilities and redshifts from WISE mid-IR through GALEX UV photometry

    NASA Astrophysics Data System (ADS)

    DiPompeo, M. A.; Bovy, J.; Myers, A. D.; Lang, D.

    2015-09-01

    Extreme deconvolution (XD) of broad-band photometric data can both separate stars from quasars and generate probability density functions for quasar redshifts, while incorporating flux uncertainties and missing data. Mid-infrared photometric colours are now widely used to identify hot dust intrinsic to quasars, and the release of all-sky WISE data has led to a dramatic increase in the number of IR-selected quasars. Using forced photometry on public WISE data at the locations of Sloan Digital Sky Survey (SDSS) point sources, we incorporate this all-sky data into the training of the XDQSOz models originally developed to select quasars from optical photometry. The combination of WISE and SDSS information is far more powerful than SDSS alone, particularly at z > 2. The use of SDSS+WISE photometry is comparable to the use of SDSS+ultraviolet+near-IR data. We release a new public catalogue of 5,537,436 (total; 3,874,639 weighted by probability) potential quasars with probability PQSO > 0.2. The catalogue includes redshift probabilities for all objects. We also release an updated version of the publicly available set of codes to calculate quasar and redshift probabilities for various combinations of data. Finally, we demonstrate that this method of selecting quasars using WISE data is both more complete and efficient than simple WISE colour-cuts, especially at high redshift. Our fits verify that above z ≈ 3 WISE colours become bluer than the standard cuts applied to select quasars. Currently, the analysis is limited to quasars with optical counterparts, and thus cannot be used to find highly obscured quasars that WISE colour-cuts identify in significant numbers.
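The classification step reduces to a posterior probability built from relative number densities and the class-conditional photometric densities that XD fits. A one-dimensional sketch, with toy Gaussian densities standing in for the XD models (all parameters here are assumed):

```python
import numpy as np

def gauss(x, mu, sigma):
    # toy class-conditional colour density (an XD fit would supply this)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p_qso(color, n_qso=0.2, n_star=0.8):
    # posterior quasar probability: relative number densities times class densities
    fq, fs = gauss(color, 1.0, 0.5), gauss(color, 0.0, 0.5)
    return n_qso * fq / (n_qso * fq + n_star * fs)

p = p_qso(1.2)   # an object with a quasar-like toy colour
```

A threshold such as PQSO > 0.2 then selects candidates from this posterior; the real models work in multi-band flux space with per-object uncertainties.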

  7. Impact of Ficoll density gradient centrifugation on major and trace element concentrations in erythrocytes and blood plasma.

    PubMed

    Lu, Ying; Ahmed, Sultan; Harari, Florencia; Vahter, Marie

    2015-01-01

    Ficoll density gradient centrifugation is widely used to separate the cellular components of human blood. We evaluated the suitability of using erythrocytes and blood plasma obtained by Ficoll centrifugation for the assessment of elemental concentrations. We determined 22 elements (from Li to U) in erythrocytes and blood plasma separated by direct or Ficoll density gradient centrifugation, using inductively coupled plasma mass spectrometry. Compared with erythrocytes and blood plasma separated by direct centrifugation, those separated by Ficoll had highly elevated iodine and Ba concentrations, due to contamination from the Ficoll-Paque medium, and about twice as high concentrations of Sr and Mo in erythrocytes. On the other hand, the concentrations of Ca in erythrocytes and plasma were markedly reduced by the Ficoll separation, and to some extent also those of Li, Co, Cu, and U. The reduced concentrations were probably due to EDTA, a chelator present in the Ficoll medium. Arsenic concentrations seemed to be lowered by Ficoll, probably in a species-specific manner. The concentrations of Mg, P, S, K, Fe, Zn, Se, Rb, and Cs were not affected in the erythrocytes, but were decreased in plasma. Concentrations of Mn, Cd, and Pb were not affected in erythrocytes, but in plasma they were affected by EDTA and/or pre-analytical contamination. In sum, Ficoll separation changed the concentrations of Li, Ca, Co, Cu, As, Mo, I, Ba, and U in erythrocytes and blood plasma, of Sr in erythrocytes, and of Mg, P, S, K, Fe, Zn, Se, Rb, and Cs in blood plasma, to an extent that would invalidate evaluation of deficiencies or excess intakes.

  8. Cost-effectiveness of annual versus biennial screening mammography for women with high mammographic breast density.

    PubMed

    Pataky, Reka; Ismail, Zahra; Coldman, Andrew J; Elwood, Mark; Gelmon, Karen; Hedden, Lindsay; Hislop, Greg; Kan, Lisa; McCoy, Bonnie; Olivotto, Ivo A; Peacock, Stuart

    2014-12-01

    The sensitivity of screening mammography is much lower among women who have dense breast tissue than among women who have largely fatty breasts, and women with dense breasts are also at much higher risk of developing the disease. Increasing mammography screening frequency from biennial to annual has been suggested as a policy option to address the elevated risk in this population. The purpose of this study was to assess the cost-effectiveness of annual versus biennial screening mammography among women aged 50-79 with dense breast tissue. A Markov model was constructed based on the screening, diagnostic, and treatment pathways of the population-based screening and cancer care programme in British Columbia, Canada. Model probabilities and screening costs were calculated from screening programme data. Costs for breast cancer treatment were calculated from treatment data, and utility values were obtained from the literature. Incremental cost-effectiveness was expressed as cost per quality-adjusted life year (QALY), and probabilistic sensitivity analysis was conducted. Compared with biennial screening, annual screening generated an additional 0.0014 QALYs (95% CI: -0.0480 to 0.0359) at a cost of $819 (Canadian dollars) per patient (95% CI: 506 to 1185), resulting in an incremental cost-effectiveness ratio of $565,912/QALY. Annual screening had a 37.5% probability of being cost-effective at a willingness-to-pay threshold of $100,000/QALY. There is considerable uncertainty about the incremental cost-effectiveness of annual mammography. Further research on the comparative effectiveness of screening strategies for women with high mammographic breast density is warranted, particularly as digital mammography and density measurement become more widespread, before cost-effectiveness can be reevaluated.
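The incremental cost-effectiveness arithmetic and the cost-effectiveness acceptability calculation can be sketched directly from the reported figures. The normal draws below are an assumption of this sketch, not the study's Markov model; their spreads are back-calculated from the reported 95% confidence intervals (sd ≈ half-width/1.96).

```python
import numpy as np

rng = np.random.default_rng(1)
d_cost, d_qaly = 819.0, 0.0014               # reported incremental cost and QALYs
icer = d_cost / d_qaly                       # ~$585,000/QALY from the rounded deltas
                                             # (the paper's $565,912 uses unrounded values)

# toy probabilistic sensitivity analysis: spreads derived from the reported CIs (assumed normal)
cost_draws = rng.normal(d_cost, 173.0, 10_000)
qaly_draws = rng.normal(d_qaly, 0.021, 10_000)

wtp = 100_000.0                              # willingness-to-pay threshold ($/QALY)
nmb = wtp * qaly_draws - cost_draws          # net monetary benefit per draw
p_cost_effective = (nmb > 0).mean()          # acceptability at this threshold (~0.37)
```

The acceptability probability recovered this way is close to the reported 37.5%, which is just the fraction of PSA draws with positive net monetary benefit at the threshold.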

  9. Probabilistic Modeling of High-Temperature Material Properties of a 5-Harness 0/90 Sylramic Fiber/ CVI-SiC/ MI-SiC Woven Composite

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.; Tong, Michael; Murthy, P. L. N.; Mital, Subodh

    1998-01-01

    An integrated probabilistic approach has been developed to assess composites for high-temperature applications. This approach was used to determine the thermal and mechanical properties, and their probabilistic distributions, of a 5-harness 0/90 Sylramic fiber/CVI-SiC/MI-SiC woven ceramic matrix composite (CMC) at high temperatures. The purpose of developing this approach was to generate quantitative probabilistic information on this CMC to help complete the evaluation of its potential application for the HSCT combustor liner. The approach quantified the influences of uncertainties inherent in constituent properties, called primitive variables, on selected key response variables of the CMC at 2200 °F. The quantitative information is presented in the form of cumulative distribution functions (CDFs), probability density functions (PDFs), and primitive variable sensitivities on response. Results indicate that the scatter in the response variables was reduced by 30-50% when the uncertainties in the most influential primitive variables were reduced by 50%.

  10. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As the parameters of theoretical models are not always predicted by theory, a formal mathematical framework is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The rule underlying this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the form pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters x⃗, given prior information on these parameters and a likelihood that gives the probability density of observing a data set given x⃗. Two major paths can be taken to solve this problem: add approximations and hypotheses to obtain an equation that can be solved numerically (minimization of a cost function, or the generalized least squares method, referred to as GLS), or use Monte Carlo sampling of all prior distributions to estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems: they avoid the approximations of traditional adjustment procedures based on chi-square minimization, and they offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal through the resonance to the continuum region, for all nuclear reaction models at these energies. Algorithms based on Monte Carlo sampling and Markov chains are presented.
The objectives of BMC are to provide a reference calculation for validating the GLS calculations and their approximations, to test the effects of the choice of probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved-resonance, unresolved-resonance, and continuum evaluation, as well as to multigroup cross-section data assimilation, are presented.
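A minimal sketch of the BMC idea, pdf(posterior) ∝ pdf(prior) × likelihood, sampled by a Metropolis Markov chain on a toy one-parameter problem; the data, prior, and proposal width are all assumptions of this illustration, not nuclear-data quantities.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(5.0, 0.5, 20)            # toy "measurements" of one model parameter

def log_prior(x):                          # Gaussian prior on the parameter (assumed)
    return -0.5 * ((x - 4.0) / 2.0) ** 2

def log_like(x):                           # Gaussian likelihood of the data given x
    return -0.5 * np.sum(((data - x) / 0.5) ** 2)

def log_post(x):                           # pdf(posterior) ∝ pdf(prior) × likelihood
    return log_prior(x) + log_like(x)

x, chain = 4.0, []
for _ in range(20_000):                    # Metropolis Markov chain
    prop = x + rng.normal(0.0, 0.2)        # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)
post = np.array(chain[5_000:])             # discard burn-in
```

Unlike a GLS fit, the chain returns the full posterior distribution, so non-Gaussian priors or likelihoods and multiple local minima can be handled without linearization.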

  11. Functional Data Analysis in NTCP Modeling: A New Method to Explore the Radiation Dose-Volume Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benadjaoud, Mohamed Amine, E-mail: mohamedamine.benadjaoud@gustaveroussy.fr; Université Paris sud, Le Kremlin-Bicêtre; Institut Gustave Roussy, Villejuif

    2014-11-01

    Purpose/Objective(s): To describe a novel method to explore radiation dose-volume effects. Functional data analysis is used to investigate the information contained in differential dose-volume histograms. The method is applied to normal tissue complication probability (NTCP) modeling of rectal bleeding (RB) for patients irradiated in the prostatic bed by 3-dimensional conformal radiation therapy. Methods and Materials: Kernel density estimation was used to estimate the individual probability density functions from each of the 141 rectum differential dose-volume histograms. Functional principal component analysis was performed on the estimated probability density functions to explore the variation modes in the dose distribution. The functional principal components were then tested for association with RB using logistic regression adapted to functional covariates (FLR). For comparison, 3 other NTCP models were considered: the Lyman-Kutcher-Burman model, a logistic model based on standard dosimetric parameters (LM), and a logistic model based on multivariate principal component analysis (PCA). Results: The incidence rate of grade ≥2 RB was 14%. V65Gy was the most predictive factor for the LM (P=.058). The best fit for the Lyman-Kutcher-Burman model was obtained with n=0.12, m=0.17, and TD50=72.6 Gy. In PCA and FLR, the components describing the interdependence between the relative volumes exposed at intermediate and high doses were the most correlated with the complication. The FLR parameter function leads to a better understanding of the volume effect by including the treatment specificity in the mechanistic information delivered. For RB grade ≥2, patients of advanced age are at significantly higher risk (odds ratio, 1.123; 95% confidence interval, 1.03-1.22), and the fits of the LM, PCA, and FLR models are significantly improved by including this clinical factor.
Conclusion: Functional data analysis provides an attractive method for flexibly estimating the dose-volume effect for normal tissues in external radiation therapy.
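The pipeline described above (kernel density estimates of differential DVHs, functional PCA on the density curves, then logistic regression on the component scores) can be illustrated on synthetic data. Every number below, and the plain gradient-ascent fit, is an assumption of this sketch, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pat = 60
grid = np.linspace(0.0, 80.0, 161)               # dose grid in Gy (toy)

# toy per-patient dose samples: mean dose shifts with a latent risk factor (assumed)
risk = rng.uniform(-1.0, 1.0, n_pat)
doses = [rng.normal(50.0 + 8.0 * r, 10.0, 200) for r in risk]

def kde(sample, grid, h=3.0):
    # Gaussian kernel density estimate of a differential DVH
    z = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

curves = np.array([kde(d, grid) for d in doses])

# functional PCA = ordinary PCA on the discretized density curves
centered = curves - curves.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T                     # first two FPC scores per patient

# toy binary complication outcome driven by the latent risk
y = (risk + rng.normal(0.0, 0.3, n_pat) > 0.4).astype(float)

# logistic regression on the FPC scores, fitted by plain gradient ascent
X = np.column_stack([np.ones(n_pat), scores])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n_pat
```

In this toy setup the first component simply tracks the mean-dose shift, so its score correlates with the latent risk; the study's functional covariates play the analogous role for rectal bleeding.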

  12. Quantum and classical dynamics of water dissociation on Ni(111): A test of the site-averaging model in dissociative chemisorption of polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bin; Department of Chemical Physics, University of Science and Technology of China, Hefei 230026; Guo, Hua, E-mail: hguo@unm.edu

    Recently, we reported the first highly accurate nine-dimensional global potential energy surface (PES) for water interacting with a rigid Ni(111) surface, built on a large number of density functional theory points [B. Jiang and H. Guo, Phys. Rev. Lett. 114, 166101 (2015)]. Here, we investigate site-specific reaction probabilities on this PES using a quasi-seven-dimensional quantum dynamical model. It is shown that the site-specific reactivity is largely controlled by the topography of the PES rather than by the barrier height alone, underscoring the importance of multidimensional dynamics. In addition, the full-dimensional dissociation probability is estimated by averaging fixed-site reaction probabilities with appropriate weights. To validate this model and gain insight into the dynamics, additional quasi-classical trajectory calculations in both full and reduced dimensions have also been performed, and important dynamical factors such as the steering effect are discussed.
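The site-averaging model amounts to a weighted mean of fixed-site reaction probabilities. A minimal sketch, with assumed probabilities and equal weights for four high-symmetry sites:

```python
# toy site-specific dissociation probabilities at one collision energy (assumed values)
sites = {"top": 0.05, "bridge": 0.02, "fcc": 0.01, "hcp": 0.01}
# weights for each high-symmetry site (equal here; a real model would weight by area)
weights = {"top": 0.25, "bridge": 0.25, "fcc": 0.25, "hcp": 0.25}

# site-averaged estimate of the full-dimensional dissociation probability
p_full = sum(weights[s] * sites[s] for s in sites)
```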

  13. Plasma and Energetic Particle Behaviors During Asymmetric Magnetic Reconnection at the Magnetopause

    NASA Technical Reports Server (NTRS)

    Lee, S. H.; Zhang, H.; Zong, Q.-G.; Otto, A.; Sibeck, D. G.; Wang, Y.; Glassmeier, K.-H.; Daly, P.W.; Reme, H.

    2014-01-01

    The factors controlling asymmetric reconnection and the role of the cold plasma population in the reconnection process are two outstanding questions. We present a case study of multipoint Cluster observations demonstrating that the separatrix and flow boundary angles are greater on the magnetosheath side than on the magnetospheric side of the magnetopause, probably because the density asymmetry at this boundary is stronger than the magnetic field asymmetry. The motion of cold plasmaspheric ions entering the reconnection region differs from that of warmer magnetosheath and magnetospheric ions. In contrast to the warmer ions, which are probably accelerated by reconnection in the diffusion region near the subsolar magnetopause, the colder ions are simply entrained by E×B drifts at high latitudes on the recently reconnected magnetic field lines. This indicates that plasmaspheric ions can sometimes play only a very limited role in asymmetric reconnection, in contrast to previous simulation studies. Three cold ion populations (probably H+, He+, and O+) appear in the energy spectrum, consistent with ion acceleration to a common velocity.

  14. Density PDFs of diffuse gas in the Milky Way

    NASA Astrophysics Data System (ADS)

    Berkhuijsen, E. M.; Fletcher, A.

    2012-09-01

    The probability distribution functions (PDFs) of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| ≥ 5° are considered separately. Our results provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of density structure in the diffuse gas.
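A lognormal density PDF can be checked by fitting the moments of log n. A sketch on synthetic line-of-sight densities; the parameters below are assumed for illustration, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(4)
# toy line-of-sight average densities (cm^-3), drawn lognormally as found for the DIG
n = rng.lognormal(mean=np.log(0.05), sigma=0.6, size=5000)

# fit the lognormal by taking moments of log(n)
mu_hat, sigma_hat = np.log(n).mean(), np.log(n).std()

def lognormal_pdf(x, mu, sigma):
    # PDF implied by the fitted moments
    return np.exp(-((np.log(x) - mu) ** 2) / (2 * sigma**2)) / (x * sigma * np.sqrt(2 * np.pi))
```

A turbulent, multiplicative cascade of density perturbations motivates exactly this form: products of independent factors give a normal distribution in log n.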

  15. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about the factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm²), distribution of benthic items, and item density affected the bias and precision of density estimates, the detection probability of items, and the time costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and precision increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500-1,000 items/m²). Detection probability (the probability of capturing ≥1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single optimal sample size, but we provide information for researchers and managers to derive optimal sample sizes depending on their research goals and environmental conditions.
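The simulation design (randomly placed items, circular core samplers, density and detection-probability estimates) can be sketched as follows; the plot size, item density, and core area are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
density = 2000.0                                   # true items per m^2
pts = rng.uniform(0.0, 10.0, size=(int(100 * density), 2))  # random items in a 10 m x 10 m plot

core_area_m2 = 45e-4                               # a 45 cm^2 core sampler
r = np.sqrt(core_area_m2 / np.pi)                  # its radius in metres

centers = rng.uniform(r, 10.0 - r, size=(50, 2))   # 50 core samples inside the plot
counts = np.array([(np.linalg.norm(pts - c, axis=1) <= r).sum() for c in centers])

density_hat = counts.mean() / core_area_m2         # estimated items/m^2
detection_prob = (counts >= 1).mean()              # share of cores capturing >= 1 item
```

With random (Poisson-like) placement the expected count per core is density × core area ≈ 9 items, so detection probability is near 1; clumped distributions lower it, as the simulations found.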

  16. Eruption dynamics of Hawaiian-style fountains: The case study of episode 1 of the Kilauea Iki 1959 eruption

    USGS Publications Warehouse

    Stovall, W.K.; Houghton, Bruce F.; Gonnermann, H.; Fagents, S.A.; Swanson, D.A.

    2011-01-01

    Hawaiian eruptions are characterized by fountains of gas and ejecta, sustained for hours to days, that reach tens to hundreds of meters in height. Quantitative analysis of the pyroclastic products of the 1959 eruption of Kīlauea Iki, Kīlauea volcano, Hawai'i, provides insights into the processes occurring during typical Hawaiian fountaining activity. This short-lived but powerful eruption comprised 17 fountaining episodes and produced a cone and tephra blanket as well as a lava lake that interacted with the vent and fountain during all but the first episode of the eruption, the focus of this paper. Microtextural analysis of the Hawaiian fountaining products from this opening episode is used to infer vesiculation processes within the fountain and shallow conduit. Vesicle number densities for all clasts are high (10⁶-10⁷ cm⁻³). Post-fragmentation expansion of bubbles within the thermally insulated fountain overprints the pre-fragmentation bubble populations, leading to a reduction in vesicle number density and an increase in mean vesicle size. However, the early quenched rims of some clasts, with vesicle number densities approaching 10⁷ cm⁻³, are probably a valid approximation of magma conditions near fragmentation. The extent of clast evolution from low vesicle-to-melt ratio and correspondingly high vesicle number density to higher vesicle-to-melt ratio and lower vesicle number density corresponds to the length of residence time within the fountain.

  17. DLA based compressed sensing for high resolution MR microscopy of neuronal tissue

    NASA Astrophysics Data System (ADS)

    Nguyen, Khieu-Van; Li, Jing-Rebecca; Radecki, Guillaume; Ciobanu, Luisa

    2015-10-01

    In this work we present the implementation of compressed sensing (CS) on a high field preclinical scanner (17.2 T) using an undersampling trajectory based on the diffusion limited aggregation (DLA) random growth model. When applied to a library of images this approach performs better than the traditional undersampling based on the polynomial probability density function. In addition, we show that the method is applicable to imaging live neuronal tissues, allowing significantly shorter acquisition times while maintaining the image quality necessary for identifying the majority of neurons via an automatic cell segmentation algorithm.

  18. Evaluation of lightning accommodation systems for wind-driven turbine rotors

    NASA Technical Reports Server (NTRS)

    Bankaitis, H.

    1982-01-01

    Wind-driven turbine generators are being evaluated as an alternative source of electric energy. Areas of favorable location for wind-driven turbines (high wind density) coincide with areas of high incidence of thunderstorm activity. These locations, coupled with rotor blades 30 m or more in diameter, make wind-driven turbine blades probable terminations for lightning strikes. Several candidate lightning accommodation systems for composite-structural-material blades were designed, and their effectiveness was evaluated by subjecting the systems to simulated lightning strikes. The test data were analyzed, and the system designs were reviewed on the basis of the analysis.

  19. Search for Point Sources of Ultra-High-Energy Cosmic Rays above 4.0 × 10¹⁹ eV Using a Maximum Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.

    2005-04-01

    We present the results of a search for cosmic-ray point sources at energies in excess of 4.0 × 10¹⁹ eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
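An unbinned maximum likelihood ratio test of this kind, using a per-event probability density rather than a fixed angular bin, can be sketched as follows. The Gaussian point-spread function, the uniform background, and all parameters are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n_ev, window = 200, 10.0                  # events in a 10 x 10 degree window (toy, isotropic)
ev = rng.uniform(0.0, window, size=(n_ev, 2))

src, sigma = np.array([5.0, 5.0]), 0.6    # candidate source position, angular resolution (deg)

def signal_pdf(x):
    # per-event Gaussian point-spread density around the candidate source
    d2 = ((x - src) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

bkg = 1.0 / window**2                     # uniform background density

def log_L(ns):
    # unbinned mixture likelihood in the number of source events n_s
    f = ns / n_ev
    return np.log(f * signal_pdf(ev) + (1 - f) * bkg).sum()

ns_grid = np.linspace(0.0, 40.0, 401)
ll = np.array([log_L(ns) for ns in ns_grid])
lam = 2.0 * (ll.max() - log_L(0.0))       # likelihood ratio test statistic vs. no source
```

Since the data here are isotropic, lam is typically small; a statistically significant source would produce a large value of the test statistic at its position.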

  20. Stochastic Geomorphology: A Framework for Creating General Principles on Erosion and Sedimentation in River Basins (Invited)

    NASA Astrophysics Data System (ADS)

    Benda, L. E.

    2009-12-01

    Stochastic geomorphology refers to the interaction of the stochastic field of sediment supply with hierarchically branching river networks where erosion, sediment flux and sediment storage are described by their probability densities. There are a number of general principles (hypotheses) that stem from this conceptual and numerical framework that may inform the science of erosion and sedimentation in river basins. Rainstorms and other perturbations, characterized by probability distributions of event frequency and magnitude, stochastically drive sediment influx to channel networks. The frequency-magnitude distribution of sediment supply, which is typically skewed, reflects strong interactions among climate, topography, vegetation, and geotechnical controls that vary between regions; the distribution varies systematically with basin area and the spatial pattern of erosion sources. Probability densities of sediment flux and storage evolve from more to less skewed forms downstream in river networks due to the convolution of the population of sediment sources in a watershed that should vary with climate, network patterns, topography, spatial scale, and degree of erosion asynchrony. The sediment flux and storage distributions are also transformed downstream due to diffusion, storage, interference, and attrition. In stochastic systems, the characteristically pulsed sediment supply and transport can create translational or stationary-diffusive valley and channel depositional landforms, the geometries of which are governed by sediment flux-network interactions. Episodic releases of sediment to the network can also drive a system memory reflected in a Hurst Effect in sediment yields and thus in sedimentological records. Similarly, discrete events of punctuated erosion on hillslopes can lead to altered surface and subsurface properties of a population of erosion source areas that can echo through time and affect subsequent erosion and sediment flux rates. 
Spatial patterns of probability densities have implications for the frequency and magnitude of sediment transport and storage and thus for the formation of alluvial and colluvial landforms throughout watersheds. For instance, the combination and interference of probability densities of sediment flux at confluences creates patterns of riverine heterogeneity, including standing waves of sediment with associated age distributions of deposits that can vary from younger to older depending on network geometry and position. Although the watershed world of probability densities is rarified and typically confined to research endeavors, it has real world implications for the day-to-day work on hillslopes and in fluvial systems, including measuring erosion, sediment transport, mapping channel morphology and aquatic habitats, interpreting deposit stratigraphy, conducting channel restoration, and applying environmental regulations. A question for the geomorphology community is whether the stochastic framework is useful for advancing our understanding of erosion and sedimentation and whether it should stimulate research to further develop, refine and test these and other principles. For example, a changing climate should lead to shifts in probability densities of erosion, sediment flux, storage, and associated habitats and thus provide a useful index of climate change in earth science forecast models.
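The downstream evolution from more to less skewed probability densities, driven by convolution over many asynchronous sources, can be illustrated with synthetic supply distributions; the lognormal form and its parameters are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

def skew(x):
    # sample skewness of a distribution of fluxes
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

# sediment influx from a single source: strongly right-skewed (toy lognormal)
one = rng.lognormal(0.0, 1.2, 100_000)
# flux far downstream: convolution (sum) over 50 independent, asynchronous sources
many = rng.lognormal(0.0, 1.2, (50, 100_000)).sum(axis=0)

skew_one, skew_many = skew(one), skew(many)
```

Summing independent skewed sources pulls the distribution toward a symmetric (central-limit) form, which is the mechanism behind the downstream change in the flux and storage PDFs described above.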

  1. Modeling turbulent/chemistry interactions using assumed pdf methods

    NASA Technical Reports Server (NTRS)

    Gaffney, R. L, Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.

    1992-01-01

    Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
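A minimal sketch of the assumed-PDF averaging of a rate coefficient, with illustrative Arrhenius parameters and an assumed clipped-Gaussian temperature PDF (none of these values come from the study):

```python
import numpy as np

def arrhenius(T, A=1.0e8, Ea=1.5e5, R=8.314):
    """Arrhenius rate coefficient k(T) = A * exp(-Ea / (R T))."""
    return A * np.exp(-Ea / (R * T))

def mean_rate_gaussian(T_mean, T_rms, n=20001):
    """<k> = sum_i k(T_i) p(T_i): Gaussian temperature PDF clipped to
    +/- 4 sigma and renormalized into discrete quadrature weights."""
    T = np.linspace(T_mean - 4 * T_rms, T_mean + 4 * T_rms, n)
    p = np.exp(-0.5 * ((T - T_mean) / T_rms) ** 2)
    p /= p.sum()
    return (arrhenius(T) * p).sum()

k_mean_T = arrhenius(1200.0)               # coefficient at the mean temperature
k_pdf = mean_rate_gaussian(1200.0, 120.0)  # coefficient averaged over fluctuations
print(k_pdf / k_mean_T)                    # > 1: fluctuations raise the mean rate
```

Because the Arrhenius expression is strongly convex in temperature, even a 10% RMS fluctuation markedly increases the mean coefficient, consistent with the parametric study described above.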

  2. NDE of structural ceramics

    NASA Technical Reports Server (NTRS)

    Klima, S. J.; Vary, A.

    1986-01-01

    Radiographic, ultrasonic, scanning laser acoustic microscopy (SLAM), and thermo-acoustic microscopy techniques were used to characterize silicon nitride and silicon carbide modulus-of-rupture test specimens in various stages of fabrication. Conventional and microfocus X-ray techniques were found capable of detecting minute high density inclusions in as-received powders, green compacts, and fully densified specimens. Significant density gradients in sintered bars were observed by radiography, ultrasonic velocity, and SLAM. Ultrasonic attenuation was found sensitive to microstructural variations due to grain and void morphology and distribution. SLAM was also capable of detecting voids, inclusions and cracks in finished test bars. Consideration is given to the potential for applying thermo-acoustic microscopy techniques to green and densified ceramics. The detection probability statistics and some limitations of radiography and SLAM also are discussed.

  3. First Volcanological-Probabilistic Pyroclastic Density Current and Fallout Hazard Map for Campi Flegrei and Somma Vesuvius Volcanoes.

    NASA Astrophysics Data System (ADS)

    Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.

    2005-05-01

    Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma-Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents, and fallout events that occurred in the volcanological history of the two volcanic areas, and of the estimated probability of each type of event, matrices of input parameters for numerical simulation have been constructed. The multi-dimensional input matrices include the main parameters controlling pyroclast transport, deposition, and dispersion, as well as the set of possible eruptive vents used in the simulation program. The probabilistic hazard maps give, for each point of the Campanian area, the yearly probability of being affected by a given type of event with a given intensity and the resulting damage. Probabilities of a few events in one thousand years are typical of most areas within a range of about 10 km around the volcanoes, including Naples. The results provide constraints for emergency plans in the Neapolitan area.
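Yearly hazard probabilities like those on the maps translate into exposure over a planning horizon through a simple occurrence model; the sketch below assumes stationary Poisson occurrences with an illustrative rate, not a value from the study:

```python
import math

def prob_at_least_one(rate_per_year, years):
    """Poisson occurrence model: probability of at least one event within a
    time window, given a mean yearly rate (illustrative rate only)."""
    return 1.0 - math.exp(-rate_per_year * years)

# "A few events in one thousand years" corresponds to a rate near 0.003/yr:
print(prob_at_least_one(0.003, 50))   # exposure over a 50-year planning horizon
```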

  4. Coulomb Impurity Potential RbCl Quantum Pseudodot Qubit

    NASA Astrophysics Data System (ADS)

    Ma, Xin-Jun; Qi, Bin; Xiao, Jing-Lin

    2015-08-01

    By employing a variational method of the Pekar type, we study the eigenenergies and the corresponding eigenfunctions of the ground and first-excited states of an electron strongly coupled to LO phonons in a RbCl quantum pseudodot (QPD) with a hydrogen-like impurity at the center. This QPD system may be used as a two-level quantum qubit. Expressions for the electron's probability density as a function of time and the coordinates, and for the oscillation period as a function of the Coulombic impurity potential and the polaron radius, have been derived. The results indicate ① that the probability density of the electron oscillates in the QPD with a certain period, ② that due to the presence of the asymmetrical potential in the z direction of the RbCl QPD, the electron probability density shows a double-peak configuration, whereas there is only one peak if the confinement is a two-dimensional symmetric structure in the xy plane of the QPD, and ③ that the oscillation period is a decreasing function of the Coulombic impurity potential, whereas it is an increasing function of the polaron radius.

  5. Hunting high and low: disentangling primordial and late-time non-Gaussianity with cosmic densities in spheres

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.

    2018-03-01

    Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.

  6. Bluegill growth as modified by plant density: an exploration of underlying mechanisms

    USGS Publications Warehouse

    Savino, Jacqueline F.; Marschall, Elizabeth A.; Stein, Roy A.

    1992-01-01

    Bluegill (Lepomis macrochirus) growth varies inconsistently with plant density. In laboratory and field experiments, we explored mechanisms underlying bluegill growth as a function of plant and invertebrate density. In the laboratory, bluegills captured more chironomids (Chironomus riparius) than damselflies (Enallagma spp. and Ischnura spp.), but energy intake per time spent searching did not differ between damselfly and chironomid treatments. From laboratory data, we described prey encounter rates as functions of plant and invertebrate density. In Clark Lake, Ohio, we created 0.05-ha mesocosms of inshore vegetation to generate macrophyte densities of 125, 270, and 385 stems/m2 of Potamogeton and Ceratophyllum and added 46-mm bluegills (1/m2). In these mesocosms, invertebrate density increased as a function of macrophyte density. Combining this function with encounter rate functions derived from laboratory data, we predicted that bluegill growth should peak at a high macrophyte density, greater than 1000 stems/m2, even though growth should change only slightly beyond 100 stems/m2. Consistent with our predictions, bluegills did not grow differentially, nor did their use of different prey taxa differ, across macrophyte densities in the field. Bluegills preferred chironomid pupae, which were relatively few in number but vulnerable to predation, whereas the more cryptic chironomid larvae, which were associated with vegetation but relatively abundant, were eaten as encountered. Bluegills avoided physid snails. Contrary to previous work, vegetation did not influence growth or diet of bluegills beyond relatively low densities, owing to the interaction between capture probabilities and macroinvertebrate densities.

  7. A Riemannian framework for orientation distribution function computing.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2009-01-01

    Compared with Diffusion Tensor Imaging (DTI), High Angular Resolution Diffusion Imaging (HARDI) can better explore the complex microstructure of white matter. The Orientation Distribution Function (ODF) is used to describe the probability of the fiber direction. The Fisher information metric has been constructed for probability density families in Information Geometry theory, and it has been successfully applied to tensor computing in DTI. In this paper, we present a state-of-the-art Riemannian framework for ODF computing based on Information Geometry and sparse representation of orthonormal bases. In this Riemannian framework, the exponential map, logarithmic map, and geodesic have closed forms, and the weighted Frechet mean exists uniquely on this manifold. We also propose a novel scalar measurement, named Geometric Anisotropy (GA), which is the Riemannian geodesic distance between the ODF and the isotropic ODF. The Renyi entropy H1/2 of the ODF can be computed from the GA. Moreover, we present an Affine-Euclidean framework and a Log-Euclidean framework so that we can work in a Euclidean space. As an application, Lagrange interpolation on ODF fields is proposed based on the weighted Frechet mean. We validate our methods in synthetic and real data experiments. Compared with existing Riemannian frameworks on ODF, our framework is model-free. The estimation of the parameters, i.e., the Riemannian coordinates, is robust and linear. Moreover, it should be noted that our theoretical results can be used for any probability density function (PDF) under an orthonormal basis representation.
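In the square-root representation a PDF is a point on the unit Hilbert sphere, so the closed-form geodesic distance (and hence a GA-like anisotropy measure) reduces to an arc length given by the Bhattacharyya coefficient. A discretized sketch, with an illustrative sampling and test ODFs of my own construction:

```python
import numpy as np

def fisher_rao_distance(p, q):
    """Geodesic distance between two discretized PDFs in the square-root
    representation: the arc length arccos(<sqrt(p), sqrt(q)>) on the unit
    Hilbert sphere (the inner product is the Bhattacharyya coefficient)."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    bc = np.sqrt(p * q).sum()
    return float(np.arccos(np.clip(bc, -1.0, 1.0)))

n = 64                                        # sample directions on the sphere
iso = np.full(n, 1.0 / n)                     # isotropic ODF
peaked = np.exp(np.linspace(0.0, 5.0, n))     # a sharply anisotropic ODF
peaked /= peaked.sum()

print(fisher_rao_distance(peaked, iso))       # a GA-like anisotropy value
print(fisher_rao_distance(iso, iso))          # identical ODFs are distance 0
```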

  8. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multiple Conditions

    DTIC Science & Technology

    2009-03-01

    United States Air Force, Department of Defense, or the United States Government. AFIT/GE/ENG/09-23, Low Probability of Intercept Waveforms via… D: random variable governing the distribution of dither values; p_D(t): probability density function of the… potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath. 1.3 Thesis

  9. Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions

    DTIC Science & Technology

    2009-03-01

    United States Air Force, Department of Defense, or the United States Government. AFIT/GE/ENG/09-23, Low Probability of Intercept Waveforms via… D: random variable governing the distribution of dither values; p_D(t): probability density function of the… potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath. 1.3 Thesis

  10. Presence-nonpresence surveys of golden-cheeked warblers: detection, occupancy and survey effort

    USGS Publications Warehouse

    Watson, C.A.; Weckerly, F.W.; Hatfield, J.S.; Farquhar, C.C.; Williamson, P.S.

    2008-01-01

    Surveys to detect the presence or absence of endangered species may not consistently cover an area, account for imperfect detection, or consider that detection and species presence at sample units may change within a survey season. We evaluated a detection-nondetection survey method for the federally endangered golden-cheeked warbler (GCWA) Dendroica chrysoparia. Three study areas were selected across the breeding range of GCWA in central Texas. Within each area, 28-36 detection stations were placed 200 m apart. Each detection station was surveyed nine times during the breeding season in 2 consecutive years. Surveyors remained up to 8 min at each detection station recording GCWA detected by sight or sound. To assess the potential influence of environmental covariates (e.g. slope, aspect, canopy cover, study area) on detection and occupancy and possible changes in occupancy and detection probabilities within breeding seasons, 30 models were analyzed. Using information-theoretic model selection procedures, we found that detection probabilities and occupancy varied among study areas and within breeding seasons. Detection probabilities ranged from 0.20 to 0.80 and occupancy ranged from 0.56 to 0.95. Because study areas with high detection probabilities had high occupancy, a conservative survey effort (erring towards too much surveying) was estimated using the lowest detection probability. We determined that nine surveys of 35 stations were needed to obtain estimates of occupancy with coefficients of variation <20%. Our survey evaluation evidently captured the key environmental variable that influenced bird detection (GCWA density) and accommodated the changes in GCWA distribution throughout the breeding season.
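The survey-effort logic can be sketched with the standard cumulative-detection formula, assuming a constant per-survey detection probability and a closed site (a simplification of the occupancy models fitted in the study):

```python
import math

def p_detect_at_least_once(p, k):
    """Cumulative probability of >= 1 detection in k independent surveys,
    assuming a constant per-survey detection probability p and a closed site."""
    return 1.0 - (1.0 - p) ** k

def surveys_needed(p, target=0.95):
    """Smallest number of surveys whose cumulative detection reaches target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

# At the lowest reported per-survey detection probability (0.20):
print(p_detect_at_least_once(0.20, 9))   # cumulative detection after nine surveys
print(surveys_needed(0.20))              # surveys needed for 95% cumulative detection
```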

  11. Summary of intrinsic and extrinsic factors affecting detection probability of marsh birds

    USGS Publications Warehouse

    Conway, C.J.; Gibbs, J.P.

    2011-01-01

    Many species of marsh birds (rails, bitterns, grebes, etc.) rely exclusively on emergent marsh vegetation for all phases of their life cycle, and many organizations have become concerned about the status and persistence of this group of birds. Yet marsh birds are notoriously difficult to monitor due to their secretive habits. We synthesized the published and unpublished literature and summarized the factors that influence detection probability of secretive marsh birds in North America. Marsh birds are more likely to respond to conspecific than heterospecific calls, and the seasonal peak in vocalization probability varies among co-existing species. The effectiveness of morning versus evening surveys varies among species and locations. Vocalization probability appears to be positively correlated with density in breeding Virginia Rails (Rallus limicola), Soras (Porzana carolina), and Clapper Rails (Rallus longirostris). Movement of birds toward the broadcast source creates biases when using count data from call-broadcast surveys to estimate population density. Ambient temperature, wind speed, cloud cover, and moon phase affected detection probability in some, but not all, studies. Better estimates of detection probability are needed. We provide recommendations that would help improve future marsh bird survey efforts and a list of 14 priority information and research needs that represent gaps in our current knowledge where future resources are best directed. © Society of Wetland Scientists 2011.

  12. Probability density function formalism for optical coherence tomography signal analysis: a controlled phantom study.

    PubMed

    Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex

    2016-06-15

    The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
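The low-density departure from Gaussian (Rayleigh-amplitude) speckle can be reproduced by sampling the K distribution as a compound of gamma texture and exponential speckle intensity; the shape parameters below are illustrative, not fitted to the phantom data:

```python
import numpy as np

rng = np.random.default_rng(1)

def k_distributed_amplitude(alpha, size):
    """Sample K-distributed amplitudes as a compound process: exponential
    speckle intensity modulated by a unit-mean gamma texture of shape alpha
    (alpha behaves like an effective number of scatterers per voxel)."""
    texture = rng.gamma(shape=alpha, scale=1.0 / alpha, size=size)
    speckle = rng.exponential(scale=1.0, size=size)
    return np.sqrt(texture * speckle)

def tail_weight(a):
    """Normalized 99th percentile: a crude indicator of tail heaviness."""
    return np.percentile(a, 99) / np.mean(a)

low_density = k_distributed_amplitude(alpha=1.0, size=400_000)     # few scatterers
high_density = k_distributed_amplitude(alpha=100.0, size=400_000)  # ~Rayleigh limit

print(tail_weight(low_density), tail_weight(high_density))
```

Small alpha (few effective scatterers) gives a markedly heavier amplitude tail; large alpha recovers the Gaussian-speckle limit, matching the density dependence reported in the Letter.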

  13. Single-molecule stochastic times in a reversible bimolecular reaction

    NASA Astrophysics Data System (ADS)

    Keller, Peter; Valleriani, Angelo

    2012-08-01

    In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudo-bimolecular reactions in biology, such as those involving molecular motors. We derive the exact probability density for the stochastic waiting time that a molecule of species A needs until the reaction with a molecule of species B takes place. We perform this computation taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic treatment and in the approximate one.
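The effect can be mimicked with a toy model in which the number of B molecules varies between realizations (Poisson-distributed here, purely for illustration; the paper treats the fluctuations exactly): the resulting waiting-time density is a mixture of exponentials and is overdispersed relative to the single mass-action exponential.

```python
import numpy as np

rng = np.random.default_rng(2)

def waiting_times(mean_nB, k=1.0, samples=200_000):
    """Waiting time of a single A molecule until reaction when the number of
    B molecules fluctuates across realizations: a mixture of exponentials
    with rates k * nB, conditioned on nB >= 1."""
    nB = rng.poisson(mean_nB, size=samples)
    nB = nB[nB > 0]                        # a reaction requires at least one B
    return rng.exponential(scale=1.0 / (k * nB))

t_exact = waiting_times(mean_nB=2.0)
# Mass-action approximation: one exponential at the mean rate k * <nB>.
t_mass_action = rng.exponential(scale=0.5, size=len(t_exact))

# A mixture of exponentials has coefficient of variation > 1; one exponential has CV = 1:
print(np.std(t_exact) / np.mean(t_exact), np.std(t_mass_action) / np.mean(t_mass_action))
```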

  14. Probability density function approach for compressible turbulent reacting flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.

    1994-01-01

    The objective of the present work is to extend the probability density function (PDF) turbulence model to compressible reacting flows. The probability density function of the species mass fractions and enthalpy is obtained by solving a PDF evolution equation using a Monte Carlo scheme. The PDF solution procedure is coupled with a compressible finite-volume flow solver, which provides the velocity and pressure fields. A modeled PDF equation for compressible flows, capable of treating flows with shock waves and suited to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed. Two supersonic diffusion flames are studied using the proposed PDF model, and the results are compared with experimental data; marked improvements over solutions without the PDF model are observed.

  15. Thermal nanostructure: An order parameter multiscale ensemble approach

    NASA Astrophysics Data System (ADS)

    Cheluvaraja, S.; Ortoleva, P.

    2010-02-01

    Deductive all-atom multiscale techniques imply that many nanosystems can be understood in terms of the slow dynamics of order parameters that coevolve with the quasiequilibrium probability density for rapidly fluctuating atomic configurations. The result of this multiscale analysis is a set of stochastic equations for the order parameters whose dynamics is driven by thermal-average forces. We present an efficient algorithm for sampling atomistic configurations in viruses and other supramillion atom nanosystems. This algorithm allows for sampling of a wide range of configurations without creating an excess of high-energy, improbable ones. It is implemented and used to calculate thermal-average forces. These forces are then used to search the free-energy landscape of a nanosystem for deep minima. The methodology is applied to thermal structures of Cowpea chlorotic mottle virus capsid. The method has wide applicability to other nanosystems whose properties are described by the CHARMM or other interatomic force field. Our implementation, denoted SIMNANOWORLD™, achieves calibration-free nanosystem modeling. Essential atomic-scale detail is preserved via a quasiequilibrium probability density while overall character is provided via predicted values of order parameters. Applications from virology to the computer-aided design of nanocapsules for delivery of therapeutic agents and of vaccines for nonenveloped viruses are envisioned.

  16. Evaluation of joint probability density function models for turbulent nonpremixed combustion with complex chemistry

    NASA Technical Reports Server (NTRS)

    Smith, N. S. A.; Frolov, S. M.; Bowman, C. T.

    1996-01-01

    Two types of mixing sub-models are evaluated in connection with a joint-scalar probability density function method for turbulent nonpremixed combustion. Model calculations are made and compared to simulation results for homogeneously distributed methane-air reaction zones mixing and reacting in decaying turbulence within a two-dimensional enclosed domain. The comparison is arranged to ensure that both the simulation and model calculations (a) make use of exactly the same chemical mechanism, (b) do not involve non-unity Lewis number transport of species, and (c) are free from radiation loss. The modified Curl mixing sub-model was found to provide superior predictive accuracy over the simple relaxation-to-mean sub-model in the case studied. Accuracy to within 10-20% was found for global means of major species and temperature; however, nitric oxide prediction accuracy was lower and highly dependent on the choice of mixing sub-model. Both mixing sub-models were found to produce non-physical mixing behavior for mixture fractions removed from the immediate reaction zone. A suggestion for a further modified Curl mixing sub-model is made in connection with earlier work done in the field.

  17. Predicting electroporation of cells in an inhomogeneous electric field based on mathematical modeling and experimental CHO-cell permeabilization to propidium iodide determination.

    PubMed

    Dermol, Janja; Miklavčič, Damijan

    2014-12-01

    High-voltage electric pulses cause electroporation of the cell membrane; consequently, the flow of molecules across the membrane increases. In our study we investigated the possibility of predicting the percentage of electroporated cells in an inhomogeneous electric field on the basis of experimental results obtained when cells were exposed to a homogeneous electric field. We compared and evaluated different mathematical models previously suggested by other authors for interpolation of the results (symmetric sigmoid, asymmetric sigmoid, hyperbolic tangent, and Gompertz curve). We investigated the density of the cells and observed that it has the most significant effect on the electroporation of the cells, while all four mathematical models yielded similar results. We were able to predict the electroporation of cells exposed to an inhomogeneous electric field based on mathematical modeling, using mathematical formulations of electroporation probability obtained experimentally from exposure of cells at the same density to a homogeneous field. Models describing cell electroporation probability can be useful for the development and presentation of treatment plans for electrochemotherapy and non-thermal irreversible electroporation. Copyright © 2014 Elsevier B.V. All rights reserved.
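The interpolation models named above share a sigmoidal form; a sketch of two of them, with illustrative parameters rather than the study's fitted values:

```python
import math

def symmetric_sigmoid(E, E50, slope):
    """Logistic electroporation-probability curve versus local field E; the
    midpoint E50 and slope are illustrative, not fitted values."""
    return 1.0 / (1.0 + math.exp(-(E - E50) / slope))

def gompertz(E, a, b):
    """Gompertz curve: asymmetric, approaching 0 and 1 at different rates."""
    return math.exp(-math.exp(a - b * E))

for E in (200.0, 400.0, 600.0, 800.0):   # hypothetical field strengths, V/cm
    print(E, symmetric_sigmoid(E, 500.0, 80.0), gompertz(E, 5.0, 0.01))
```

Both map field strength to a probability in [0, 1]; the asymmetric Gompertz form rises faster on one side of its midpoint, which is the distinction the study evaluates.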

  18. EMG Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface

    NASA Astrophysics Data System (ADS)

    Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai

    To develop an advanced muscle-computer interface (MCI) based on the surface electromyography (EMG) signal, amplitude estimates of muscle activity, i.e., the root mean square (RMS) and the mean absolute value (MAV), are widely used as convenient and accurate inputs for a recognition system. Their classification performance is comparable to that of advanced methods with high computational cost, e.g., the wavelet transform. However, the signal-to-noise ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signal, i.e., Gaussian or Laplacian. The PDF of upper-limb motions associated with EMG signals is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist, and forearm motions. The results show that on average the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For amplitude estimation, MAV has a higher SNR, defined as the mean feature divided by its fluctuation, than RMS. Because RMS and MAV discriminate equally well in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
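The SNR comparison can be reproduced with synthetic Laplacian samples standing in for EMG during dynamic contraction; the window size and window count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mav(x):
    return np.mean(np.abs(x))

def snr(estimator, sampler, n_windows=2000, window=500):
    """SNR of an amplitude estimator: the mean feature value across analysis
    windows divided by its standard deviation (its fluctuation)."""
    feats = np.array([estimator(sampler(window)) for _ in range(n_windows)])
    return feats.mean() / feats.std()

# Laplacian-distributed samples as a stand-in for the EMG signal:
laplacian = lambda n: rng.laplace(0.0, 1.0, n)

print(snr(rms, laplacian), snr(mav, laplacian))
```

For Laplacian data the MAV feature fluctuates less relative to its mean than RMS does, which is the basis of the paper's recommendation.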

  19. Probability density function of a puff dispersing from the wall of a turbulent channel

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc; Papavassiliou, Dimitrios

    2015-11-01

    The study of the dispersion of passive contaminants in turbulence has proved helpful in understanding fundamental heat and mass transfer phenomena. Many simulation and experimental studies have been carried out to locate and track the motions of scalar markers in a flow. One method is to combine Direct Numerical Simulation (DNS) and Lagrangian Scalar Tracking (LST) to record the locations of markers. While this has proved useful, high computational cost remains a concern. In this study, we develop a model that can reproduce results obtained by DNS and LST for turbulent flow. Puffs of markers with different Schmidt numbers were released into a flow field at a frictional Reynolds number of 150. The point of release was at the channel wall, so that both diffusion and convection contribute to the puff dispersion pattern, defining different stages of dispersion. Based on outputs from DNS and LST, we seek the most suitable and feasible probability density function (PDF) to represent the distribution of markers in the flow field. The PDF would play a significant role in predicting heat and mass transfer in wall turbulence, and would prove helpful where DNS and LST are not always available.
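As a purely diffusive baseline for such a PDF, a puff released at a reflecting wall has a half-space (folded) Gaussian wall-normal distribution; this sketch ignores the convection that the DNS/LST data would add, and its parameters are illustrative:

```python
import math

def wall_puff_pdf(y, t, D):
    """Wall-normal PDF of a puff released at the wall under pure diffusion
    with a reflecting wall: a half-space (folded) Gaussian. A diffusive
    baseline only; near-wall convection skews the true PDF."""
    return math.exp(-y * y / (4.0 * D * t)) / math.sqrt(math.pi * D * t)

# Normalization check over y >= 0 by rectangle-rule quadrature:
D, t, dy = 0.1, 2.0, 0.001
total = sum(wall_puff_pdf(i * dy, t, D) * dy for i in range(20_000))
print(total)   # should be close to 1
```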

  20. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  1. Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

    PubMed

    Popescu, Viorel D; de Valpine, Perry; Sweitzer, Rick A

    2014-04-01

    Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
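The first evaluation step (relating detection probability to nearby relocations) has the shape of a binomial GLM with a logit link; a sketch with illustrative coefficients, not the fitted values from the study:

```python
import numpy as np

def detection_probability(relocations, beta0=-2.0, beta1=0.15):
    """Binomial GLM with a logit link: camera-trap detection probability as a
    function of the number of nearby telemetry relocations. The coefficients
    here are illustrative placeholders."""
    eta = beta0 + beta1 * np.asarray(relocations, dtype=float)
    return 1.0 / (1.0 + np.exp(-eta))

# More relocations near a camera raise the predicted detection probability:
print(detection_probability([0, 5, 20]))
```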

  2. A Large-Scale Super-Structure at z=0.65 in the UKIDSS Ultra-Deep Survey Field

    NASA Astrophysics Data System (ADS)

    Galametz, Audrey; Candels Clustering Working Group

    2017-07-01

    In hierarchical structure formation scenarios, galaxies accrete along high-density filaments. Superclusters represent the largest density enhancements in the cosmic web, with scales of 100 to 200 Mpc. As the largest components of large-scale structure (LSS), they are very powerful tools to constrain cosmological models. Since they also span a wide range of density, from infalling groups to high-density cluster cores, they are also the perfect laboratory to study the influence of environment on galaxy evolution. I will present a newly discovered large-scale structure at z=0.65 in the UKIDSS UDS field. Although statistically predicted, the presence of such a structure in UKIDSS, one of the most extensively covered and studied extragalactic fields, remains serendipitous. Our follow-up confirmed more than 15 group members, including at least three galaxy clusters with M200 ≥ 10^14 Msol. Deep spectroscopy of the quiescent core galaxies reveals that the most massive structure knots are at very different formation stages, with a range of red sequence properties. Statistics allow us to map formation age across the structure's denser knots and identify where quenching is most probably occurring across the LSS. Spectral diagnostic analysis also reveals an interesting population of transition galaxies that we suspect are transforming from star-forming to quiescent galaxies.

  3. Electrostatic-probe measurements of plasma parameters for two reentry flight experiments at 25000 feet per second

    NASA Technical Reports Server (NTRS)

    Jones, W. L., Jr.; Cross, A. E.

    1972-01-01

    Unique plasma diagnostic measurements at high altitudes from two geometrically similar blunt-body reentry spacecraft using electrostatic probe rakes are presented. The probes measured the positive ion density profiles (shape and magnitude) during the two flights. The probe measurements were made at eight discrete points (1 cm to 7 cm) from the vehicle surface in the aft flow field of the spacecraft over the altitude range of 85.3 to 53.3 km (280,000 to 175,000 ft), with measured densities of 10 to the 8th power to 10 to the 12th power electrons/cu cm, respectively. Maximum reentry velocity for each spacecraft was approximately 7620 meters/second (25,000 ft/sec). In the first flight experiment, water was periodically injected into a flow field which was contaminated by ablation products from the spacecraft nose region. In contrast, the nonablative nose of the second spacecraft minimized flow field contamination. Comparisons of the probe-measured density profiles with theoretical calculations are presented, with discussion as to the probable cause of significant disagreement. Also discussed are the correlation of probe measurements with vehicle angle-of-attack motions and the good high-altitude agreement between electron densities inferred from the probe measurements, VHF antenna measurements, and microwave reflectometer diagnostic measurements.

  4. Buoyancy-driven melt segregation in the earth's moon. I - Numerical results

    NASA Technical Reports Server (NTRS)

    Delano, J. W.

    1990-01-01

    The densities of lunar mare magmas have been estimated at liquidus temperatures for pressures from 0 to 47 kbar (4.7 GPa; center of the moon) using a third-order Birch-Murnaghan equation and compositionally dependent parameters from Lange and Carmichael (1987). Results on primary magmatic compositions represented by pristine volcanic glasses suggest that the density contrast between very-high-Ti melts and their liquidus olivines may approach zero at pressures of about 25 kbar (2.5 GPa). Since this is the pressure regime of the mantle source regions for these magmas, a compositional limit of eruptability for mare liquids may exist that is similar to the highest-Ti melt yet observed among the lunar samples. Although the moon may have generated magmas having greater than 16.4 wt pct TiO2, those melts would probably not have reached the lunar surface due to their high densities, and may have even sunk deeper into the moon's interior as negatively buoyant diapirs. This process may have been important for assimilative interactions in the lunar mantle. The phenomenon of melt/solid density crossover may therefore occur not only in large terrestrial-type objects but also in small objects where, despite low pressures, the range of melt compositions is extreme.
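A worked sketch of the third-order Birch-Murnaghan equation of state used above, with illustrative (assumed) melt parameters rather than the paper's fitted values:

```python
def birch_murnaghan_pressure(rho, rho0, K0, K0p):
    """Third-order Birch-Murnaghan EOS: pressure (GPa) at density rho, given
    the zero-pressure density rho0, bulk modulus K0 (GPa), and K0' = dK/dP."""
    x = (rho / rho0) ** (1.0 / 3.0)       # (V0/V)^(1/3)
    f = x ** 7 - x ** 5
    return 1.5 * K0 * f * (1.0 + 0.75 * (K0p - 4.0) * (x ** 2 - 1.0))

# Illustrative silicate-melt parameters (assumed, not the paper's values):
rho0, K0, K0p = 3000.0, 20.0, 5.0         # kg/m^3, GPa, dimensionless
print(birch_murnaghan_pressure(3300.0, rho0, K0, K0p))   # GPa at 10% compression
```

Inverting this relation numerically gives melt density as a function of pressure, which is how a melt-olivine density crossover at depth can be located.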

  5. Composition and structure of the Chironomidae (Insecta: Diptera) community associated with bryophytes in a first-order stream in the Atlantic forest, Brazil.

    PubMed

    Rosa, B F J V; Dias-Silva, M V D; Alves, R G

    2013-02-01

    This study describes the structure of the Chironomidae community associated with bryophytes in a first-order stream located in a biological reserve of the Atlantic Forest, during two seasons. Samples of bryophytes adhering to rocks along a 100-m stretch of the stream were removed with a metal blade, and 200-mL pots were filled with the samples. The numerical density (individuals per gram of dry weight), Shannon's diversity index, Pielou's evenness index, the dominance index (DI), and estimated richness were calculated for each collection period (dry and rainy). Linear regression analysis was employed to test for a correlation between rainfall and individual density and richness. The high numerical density and richness of Chironomidae taxa observed are probably related to the peculiar conditions of the bryophyte habitat. The retention of larvae during periods of higher rainfall contributed to the high density and richness of Chironomidae larvae. The rarefaction analysis showed higher richness in the rainy season, related to the greater retention of food particles. The data from this study show that bryophytes provide stable habitats for the colonization by and refuge of Chironomidae larvae, mainly under conditions of faster water flow and higher precipitation.
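    The community metrics named above follow directly from taxon counts. A minimal sketch of Shannon's diversity index and Pielou's evenness (hypothetical count vectors; the natural-log convention is assumed):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum p_i * ln(p_i) over taxa with nonzero counts."""
    n = sum(counts)
    ps = [c / n for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def pielou(counts):
    """Pielou's evenness J' = H' / ln(S), where S is the number of taxa observed."""
    s = sum(1 for c in counts if c > 0)
    return shannon(counts) / math.log(s) if s > 1 else 0.0
```

Evenness reaches 1 only when all taxa are equally abundant, which is why it complements raw richness in descriptions like the one above.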

  6. The Most Massive Galaxies and Black Holes Allowed by ΛCDM

    NASA Astrophysics Data System (ADS)

    Behroozi, Peter; Silk, Joseph

    2018-04-01

    Given a galaxy's stellar mass, its host halo mass has a lower limit from the cosmic baryon fraction and known baryonic physics. At z > 4, galaxy stellar mass functions place lower limits on halo number densities that approach expected ΛCDM halo mass functions. High-redshift galaxy stellar mass functions can thus place interesting limits on number densities of massive haloes, which are otherwise very difficult to measure. Although halo mass functions at z < 8 are consistent with observed galaxy stellar masses if galaxy baryonic conversion efficiencies increase with redshift, JWST and WFIRST will more than double the redshift range over which useful constraints are available. We calculate maximum galaxy stellar masses as a function of redshift given expected halo number densities from ΛCDM. We apply similar arguments to black holes. If their virial mass estimates are accurate, number density constraints alone suggest that the quasars SDSS J1044-0125 and SDSS J010013.02+280225.8 likely have black hole mass — stellar mass ratios higher than the median z = 0 relation, confirming the expectation from Lauer bias. Finally, we present a public code to evaluate the probability of an apparently ΛCDM-inconsistent high-mass halo being detected given the combined effects of multiple surveys and observational errors.
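    The lower limit on host halo mass rests on a one-line argument: a galaxy's stellar mass cannot exceed the halo's baryon budget times a conversion efficiency. A sketch (f_baryon = 0.16 is an assumed cosmic baryon fraction, and efficiency <= 1 is a free parameter; both are illustrative):

```python
def min_halo_mass(m_star, f_baryon=0.16, efficiency=1.0):
    """Lower limit on host halo mass: stars cannot outweigh
    efficiency * f_baryon of the halo mass (masses in solar units)."""
    return m_star / (f_baryon * efficiency)

def max_stellar_mass(m_halo, f_baryon=0.16, efficiency=1.0):
    """The same bound read the other way: a ceiling on stellar mass in a halo."""
    return efficiency * f_baryon * m_halo
```

Feeding `min_halo_mass` into a ΛCDM halo mass function then yields the maximum allowed number density of such galaxies, which is the quantity the abstract compares against observed stellar mass functions.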

  7. Life Depends upon Two Kinds of Water

    PubMed Central

    Wiggins, Philippa

    2008-01-01

    Background Many well-documented biochemical processes lack a molecular mechanism. Examples are: how ATP hydrolysis and an enzyme contrive to perform work, such as active transport; how peptides are formed from amino acids and DNA from nucleotides; how proteases cleave peptide bonds; how bone mineralises; how enzymes distinguish between sodium and potassium; how chirality of biopolymers was established prebiotically. Methodology/Principal Findings It is shown that involvement of water in all these processes is mandatory, but the water must be of the simplified configuration in which there are only two strengths of water-water hydrogen bonds, and in which these two types of water coexist as microdomains throughout the liquid temperature range. Since they have different strengths of hydrogen bonds, the microdomains differ in all their physical and chemical properties. Solutes partition asymmetrically, generating osmotic pressure gradients which must be compensated for or abolished. Displacement of the equilibrium between high and low density waters incurs a thermodynamic cost which limits solubility, depresses ionisation of water, drives protein folding and prevents high density water from boiling at its intrinsic boiling point, which appears to be below 0°C. Active processes in biochemistry take place in sequential partial reactions, most of which release small amounts of free energy as heat. This ensures that the system is never far from equilibrium, so that efficiency is extremely high. Energy transduction is neither possible nor necessary. Chirality was probably established in prebiotic clays, which must have carried stable populations of high density and low density water domains. Bioactive enantiomorphs partition into low density water, in which they polymerise spontaneously. Conclusions/Significance The simplified model of water has great explanatory power. PMID:18183287

  8. Large Scale Data Analysis and Knowledge Extraction in Communication Data

    DTIC Science & Technology

    2017-03-31

    this purpose, we developed a novel method, the "Correlation Density Rank", which finds the probability density distribution of related frequent events on all...which is called "Correlation Density Rank", is developed to derive the community tree from the network. As in the real world, where a network is..."Community Structure in Dynamic Social Networks using the Correlation Density Rank," 2014 ASE BigData/SocialCom/Cybersecurity Conference, Stanford

  9. Continuation of probability density functions using a generalized Lyapunov approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, S., E-mail: s.baars@rug.nl; Viebahn, J.P., E-mail: viebahn@cwi.nl; Mulder, T.E., E-mail: t.e.mulder@uu.nl

    Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. Key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
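    In one dimension the small-noise approximation reduces to a scalar Lyapunov equation: for dx = a x dt + b dW linearized about a stable fixed point (a < 0), the stationary variance v solves 2 a v + b^2 = 0. A sketch with an Euler-Maruyama cross-check (toy coefficients, not the ocean-model problem; the paper's contribution is the low-rank iterative solver for the large matrix-valued case):

```python
import random

def stationary_variance(a, b):
    """Scalar Lyapunov equation 2*a*v + b**2 = 0 for dx = a*x dt + b dW,
    linearized about a stable fixed point (requires a < 0)."""
    assert a < 0.0
    return -b * b / (2.0 * a)

def simulate_variance(a, b, dt=1e-3, steps=500000, seed=1):
    """Euler-Maruyama time average as a Monte Carlo check of the variance."""
    rng = random.Random(seed)
    x = 0.0
    acc = 0.0
    for _ in range(steps):
        x += a * x * dt + b * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        acc += x * x
    return acc / steps
```

The resulting Gaussian with this variance is the approximate probability density near the fixed point; parameter continuation tracks how it deforms as a control parameter moves.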

  10. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
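    The "most informative marginals" criterion can be phrased as minimizing the mutual information between filter outputs, since MI vanishes exactly when the joint density factors into the product of marginals. A histogram-based sketch of that quantity (bin count and data are illustrative; this is the criterion, not the authors' steerable-filter algorithm):

```python
import math

def mutual_information(pairs, bins=8):
    """Plug-in histogram estimate of I(X;Y); zero iff the joint histogram
    factors into the product of the two marginal histograms."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    n = len(pairs)
    x0, xw = min(xs), (max(xs) - min(xs)) / bins or 1.0
    y0, yw = min(ys), (max(ys) - min(ys)) / bins or 1.0
    px = [0] * bins
    py = [0] * bins
    joint = {}
    for x, y in pairs:
        i = min(int((x - x0) / xw), bins - 1)  # clamp the max value into the last bin
        j = min(int((y - y0) / yw), bins - 1)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    return sum(c / n * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in joint.items())
```

Applied to pairs of filter responses, a basis with lower pairwise MI means the product of marginals is a better stand-in for the joint density.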

  11. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length, which is instantaneously inverted at regular intervals, are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
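    The one-dimensional convective-diffusive equation can be sketched with a conservative finite-volume step: zero-flux walls keep total probability fixed, and inverting the cylinder amounts to reversing the density profile. A minimal nondimensional sketch of dP/dt = -d/dz(pe P - dP/dz), where pe is a Peclet-like sedimentation parameter (all values illustrative, not the paper's scheme):

```python
def step_fv(p, pe, dx, dt):
    """One explicit conservative finite-volume step on a uniform grid.
    Face fluxes use upwind convection plus central diffusion; the two
    wall fluxes stay zero, so sum(p)*dx is conserved exactly."""
    n = len(p)
    flux = [0.0] * (n + 1)
    for f in range(1, n):
        upwind = p[f - 1] if pe > 0 else p[f]   # upwind-biased convection
        flux[f] = pe * upwind - (p[f] - p[f - 1]) / dx
    return [p[i] - dt * (flux[i + 1] - flux[i]) / dx for i in range(n)]
```

Periodic inversion of the cylinder is then `p = p[::-1]` between flip intervals; the explicit step needs roughly dt < dx^2/2 for stability.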

  12. Exact joint density-current probability function for the asymmetric exclusion process.

    PubMed

    Depken, Martin; Stinchcombe, Robin

    2004-07-23

    We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society
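    The exact operator-algebra results can be checked against direct simulation. A random-sequential Monte Carlo sketch of the open-boundary TASEP (rates alpha = beta = 0.75 put the system in the maximal-current phase, where the bulk density approaches 1/2 and the current 1/4; all parameters illustrative):

```python
import random

def tasep(L=30, alpha=0.75, beta=0.75, sweeps=10000, seed=2):
    """Open-boundary TASEP with random-sequential updates.
    Each sweep makes L+1 attempts: attempt 0 tries injection at the left,
    attempt L tries ejection at the right, others try a rightward bulk hop.
    Returns (mean density, current across the middle bond), discarding
    the first half of the run as transient."""
    rng = random.Random(seed)
    tau = [0] * L
    dens, hops, measured = 0.0, 0, 0
    for sweep in range(sweeps):
        measuring = sweep >= sweeps // 2
        for _ in range(L + 1):
            i = rng.randrange(L + 1)
            if i == 0:
                if tau[0] == 0 and rng.random() < alpha:
                    tau[0] = 1
            elif i == L:
                if tau[L - 1] == 1 and rng.random() < beta:
                    tau[L - 1] = 0
            elif tau[i - 1] == 1 and tau[i] == 0:
                tau[i - 1], tau[i] = 0, 1
                if measuring and i - 1 == L // 2:
                    hops += 1   # hop across the middle bond
        if measuring:
            dens += sum(tau) / L
            measured += 1
    # the middle bond is selected on average once per sweep,
    # so hops/measured estimates the stationary current
    return dens / measured, hops / measured
```

Histogramming occupation number and hop counts over many runs would give the empirical joint density-current distribution whose exact form the paper derives.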

  13. The lunar interior

    NASA Technical Reports Server (NTRS)

    Anderson, D. L.; Kovach, R. L.

    1972-01-01

    The compressional velocities are estimated for materials in the lunar interior and compared with lunar seismic results. The lower crust has velocities appropriate for basalts or anorthosites. The high velocities associated with the uppermost mantle imply high densities and a change in composition to a lighter assemblage at depths of the order of 120 km. Calcium and aluminum are probably important components of the upper mantle and are deficient in the lower mantle. Much of the moon may have accreted from material similar in composition to eucrites. The important mineral of the upper mantle is garnet; possible accessory minerals are kyanite, spinel, and rutile. If the seismic results stand up, the high velocity layer in the moon is more likely to be a high pressure form of anorthosite than eclogite, pyroxenite, or dunite. The thickness of the layer is of the order of 50 km. Cosmic abundances can be maintained if the lower mantle is ferromagnesium silicate with minimal amounts of calcium and aluminum. Achondrites such as eucrites and howardites have more of the required characteristics of the lunar interior than carbonaceous chondrites. A density inversion in the moon is a strong possibility.

  14. Multivariate Density Estimation and Remote Sensing

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1983-01-01

    Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. The approach is illustrated with a thunderstorm data analysis.
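    As a baseline for such nonparametric estimates, here is a fixed-bandwidth Gaussian kernel density estimator in one dimension (a generic sketch, not the authors' multivariate method; the bandwidth h is a free smoothing parameter):

```python
import math

def gaussian_kde(data, h):
    """Fixed-bandwidth Gaussian kernel density estimate.
    Returns a function pdf(x) that averages one Gaussian bump per datum."""
    n = len(data)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)
    return pdf
```

The multivariate analogue replaces the scalar kernel with a product or full-covariance kernel; the choice of h (or bandwidth matrix) is the central practical difficulty the literature addresses.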

  15. Hybrid neural network for density limit disruption prediction and avoidance on J-TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Zheng, W.; Hu, F. R.; Zhang, M.; Chen, Z. Y.; Zhao, X. Q.; Wang, X. L.; Shi, P.; Zhang, X. L.; Zhang, X. Q.; Zhou, Y. N.; Wei, Y. N.; Pan, Y.; J-TEXT team

    2018-05-01

    Increasing the plasma density is one of the key methods for achieving an efficient fusion reaction, and high-density operation is one of the hot topics in tokamak plasmas. Density limit disruptions remain an important issue for safe operation. An effective density limit disruption prediction and avoidance system is the key to avoiding density limit disruptions in long pulse steady state operation. An artificial neural network has been developed for the prediction of density limit disruptions on the J-TEXT tokamak. The neural network has been improved from a simple multi-layer design to a hybrid two-stage structure. The first stage is a custom network which uses time series diagnostics as inputs to predict the plasma density, and the second stage is a three-layer feedforward neural network to predict the probability of density limit disruptions. It is found that the hybrid neural network structure, combined with radiation profile information as an input, can significantly improve the prediction performance, especially the average warning time (T_warn). In particular, T_warn is eight times longer than that in previous work (Wang et al 2016 Plasma Phys. Control. Fusion 58 055014) (from 5 ms to 40 ms). The success rate for density limit disruptive shots is above 90%, while the false alarm rate for other shots is below 10%. Based on the density limit disruption prediction system and the real-time density feedback control system, an on-line density limit disruption avoidance system has been implemented on the J-TEXT tokamak.

  16. A Balanced Approach to Adaptive Probability Density Estimation.

    PubMed

    Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy

    2017-01-01

    Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
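    The abstract does not spell out BADE's smoothing rule, but the general idea of point-wise adaptive smoothing can be sketched with a sample-point estimator whose per-datum bandwidth is the distance to its k-th nearest neighbour (a generic stand-in for the published algorithm, not a reimplementation of it):

```python
import math

def adaptive_kde(data, k=2, min_h=1e-3):
    """Sample-point adaptive Gaussian KDE: datum i gets its own bandwidth,
    the distance to its k-th nearest neighbour (floored at min_h), so sparse
    regions are smoothed more than dense ones."""
    n = len(data)
    hs = []
    for i, xi in enumerate(data):
        d = sorted(abs(xi - xj) for j, xj in enumerate(data) if j != i)
        hs.append(max(d[min(k, len(d)) - 1], min_h))
    norm = 1.0 / (n * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) / h
                          for xi, h in zip(data, hs))
    return pdf
```

This naive version is O(n^2) in the bandwidth pass; the efficient nearest-neighbour search emphasized in the abstract is exactly what makes the idea scale to large samples.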

  17. Adaptive detection of a noise signal according to the Neyman-Pearson criterion

    NASA Astrophysics Data System (ADS)

    Padiryakov, Y. A.

    1985-03-01

    Optimum detection according to the Neyman-Pearson criterion is considered for a random Gaussian noise signal, stationary during measurement, in a stationary random Gaussian background interference. Detection is based on two samples whose statistics are characterized by estimates of their spectral densities: it is known a priori that sample A, from the signal channel, is either the sum of signal and interference or interference alone, and that sample B, from the reference interference channel, is an interference with the same spectral density as the interference in sample A under both hypotheses. The probability of correct detection is maximized on the average, first in the 2N-dimensional space of signal spectral density and interference spectral density readings, by fixing the probability of false alarm at each point so as to stabilize it at a constant level against variation of the interference spectral density. Deterministic decision rules are established. The algorithm is then reduced to equivalent detection in the N-dimensional space of the ratio of sample A readings to sample B readings.
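    The Neyman-Pearson recipe, fix the false-alarm probability and maximize detection, can be illustrated by calibrating an energy-detector threshold on noise-only Monte Carlo trials (Gaussian noise and a plain energy statistic, purely for illustration; the paper's spectral-density statistics are more elaborate):

```python
import random

def cfar_threshold(pfa, noise_var, n, trials=20000, seed=3):
    """Monte Carlo threshold for an n-sample energy detector such that the
    false-alarm probability under noise alone is approximately pfa."""
    rng = random.Random(seed)
    stats = sorted(sum(rng.gauss(0.0, noise_var ** 0.5) ** 2 for _ in range(n))
                   for _ in range(trials))
    return stats[int((1.0 - pfa) * trials)]   # (1-pfa) quantile of noise energy

def detect(samples, threshold):
    """Decision rule: declare 'signal present' when energy exceeds threshold."""
    return sum(x * x for x in samples) > threshold
```

Pinning the threshold to the noise-only quantile is what keeps the false-alarm rate constant when the interference level is estimated from a reference channel, the same stabilization goal described above.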

  18. Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density

    DOE PAGES

    Smallwood, David O.

    1997-01-01

    The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and kurtosis, using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency domain description of the random process. The general case of matching a target probability density function using a zero memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
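    The ZMNL construction is simple to sketch for a single channel: push each Gaussian sample through the Gaussian CDF and then through the inverse CDF of the target marginal (here an exponential target, chosen purely as an illustration):

```python
import math

def gaussian_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def zmnl_exponential(series, rate=1.0):
    """Zero-memory nonlinearity: Gaussian sample -> uniform (via its CDF)
    -> exponential (via the target's inverse CDF). Memoryless, so it maps
    each sample independently of the rest of the time history."""
    out = []
    for g in series:
        u = min(max(gaussian_cdf(g), 1e-12), 1.0 - 1e-12)  # guard the tails
        out.append(-math.log(1.0 - u) / rate)
    return out
```

Because the nonlinearity distorts the spectrum, the full method alternates between spectral shaping and this marginal transform until both targets are met.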

  19. Modelling detectability of kiore (Rattus exulans) on Aguiguan, Mariana Islands, to inform possible eradication and monitoring efforts

    USGS Publications Warehouse

    Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.

    2011-01-01

    Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall, and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.
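    For intuition about what mark-recapture data can yield, the classical two-occasion case has a closed form: Chapman's bias-corrected Lincoln-Petersen estimator (a textbook illustration, far simpler than the model-averaged multi-occasion models used in the study; counts are hypothetical):

```python
def chapman_estimate(marked_first, caught_second, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator:
    N_hat = (n1 + 1)(n2 + 1) / (m2 + 1) - 1, where n1 animals are marked on
    occasion 1, n2 are caught on occasion 2, and m2 of those carry marks."""
    return (marked_first + 1) * (caught_second + 1) / (recaptured + 1) - 1
```

Multi-occasion models generalize this by letting capture probability vary with covariates (here sex, rainfall, and waning neophobia), which is what makes the detectability analysis above possible.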

  20. Experimental Investigation of Muon-Catalyzed d-t Fusion

    NASA Astrophysics Data System (ADS)

    Jones, S. E.; Anderson, A. N.; Caffrey, A. J.; Walter, J. B.; Watts, K. D.; Bradbury, J. N.; Gram, P. A. M.; Leon, M.; Maltrud, H. R.; Paciotti, M. A.

    1983-11-01

    Measurements of the absolute neutron yield and the time dependence of the appearance of neutrons resulting from muon-catalyzed fusion have been carried out in high-density deuterium-tritium mixtures. The temperature dependence of the resonant dtμ-molecular formation process has been determined in the range 100 to 540 K. Mesomolecular formation is found to be resonant for DT as well as D2 target molecules. The sticking probability and other fundamental parameters have been measured for the first time.
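    The sticking probability matters because it caps the catalysis yield: a muon keeps cycling until it either decays or sticks to an alpha particle, so the mean number of fusions per muon is roughly 1/(omega_s + lambda_0/lambda_c), a standard back-of-envelope relation. A sketch (rates in s^-1; the cycle rate and sticking values below are illustrative, not the paper's measurements):

```python
MUON_DECAY_RATE = 1.0 / 2.197e-6   # lambda_0, from the 2.197-microsecond muon lifetime

def fusions_per_muon(cycle_rate, sticking):
    """Mean catalysed fusions per muon: each cycle ends in sticking with
    probability omega_s (= sticking), while decay competes at rate
    lambda_0 against the catalysis cycle rate lambda_c (= cycle_rate)."""
    return 1.0 / (sticking + MUON_DECAY_RATE / cycle_rate)
```

With a cycle rate of order 10^8 s^-1 and sticking near half a percent, this gives on the order of a hundred fusions per muon, which is why measuring the sticking probability was a central goal of the experiment.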
