Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations fail to provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Zhang; Chen, Wei
2017-11-03
Generalized skew-symmetric probability density functions are proposed to model asymmetric interfacial density distributions for the parameterization of arbitrary density profiles in the 'effective-density model'. The penetration of the densities into adjacent layers can be selectively controlled and parameterized. A continuous density profile is generated and discretized into many independent, very thin slices with constant density values and sharp interfaces. The discretized profile can be used to calculate reflectivities via Parratt's recursive formula, or small-angle scattering via the concentric onion model that is also developed in this work.
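A minimal numerical sketch of the workflow described in this abstract, under stated assumptions: a skew-normal CDF stands in for the generalized skew-symmetric interface shape, the continuous profile is cut into thin constant-density slices, and specular reflectivity follows from Parratt's recursion. The SLD values, slice thickness, and q-grid are invented for illustration; this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import skewnorm

def sliced_profile(z, sld_top, sld_bottom, loc, scale, alpha):
    """Asymmetric (skew-normal) interface discretized on the grid z."""
    frac = skewnorm.cdf(z, alpha, loc=loc, scale=scale)  # 0 -> 1 across interface
    return sld_top + (sld_bottom - sld_top) * frac

def parratt_reflectivity(qz, sld, dz):
    """Parratt recursion over slices of equal thickness dz (SLD in 1/A^2)."""
    refl = np.zeros_like(qz)
    for i, q in enumerate(qz):
        kz = np.sqrt((q / 2.0) ** 2 - 4.0 * np.pi * (sld - sld[0]) + 0j)
        R = 0.0 + 0.0j                              # no reflection below substrate
        for j in range(len(sld) - 2, -1, -1):       # walk interfaces upward
            r_j = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])
            phase = np.exp(2j * kz[j + 1] * dz)
            R = (r_j + R * phase) / (1.0 + r_j * R * phase)
        refl[i] = abs(R) ** 2
    return refl

z = np.arange(-50.0, 50.0, 1.0)                       # 1 A slices
sld = sliced_profile(z, 0.0, 2.0e-5, 0.0, 8.0, 4.0)   # skewed ambient/film interface
qz = np.linspace(0.01, 0.3, 200)
print(parratt_reflectivity(qz, sld, 1.0)[:5])
```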
Eulerian Mapping Closure Approach for Probability Density Function of Concentration in Shear Flows
NASA Technical Reports Server (NTRS)
He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The Eulerian mapping closure approach is developed for uncertainty propagation in computational fluid mechanics. The approach is used to study the Probability Density Function (PDF) for the concentration of species advected by a random shear flow. An analytical argument shows that fluctuation of the concentration field at one point in space is non-Gaussian and exhibits stretched exponential form. An Eulerian mapping approach provides an appropriate approximation to both convection and diffusion terms and leads to a closed mapping equation. The results obtained describe the evolution of the initial Gaussian field, which is in agreement with direct numerical simulations.
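The mapping idea can be illustrated statically: if the scalar is written as a nonlinear map of a Gaussian reference field, its one-point PDF is immediately non-Gaussian. The map below is an arbitrary surrogate chosen only to show tails heavier than Gaussian, not the closure's evolved mapping.

```python
import numpy as np

rng = np.random.default_rng(6)
theta = rng.standard_normal(1_000_000)   # Gaussian reference field samples
phi = np.sinh(theta)                     # surrogate mapping X(theta)

# Tail probability beyond 3 standard deviations vs. the Gaussian value
p_tail = np.mean(np.abs(phi) > 3.0 * phi.std())
print(f"P(|phi| > 3 sigma) = {p_tail:.2e} (Gaussian: 2.7e-03)")
```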
Estimating the Probability of Elevated Nitrate Concentrations in Ground Water in Washington State
Frans, Lonna M.
2008-01-01
Logistic regression was used to relate anthropogenic (manmade) and natural variables to the occurrence of elevated nitrate concentrations in ground water in Washington State. Variables that were analyzed included well depth, ground-water recharge rate, precipitation, population density, fertilizer application amounts, soil characteristics, hydrogeomorphic regions, and land-use types. Two models were developed: one with and one without the hydrogeomorphic regions variable. The variables in both models that best explained the occurrence of elevated nitrate concentrations (defined as concentrations of nitrite plus nitrate as nitrogen greater than 2 milligrams per liter) were the percentage of agricultural land use in a 4-kilometer radius of a well, population density, precipitation, soil drainage class, and well depth. Based on the relations between these variables and measured nitrate concentrations, logistic regression models were developed to estimate the probability of nitrate concentrations in ground water exceeding 2 milligrams per liter. Maps of Washington State were produced that illustrate these estimated probabilities for wells drilled to 145 feet below land surface (median well depth) and the estimated depth to which wells would need to be drilled to have a 90-percent probability of drawing water with a nitrate concentration less than 2 milligrams per liter. Maps showing the estimated probability of elevated nitrate concentrations indicated that the agricultural regions are most at risk followed by urban areas. The estimated depths to which wells would need to be drilled to have a 90-percent probability of obtaining water with nitrate concentrations less than 2 milligrams per liter exceeded 1,000 feet in the agricultural regions; whereas, wells in urban areas generally would need to be drilled to depths in excess of 400 feet.
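As a rough sketch of this type of model (hypothetical predictors and synthetic data, not the published USGS regression), one can fit a logistic regression and read off the exceedance probability for a well:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 100, n),      # % agricultural land within 4 km
    rng.uniform(0, 5000, n),     # population density
    rng.uniform(200, 2500, n),   # precipitation (mm/yr)
    rng.uniform(10, 400, n),     # well depth (ft)
])
# Synthetic outcome: elevated nitrate (> 2 mg/L) more likely for shallow wells
# in agricultural areas; a stand-in for the measured concentrations.
logit = 0.04 * X[:, 0] - 0.01 * X[:, 3] - 1.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=5000).fit(X, y)
p = model.predict_proba([[80.0, 100.0, 400.0, 145.0]])[0, 1]
print(f"P(nitrate > 2 mg/L) for a 145-ft well: {p:.2f}")
```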
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Given the Gaussian assumption for the probability-density functions, it appears hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to, e.g., the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is distributed equally over the pdf where the variables are allowed, as proposed by Shimada et al. (1998). A problem with this method, however, is that the probability that, e.g., the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman-filter formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
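A schematic 1-D version of the proposed update (an illustration, not the paper's implementation): prior mass below zero is collected into a point mass at zero, and a Gaussian observation updates the delta and continuous branches separately via Bayes' theorem.

```python
import numpy as np
from scipy.stats import norm

m, s = 0.1, 0.5    # prior mean/std of, e.g., sea-ice concentration
y, r = 0.3, 0.2    # observation and its error std

w0 = norm.cdf(0.0, m, s)   # prior mass placed in the delta at x = 0

# Gaussian-product identity for the continuous branch (x > 0)
v = 1.0 / (1.0 / s**2 + 1.0 / r**2)
mu = v * (m / s**2 + y / r**2)

c_delta = w0 * norm.pdf(y, 0.0, r)                                   # delta evidence
c_cont = norm.pdf(y, m, np.sqrt(s**2 + r**2)) * norm.sf(0.0, mu, np.sqrt(v))

p_zero = c_delta / (c_delta + c_cont)
print(f"posterior P(x = 0) = {p_zero:.3f}; "
      f"continuous part ~ N({mu:.3f}, {v:.3f}) truncated at 0")
```

Note that the posterior retains a nonzero probability of exactly zero concentration, which is the improvement over plain pdf truncation.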
Density Fluctuation in Aqueous Solutions and Molecular Origin of Salting-Out Effect for CO2
Ho, Tuan Anh; Ilgen, Anastasia
2017-10-26
Using molecular dynamics simulation, we studied the density fluctuations and cavity formation probabilities in aqueous solutions and their effect on the hydration of CO2. With increasing salt concentration, we report an increased probability of observing a larger than the average number of species in the probe volume. Our energetic analyses indicate that the van der Waals and electrostatic interactions between CO2 and aqueous solutions become more favorable with increasing salt concentration, favoring the solubility of CO2 (salting in). However, due to the decreasing number of cavities forming when salt concentration is increased, the solubility of CO2 decreases. The formation of cavities was found to be the primary control on the dissolution of gas, and is responsible for the observed CO2 salting-out effect. Finally, our results provide the fundamental understanding of the density fluctuation in aqueous solutions and the molecular origin of the salting-out effect for real gas.
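The cavity statistics can be sketched as follows (a toy stand-in: Poisson draws replace real MD occupancy counts): form the occupancy distribution P(N) of a probe volume and read off the empty-volume probability P(0), which sets the free-energy cost of opening a cavity for the gas.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_occupancy = 4.0                          # average waters in the probe volume
counts = rng.poisson(mean_occupancy, 100_000) # surrogate for per-frame MD counts

values, freq = np.unique(counts, return_counts=True)
P = freq / counts.size
p_cavity = P[values == 0][0]                  # probability of an empty probe volume

kT = 2.494  # kJ/mol at ~300 K
print(f"P(N=0) = {p_cavity:.2e}; cavity free energy ~ {-kT * np.log(p_cavity):.1f} kJ/mol")
```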
Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.
Guo, Lian; Radisic, Aleksandar; Searson, Peter C
2005-12-22
Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
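A deliberately stripped-down sketch of the attachment rule (lattice, rates, and probabilities are all invented for illustration): ions land on random columns and stick with different probabilities on bare substrate versus existing metal, so island density emerges rather than being assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
width, steps = 200, 20_000
p_on_substrate, p_on_metal = 0.01, 0.5   # metal-on-substrate vs metal-on-metal
height = np.zeros(width, dtype=int)

for _ in range(steps):
    col = rng.integers(width)            # arriving ion hits a random column
    p = p_on_substrate if height[col] == 0 else p_on_metal
    if rng.random() < p:
        height[col] += 1                 # successful deposition event

islands = np.count_nonzero(height)       # occupied columns as island-density proxy
print(f"island density: {islands / width:.2f}, mean height: {height.mean():.2f}")
```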
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Horn, J. E.; Harter, T.
2011-06-01
Onsite wastewater treatment systems such as septic systems are common in rural and semi-rural areas around the world; in the US, about 25-30 % of households are served by a septic system and a private drinking water well. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. Particularly in areas with small lots, and thus a high septic system density, these typically shallow wells are prone to contamination by septic system leachate. Typically, mass balance approaches are used to determine a maximum septic system density that would prevent contamination of the aquifer. In this study, we estimate the probability that a well pumps water containing septic system leachate. A detailed groundwater and transport model is used to calculate the capture zone of a typical drinking water well. A spatial probability analysis is performed to assess the probability that a capture zone overlaps with a septic system drainfield, depending on aquifer properties, lot and drainfield size. We show that a high septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We conclude that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances which experience limited attenuation, and for those that are harmful even at low concentrations.
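The spatial overlap analysis can be caricatured with a Monte Carlo sketch (the rectangular capture zone and all sizes are assumptions for illustration): place a drainfield at a random position on a lot and test whether it intersects the well's upgradient capture zone.

```python
import numpy as np

rng = np.random.default_rng(3)
lot = 60.0                          # lot side length (m)
capture_w, capture_l = 10.0, 40.0   # idealized capture zone width/length (m)
drain = 6.0                         # drainfield side (m)

def overlaps(x, y):
    """Drainfield square at corner (x, y) vs. capture rectangle at the well."""
    return (x < capture_l and x + drain > 0.0 and
            y < capture_w / 2 and y + drain > -capture_w / 2)

n, hits = 100_000, 0
for _ in range(n):
    x = rng.uniform(-lot / 2, lot / 2)   # drainfield position relative to well
    y = rng.uniform(-lot / 2, lot / 2)
    hits += overlaps(x, y)
print(f"intersection probability ~ {hits / n:.3f}")
```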
Compositional cokriging for mapping the probability risk of groundwater contamination by nitrates.
Pardo-Igúzquiza, Eulogio; Chica-Olmo, Mario; Luque-Espinar, Juan A; Rodríguez-Galiano, Víctor
2015-11-01
Contamination by nitrates is an important cause of groundwater pollution and represents a potential risk to human health. Management decisions must be made using probability maps that assess the potential of nitrate concentrations exceeding regulatory thresholds. However, these maps are obtained from only a small number of sparse monitoring locations where the nitrate concentrations have been measured. It is therefore of great interest to have an efficient methodology for obtaining those probability maps. In this paper, we make use of the fact that the discrete probability density function is a compositional variable. The spatial discrete probability density function is estimated by compositional cokriging. There are several advantages in using this approach: (i) problems of classical indicator cokriging, like estimates outside the interval (0,1) and order-relation violations, are avoided; (ii) secondary variables (e.g. aquifer parameters) can be included in the estimation of the probability maps; (iii) uncertainty maps of the probability maps can be obtained; (iv) there are modelling advantages, because the variograms and cross-variograms are those of real variables and do not have the restrictions of indicator variograms and indicator cross-variograms. The methodology was applied to the Vega de Granada aquifer in Southern Spain and the advantages of the compositional cokriging approach were demonstrated. Copyright © 2015 Elsevier B.V. All rights reserved.
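The compositional step can be sketched in a few lines (class bins assumed; the spatial cokriging itself would be done with a geostatistics package): the discrete probability vector is moved to log-ratio space, where ordinary (co)kriging applies, and back-transformed so estimates stay in (0, 1) and sum to one.

```python
import numpy as np

def alr(p):
    """Additive log-ratio transform of a composition p (last part as reference)."""
    return np.log(p[:-1] / p[-1])

def alr_inv(z):
    """Inverse transform back to a valid composition."""
    q = np.append(np.exp(z), 1.0)
    return q / q.sum()

p_well = np.array([0.6, 0.3, 0.1])  # P(class) for, e.g., <25, 25-50, >50 mg/L nitrate
z = alr(p_well)                     # cokrige these coordinates spatially ...
print(alr_inv(z))                   # ... then back-transform the estimate
```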
Domestic wells have high probability of pumping septic tank leachate
NASA Astrophysics Data System (ADS)
Bremer, J. E.; Harter, T.
2012-08-01
Onsite wastewater treatment systems are common in rural and semi-rural areas around the world; in the US, about 25-30% of households are served by a septic (onsite) wastewater treatment system, and many property owners also operate their own domestic well nearby. Site-specific conditions and local groundwater flow are often ignored when installing septic systems and wells. In areas with small lots (thus high spatial septic system densities), shallow domestic wells are prone to contamination by septic system leachate. Mass balance approaches have been used to determine a maximum septic system density that would prevent contamination of groundwater resources. In this study, a source area model based on detailed groundwater flow and transport modeling is applied for a stochastic analysis of domestic well contamination by septic leachate. Specifically, we determine the probability that a source area overlaps with a septic system drainfield as a function of aquifer properties, septic system density and drainfield size. We show that high spatial septic system density poses a high probability of pumping septic system leachate. The hydraulic conductivity of the aquifer has a strong influence on the intersection probability. We find that mass balance calculations applied on a regional scale underestimate the contamination risk of individual drinking water wells by septic systems. This is particularly relevant for contaminants released at high concentrations, for substances that experience limited attenuation, and those that are harmful even at low concentrations (e.g., pathogens).
NASA Astrophysics Data System (ADS)
Zaichik, Leonid I.; Alipchenkov, Vladimir M.
2007-11-01
The purposes of the paper are threefold: (i) to refine the statistical model of preferential particle concentration in isotropic turbulence that was previously proposed by Zaichik and Alipchenkov [Phys. Fluids 15, 1776 (2003)], (ii) to investigate the effect of clustering of low-inertia particles using the refined model, and (iii) to advance a simple model for predicting the collision rate of aerosol particles. The model developed is based on a kinetic equation for the two-point probability density function of the relative velocity distribution of particle pairs. Improvements in predicting the preferential concentration of low-inertia particles are attained by refining the description of the turbulent velocity field of the carrier fluid to include a difference between the time scales of the strain and rotation rate correlations. The refined model results in better agreement with direct numerical simulations for aerosol particles.
The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg0 concentration data series from different latitudes has been investigated. The Hg0 dynamics possess a long-term-memory autocorrelation function. Above a fixed thresh...
[Risk analysis of naphthalene pollution in soils of Tianjin].
Yang, Yu; Shi, Xuan; Xu, Fu-liu; Tao, Shu
2004-03-01
Three approaches were applied and evaluated for probabilistic risk assessment of naphthalene in soils of Tianjin, China, based on the observed naphthalene concentrations of 188 top soil samples from the area and LC50 values of naphthalene for ten typical soil fauna species from the literature. It was found that the overlapping area of the two probability density functions of concentration and LC50 was 6.4%, the joint probability curve bends toward and lies very close to the bottom and left axes, and the calculated probability that exposure concentration exceeds the LC50 of various species was as low as 1.67%, all indicating a very acceptable risk of naphthalene to the soil fauna ecosystem; only some very sensitive species or individual animals are threatened by localized, extremely high concentrations. The three approaches revealed similar results from different viewpoints.
Population-level effects of the mysid, Americamysis bahia, exposed to varying thiobencarb concentrations were estimated using stage-structured matrix models. A deterministic density-independent matrix model estimated the decrease in population growth rate, λ, with increas...
Owman, T
1981-07-01
In an experimental model in the rabbit, the excretion of sodium and meglumine diatrizoate, respectively, was compared. Urographic density, estimated through renal pelvic volume as calculated according to previous experiments (Owman 1978; Owman & Olin 1980) and urinary iodine concentration, is suggested to be more accurate than mere determination of urine iodine concentration and diuresis when evaluating and comparing urographic contrast media experimentally. More reliable dose optima are probably found by calculating density rather than determining urine concentrations. Of the media examined in this investigation, the sodium salt of diatrizoate was not superior to the meglumine salt in dose ranges up to 320 mg I/kg body weight, while at higher doses sodium diatrizoate gave higher urinary iodine concentrations and higher estimated density.
Lu, Ying; Ahmed, Sultan; Harari, Florencia; Vahter, Marie
2015-01-01
Ficoll density gradient centrifugation is widely used to separate cellular components of human blood. We evaluated the suitability of using erythrocytes and blood plasma obtained from Ficoll centrifugation for assessment of elemental concentrations. We determined 22 elements (from Li to U) in erythrocytes and blood plasma separated by direct or Ficoll density gradient centrifugation, using inductively coupled plasma mass spectrometry. Compared with erythrocytes and blood plasma separated by direct centrifugation, those separated by Ficoll had highly elevated iodine and Ba concentrations, due to contamination from the Ficoll-Paque medium, and about twice as high concentrations of Sr and Mo in erythrocytes. On the other hand, the concentrations of Ca in erythrocytes and plasma were markedly reduced by the Ficoll separation, as were, to some extent, Li, Co, Cu, and U. The reduced concentrations were probably due to EDTA, a chelator present in the Ficoll medium. Arsenic concentrations seemed to be lowered by Ficoll, probably in a species-specific manner. The concentrations of Mg, P, S, K, Fe, Zn, Se, Rb, and Cs were not affected in the erythrocytes, but decreased in plasma. Concentrations of Mn, Cd, and Pb were not affected in erythrocytes, but in plasma were affected by EDTA and/or pre-analytical contamination. Ficoll separation changed the concentrations of Li, Ca, Co, Cu, As, Mo, I, Ba, and U in erythrocytes and blood plasma, Sr in erythrocytes, and Mg, P, S, K, Fe, Zn, Se, Rb and Cs in blood plasma, to an extent that will invalidate evaluation of deficiencies or excess intakes. Copyright © 2014 Elsevier GmbH. All rights reserved.
Statistics of cosmic density profiles from perturbation theory
NASA Astrophysics Data System (ADS)
Bernardeau, Francis; Pichon, Christophe; Codis, Sandrine
2014-11-01
The joint probability distribution function (PDF) of the density within multiple concentric spherical cells is considered. It is shown how its cumulant generating function can be obtained at tree order in perturbation theory as the Legendre transform of a function directly built in terms of the initial moments. In the context of the upcoming generation of large-scale structure surveys, it is conjectured that this result correctly models such a function for finite values of the variance. Detailed consequences of this assumption are explored. In particular, the corresponding one-cell density probability distribution at finite variance is computed for realistic power spectra, taking into account its scale variation. It is found to be in agreement with Λ-cold dark matter simulations at the few percent level for a wide range of density values and parameters. Related explicit analytic expansions at the low and high density tails are given. The conditional (at fixed density) and marginal probability of the slope, i.e. the density difference between adjacent cells, and its fluctuations are also computed from the two-cell joint PDF; they also compare very well to simulations. It is emphasized that this could prove useful when studying the statistical properties of voids, as it can serve as a statistical indicator to test gravity models and/or probe key cosmological parameters.
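The Legendre-transform step can be reproduced numerically in a toy setting (Gaussian cumulant generating function with an assumed variance; the tree-order calculation would supply the non-Gaussian terms):

```python
import numpy as np

lam = np.linspace(-5.0, 5.0, 2001)
sigma2 = 0.3
phi = 0.5 * sigma2 * lam**2   # toy Gaussian CGF; PT adds non-Gaussian corrections

rho = np.linspace(-1.0, 1.2, 5)
psi = np.array([np.max(r * lam - phi) for r in rho])  # Legendre transform
print(psi)
print(rho**2 / (2 * sigma2))  # analytic check for the Gaussian case
```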
NASA Astrophysics Data System (ADS)
Grujicic, M.; Yavari, R.; Ramaswami, S.; Snipes, J. S.; Yen, C.-F.; Cheeseman, B. A.
2013-11-01
A comprehensive all-atom molecular-level computational investigation is carried out in order to identify and quantify: (i) the effect of prior longitudinal-compressive or axial-torsional loading on the longitudinal-tensile behavior of p-phenylene terephthalamide (PPTA) fibrils/fibers; and (ii) the role various microstructural/topological defects play in affecting this behavior. Experimental and computational results available in the relevant open literature were utilized to construct various defects within the molecular-level model and to assign the concentration to these defects consistent with the values generally encountered under "prototypical" PPTA-polymer synthesis and fiber fabrication conditions. When quantifying the effect of the prior longitudinal-compressive/axial-torsional loading on the longitudinal-tensile behavior of PPTA fibrils, the stochastic nature of the size/potency of these defects was taken into account. The results obtained revealed that: (a) due to the stochastic nature of the defect type, concentration/number density and size/potency, the PPTA fibril/fiber longitudinal-tensile strength is a statistical quantity possessing a characteristic probability density function; (b) application of the prior axial compression or axial torsion to the PPTA imperfect single-crystalline fibrils degrades their longitudinal-tensile strength and only slightly modifies the associated probability density function; and (c) introduction of the fibril/fiber interfaces into the computational analyses showed that prior axial torsion can induce major changes in the material microstructure, causing significant reductions in the PPTA-fiber longitudinal-tensile strength and appreciable changes in the associated probability density function.
Bisphenol S impairs blood functions and induces cardiovascular risks in rats.
Pal, Sanghamitra; Sarkar, Kaushik; Nath, Partha Pratim; Mondal, Mukti; Khatun, Ashma; Paul, Goutam
2017-01-01
Bisphenol S (BPS) is an industrial chemical that has recently been used to replace the potentially toxic bisphenol A (BPA) in making polycarbonate plastics, epoxy resins and thermal receipt papers. The probable toxic effects of BPS on the functions of the haemopoietic and cardiovascular systems have not been reported to date. We report here that BPS depresses haematological functions and induces cardiovascular risks in rat. Adult male albino rats of the Sprague-Dawley strain were given BPS at dose levels of 30, 60 and 120 mg/kg BW/day, respectively, for 30 days. Red blood cell (RBC) count, white blood cell (WBC) count, Hb concentration, and clotting time were significantly (*P < 0.05) reduced in a dose-dependent manner in all exposed groups of rats compared with the control. It has also been shown that BPS increases total serum glucose and protein concentrations in the exposed groups of rats. We have observed that BPS increases serum total cholesterol, triglyceride, glycerol-free triglyceride, low density lipoprotein (LDL) and very low density lipoprotein (VLDL) concentrations, whereas high density lipoprotein (HDL) concentration was reduced in the exposed groups. BPS significantly increases serum aspartate aminotransferase (AST), alanine aminotransferase (ALT) and alkaline phosphatase (ALP) activities dose-dependently. Moreover, serum calcium, bilirubin and urea concentrations were increased in all exposed groups. In conclusion, BPS probably impairs the functions of blood and promotes cardiovascular risks in rats.
Inference of reaction rate parameters based on summary statistics from experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin
2016-10-15
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong nonlinearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
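A bare-bones rejection version of the consistency idea (toy Arrhenius forward model, invented priors and error bars; the study itself used ABC with MCMC and surrogate models): keep parameter draws whose rate coefficients match the published nominal values within the error bars at every temperature.

```python
import numpy as np

rng = np.random.default_rng(5)
T = np.array([1000.0, 1500.0, 2000.0])   # temperatures (K)
k_nominal = 1e11 * np.exp(-8000.0 / T)   # "published" k(T), illustrative
k_err = 0.3 * k_nominal                  # error bars

accepted = []
for _ in range(20_000):
    A = 10 ** rng.uniform(10.5, 11.5)    # prior on pre-exponential factor
    Ea = rng.uniform(6000.0, 10000.0)    # prior on activation temperature (K)
    k_sim = A * np.exp(-Ea / T)          # Arrhenius forward model
    if np.all(np.abs(k_sim - k_nominal) < k_err):   # consistent with summary stats
        accepted.append((A, Ea))

post = np.array(accepted)
print(f"accepted {len(post)}; corr(A, Ea) = {np.corrcoef(post.T)[0, 1]:.2f}")
```

The strong positive correlation between the accepted A and Ea values illustrates exactly the kind of Arrhenius-parameter correlation the abstract argues must be propagated.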
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations fail to provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that under the hypothesis of statistical stationarity leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, and these are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model, with the spatial moments replacing the statistical moments, can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and, for the first time, shows the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
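Because the model is fully characterized by the first two concentration moments, exceedance probabilities follow directly; a short sketch (moment values and threshold are illustrative, not from the Cape Cod data):

```python
from scipy.stats import beta

m, v = 0.2, 0.01           # mean and variance of C/C0 from a transport model
nu = m * (1 - m) / v - 1   # moment matching for the Beta distribution
a, b = m * nu, (1 - m) * nu

threshold = 0.5            # regulatory threshold as a fraction of C0
print(f"alpha = {a:.2f}, beta = {b:.2f}, "
      f"P(C > {threshold} C0) = {beta.sf(threshold, a, b):.4f}")
```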
Modelling interactions of toxicants and density dependence in wildlife populations
Schipper, Aafke M.; Hendriks, Harrie W.M.; Kauffman, Matthew J.; Hendriks, A. Jan; Huijbregts, Mark A.J.
2013-01-01
1. A major challenge in the conservation of threatened and endangered species is to predict population decline and design appropriate recovery measures. However, anthropogenic impacts on wildlife populations are notoriously difficult to predict due to potentially nonlinear responses and interactions with natural ecological processes like density dependence. 2. Here, we incorporated both density dependence and anthropogenic stressors in a stage-based matrix population model and parameterized it for a density-dependent population of peregrine falcons Falco peregrinus exposed to two anthropogenic toxicants [dichlorodiphenyldichloroethylene (DDE) and polybrominated diphenyl ethers (PBDEs)]. Log-logistic exposure–response relationships were used to translate toxicant concentrations in peregrine falcon eggs to effects on fecundity. Density dependence was modelled as the probability of a nonbreeding bird acquiring a breeding territory as a function of the current number of breeders. 3. The equilibrium size of the population, as represented by the number of breeders, responded nonlinearly to increasing toxicant concentrations, showing a gradual decrease followed by a relatively steep decline. Initially, toxicant-induced reductions in population size were mitigated by an alleviation of the density limitation, that is, an increasing probability of territory acquisition. Once population density was no longer limiting, the toxicant impacts were no longer buffered by an increasing proportion of nonbreeders shifting to the breeding stage, resulting in a strong decrease in the equilibrium number of breeders. 4. Median critical exposure concentrations, that is, median toxicant concentrations in eggs corresponding with an equilibrium population size of zero, were 33 and 46 μg g⁻¹ fresh weight for DDE and PBDEs, respectively. 5. Synthesis and applications. Our modelling results showed that particular life stages of a density-limited population may be relatively insensitive to toxicant impacts until a critical threshold is crossed. In our study population, toxicant-induced changes were observed in the equilibrium number of nonbreeding rather than breeding birds, suggesting that monitoring efforts including both life stages are needed to detect population declines in a timely manner. Further, by combining quantitative exposure–response relationships with a wildlife demographic model, we provided a method to quantify critical toxicant thresholds for wildlife population persistence.
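The model structure can be sketched compactly (all demographic rates, the EC50 and the territory cap below are invented, not the fitted falcon parameters): fecundity is scaled by a log-logistic exposure-response curve, and recruitment into the breeder stage saturates with breeder density.

```python
import numpy as np

def loglogistic(conc, ec50, slope):
    """Fraction of fecundity remaining at egg concentration conc."""
    return 1.0 / (1.0 + (conc / ec50) ** slope)

def equilibrium_breeders(conc, years=200, K=100.0):
    nonbreeders, breeders = 20.0, 20.0
    f0, s_nb, s_b = 1.2, 0.6, 0.85   # baseline fecundity and stage survivals
    for _ in range(years):
        f = f0 * loglogistic(conc, ec50=30.0, slope=2.0)
        p_territory = max(0.0, 1.0 - breeders / K)      # density dependence
        survivors = s_nb * nonbreeders
        new_breeders = s_b * breeders + survivors * p_territory
        nonbreeders = f * breeders + survivors * (1.0 - p_territory)
        breeders = new_breeders
    return breeders

for c in [0.0, 10.0, 30.0, 45.0]:
    print(f"egg concentration {c:5.1f} -> equilibrium breeders ~ "
          f"{equilibrium_breeders(c):.1f}")
```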
NASA Astrophysics Data System (ADS)
Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe
2016-08-01
In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be rapidly reached, allowing sub-percent precision to be achieved for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact 'one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.
Measurements of scalar released from point sources in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Talluru, K. M.; Hernandez-Silva, C.; Philip, J.; Chauhan, K. A.
2017-04-01
Measurements of velocity and concentration fluctuations for a horizontal plume released at several wall-normal locations in a turbulent boundary layer (TBL) are discussed in this paper. The primary objective of this study is to establish a systematic procedure to acquire accurate single-point concentration measurements for a substantially long time so as to obtain converged statistics of the long tails of the probability density functions of concentration. Details of the calibration procedure implemented for long measurements are presented, which include sensor drift compensation to eliminate the increase in average background concentration with time. While most previous studies reported measurements where the source height is limited to s_z/δ ≤ 0.2, where s_z is the wall-normal source height and δ is the boundary layer thickness, here results of concentration fluctuations when the plume is released in the outer layer are emphasised. Results of mean and root-mean-square (r.m.s.) profiles of concentration for elevated sources agree with the well-accepted reflected Gaussian model (Fackrell and Robins 1982 J. Fluid Mech. 117). However, there is clear deviation from the reflected Gaussian model for a source in the intermittent region of the TBL, particularly at locations higher than the source itself. Further, we find that the plume half-widths are different for the mean and r.m.s. concentration profiles. Long sampling times enabled us to calculate converged probability density functions at high concentrations, and these are found to exhibit an exponential distribution.
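For reference, the reflected Gaussian model the profiles are compared against treats the wall as a mirror placing an image source at -s_z; a direct transcription (plume parameters assumed for illustration):

```python
import numpy as np

def reflected_gaussian(z, s_z, sigma_z, c0=1.0):
    """Mean concentration for a source at height s_z above a reflecting wall."""
    return c0 * (np.exp(-(z - s_z) ** 2 / (2 * sigma_z ** 2)) +
                 np.exp(-(z + s_z) ** 2 / (2 * sigma_z ** 2)))

z = np.linspace(0.0, 1.0, 6)   # wall-normal positions in units of delta
print(reflected_gaussian(z, s_z=0.3, sigma_z=0.15))
```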
NASA Astrophysics Data System (ADS)
Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo
2017-02-01
Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
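A schematic version of the two-step scheme (all coefficients invented): a Weibull density over day-of-season caps the airborne pollen potential, and a weather regression modulates the daily value.

```python
import numpy as np

def weibull_pdf(t, shape, scale):
    t = float(t)
    return (shape / scale) * (t / scale) ** (shape - 1.0) * np.exp(-(t / scale) ** shape)

def daily_pollen(day, tmean_c, rain_mm, season_total=10_000.0):
    potential = season_total * weibull_pdf(day, shape=2.2, scale=30.0)  # max potential
    weather = max(0.0, 0.05 * (tmean_c - 5.0) - 0.02 * rain_mm + 0.5)   # regression term
    return potential * weather

print(f"estimated grains/m^3 on day 25: {daily_pollen(25, tmean_c=14.0, rain_mm=0.0):.0f}")
```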
Characteristic Structure of Star-forming Clouds
NASA Astrophysics Data System (ADS)
Myers, Philip C.
2015-06-01
This paper presents a new method to diagnose the star-forming potential of a molecular cloud region from the probability density function of its column density (N-pdf). This method provides expressions for the column density and mass profiles of a symmetric filament having the same N-pdf as a filamentary region. The central concentration of this characteristic filament can distinguish regions and can quantify their fertility for star formation. Profiles are calculated for N-pdfs which are pure lognormal, pure power law, or a combination. In relation to models of singular polytropic cylinders, characteristic filaments can be unbound, bound, or collapsing depending on their central concentration. Such filamentary models of the dynamical state of N-pdf gas are more relevant to star-forming regions than are spherical collapse models. The star formation fertility of a bound or collapsing filament is quantified by its mean mass accretion rate when in radial free fall. For a given mass per length, the fertility increases with the filament mean column density and with its initial concentration. In selected regions the fertility of their characteristic filaments increases with the level of star formation.
Stochastic Forecasting of Algae Blooms in Lakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.
We consider the development of harmful algae blooms (HABs) in a lake with uncertain nutrient inflow. Two general frameworks, the Fokker-Planck equation and the PDF method, are developed to quantify the resulting concentration uncertainty of various algae groups, by deriving a deterministic equation for their joint probability density function (PDF). A computational example is examined to study the evolution of cyanobacteria (the blue-green algae) and the impacts of initial concentration and inflow-outflow ratio.
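The task can be emulated by brute force for comparison (toy logistic dynamics and an assumed inflow distribution): propagate an ensemble of random inflow rates and histogram the resulting concentrations, which is the PDF that the Fokker-Planck/PDF frameworks characterize with deterministic equations instead.

```python
import numpy as np

rng = np.random.default_rng(8)
n, dt, T = 5000, 0.1, 30.0
a = rng.normal(1.0, 0.3, n).clip(min=0.0)   # uncertain nutrient inflow rates
c = np.full(n, 0.1)                         # initial algae concentration

for _ in range(int(T / dt)):
    c += dt * (a * c * (1.0 - c))           # logistic growth driven by inflow

hist, edges = np.histogram(c, bins=30, density=True)
print(f"PDF mode near c = {edges[np.argmax(hist)]:.2f}")
```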
NASA Astrophysics Data System (ADS)
Simonin, Olivier; Zaichik, Leonid I.; Alipchenkov, Vladimir M.; Février, Pierre
2006-12-01
The objective of the paper is to elucidate a connection between two approaches that have been separately proposed for modelling the statistical spatial properties of inertial particles in turbulent fluid flows. One of the approaches proposed recently by Février, Simonin, and Squires [J. Fluid Mech. 533, 1 (2005)] is based on the partitioning of particle turbulent velocity field into spatially correlated (mesoscopic Eulerian) and random-uncorrelated (quasi-Brownian) components. The other approach stems from a kinetic equation for the two-point probability density function of the velocity distributions of two particles [Zaichik and Alipchenkov, Phys. Fluids 15, 1776 (2003)]. Comparisons between these approaches are performed for isotropic homogeneous turbulence and demonstrate encouraging agreement.
2011-03-01
capability of FTS to estimate plume effluent concentrations by comparing intrusive measurements of aircraft engine exhaust with those from an FTS. A... turbojet engine. Temporal averaging was used to reduce SCAs in the spectra, and spatial maps of temperature and concentration were generated. The time...density function (PDF) is defined as the derivative of the CDF, and describes the probability of obtaining a given value of X. For a normally
NASA Astrophysics Data System (ADS)
Bakosi, J.; Franzese, P.; Boybeyi, Z.
2007-11-01
Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.
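The simpler of the two micromixing models, interaction by exchange with the mean (IEM), is easy to state as particle code; a toy single-cell sketch (time scale and ensemble invented):

```python
import numpy as np

rng = np.random.default_rng(7)
phi = rng.normal(1.0, 0.5, 10_000)   # particle concentrations in one cell
t_m, dt = 0.2, 0.01                  # mixing time scale and time step

for _ in range(100):
    phi += -0.5 * (phi - phi.mean()) / t_m * dt   # IEM: relax toward the mean

print(f"mean {phi.mean():.3f} (preserved), std {phi.std():.3f} (decayed)")
```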
Basis adaptation in homogeneous chaos spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Ghanem, Roger
2014-02-01
We present a new method for the characterization of subspaces associated with low-dimensional quantities of interest (QoI). The probability density function of these QoI is found to be concentrated around one-dimensional subspaces, for which we develop projection operators. Our approach builds on the properties of Gaussian Hilbert spaces and associated tensor product spaces.
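The rotation idea can be illustrated with plain linear algebra (PCE coefficients invented): if the QoI is dominated by one linear combination of the Gaussian inputs, an isometry whose first row is that combination concentrates the QoI on a single rotated coordinate.

```python
import numpy as np

g = np.array([0.9, 0.3, 0.1, 0.05])        # first-order PCE coefficients of the QoI
q, _ = np.linalg.qr(np.column_stack([g, np.eye(len(g))[:, 1:]]))
A = q.T                                    # isometry; row 0 is g / ||g|| (up to sign)

rng = np.random.default_rng(4)
xi = rng.standard_normal((len(g), 10_000)) # Gaussian germ samples
eta = A @ xi                               # rotated variables
qoi = g @ xi
print(np.corrcoef(qoi, eta[0])[0, 1])      # ~ +/-1: the QoI lives on eta_1
```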
Defect structure in electrodeposited nanocrystalline Ni layers with different Mo concentrations
NASA Astrophysics Data System (ADS)
Kapoor, Garima; Péter, László; Fekete, Éva; Gubicza, Jenő
2018-05-01
The effect of molybdenum (Mo) alloying on the lattice defect structure in electrodeposited nanocrystalline nickel (Ni) films was studied. The electrodeposited layers were prepared on copper substrate at room temperature, with a constant current density and pH value. The chemical composition of these layers was determined by EDS. In addition, X-ray diffraction line profile analysis was carried out to study the microstructural parameters such as the crystallite size, the dislocation density and the stacking fault probability. It was found that the higher Mo content yielded more than one order of magnitude larger dislocation density while the crystallite size was only slightly smaller. In addition, the twin boundary formation activity during deposition increased with increasing Mo concentration. The results obtained on electrodeposited layers were compared with previous research carried out on bulk nanocrystalline Ni-Mo materials with similar compositions but processed by severe plastic deformation.
Sato, Tatsuhiko; Manabe, Kentaro; Hamada, Nobuyuki
2014-01-01
The risk of internal exposure to 137Cs, 134Cs, and 131I is of great public concern after the accident at the Fukushima-Daiichi nuclear power plant. The relative biological effectiveness (RBE, defined herein as effectiveness of internal exposure relative to the external exposure to γ-rays) is occasionally believed to be much greater than unity due to insufficient discussions on the difference of their microdosimetric profiles. We therefore performed a Monte Carlo particle transport simulation in ideally aligned cell systems to calculate the probability densities of absorbed doses in subcellular and intranuclear scales for internal exposures to electrons emitted from 137Cs, 134Cs, and 131I, as well as the external exposure to 662 keV photons. The RBE due to the inhomogeneous radioactive isotope (RI) distribution in subcellular structures and the high ionization density around the particle trajectories was then derived from the calculated microdosimetric probability density. The RBE for the bystander effect was also estimated from the probability density, considering its non-linear dose response. The RBE due to the high ionization density and that for the bystander effect were very close to 1, because the microdosimetric probability densities were nearly identical between the internal exposures and the external exposure from the 662 keV photons. On the other hand, the RBE due to the RI inhomogeneity largely depended on the intranuclear RI concentration and cell size, but their maximum possible RBE was only 1.04 even under conservative assumptions. Thus, it can be concluded from the microdosimetric viewpoint that the risk from internal exposures to 137Cs, 134Cs, and 131I should be nearly equivalent to that of external exposure to γ-rays at the same absorbed dose level, as suggested in the current recommendations of the International Commission on Radiological Protection. PMID:24919099
Cylinders out of a top hat: counts-in-cells for projected densities
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Pichon, Christophe; Codis, Sandrine; L'Huillier, Benjamin; Kim, Juhan; Bernardeau, Francis; Park, Changbom; Prunet, Simon
2018-06-01
Large deviation statistics is implemented to predict the statistics of cosmic densities in cylinders applicable to photometric surveys. It yields few per cent accurate analytical predictions for the one-point probability distribution function (PDF) of densities in concentric or compensated cylinders; and also captures the density dependence of their angular clustering (cylinder bias). All predictions are found to be in excellent agreement with the cosmological simulation Horizon Run 4 in the quasi-linear regime where standard perturbation theory normally breaks down. These results are combined with a simple local bias model that relates dark matter and tracer densities in cylinders and validated on simulated halo catalogues. This formalism can be used to probe cosmology with existing and upcoming photometric surveys like DES, Euclid or WFIRST containing billions of galaxies.
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose Phonotactic probability or neighborhood density have predominantly been defined using gross distinctions (i.e., low vs. high). The current studies examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method The full range of probability or density was examined by sampling five nonwords from each of four quartiles. Three- and 5-year-old children received training on nonword-nonobject pairs. Learning was measured in a picture-naming task immediately following training and 1 week after training. Results were analyzed using multilevel modeling. Results A linear spline model best captured nonlinearities in phonotactic probability. Specifically, word learning improved as probability increased in the lowest quartile, worsened as probability increased in the mid-low quartile, and then remained stable and poor in the two highest quartiles. An ordinary linear model sufficiently described neighborhood density; here, word learning improved as density increased across all quartiles. Conclusion Given these different patterns, phonotactic probability and neighborhood density appear to influence different word learning processes. Specifically, phonotactic probability may affect recognition that a sound sequence is an acceptable word in the language and is a novel word for the child, whereas neighborhood density may influence creation of a new representation in long-term memory. PMID:23882005
Early Fluid and Protein Shifts in Men During Water Immersion
NASA Technical Reports Server (NTRS)
Hinghofer-Szalkay, H.; Harrison, M. H.; Greenleaf, J. E.
1987-01-01
High precision blood and plasma densitometry was used to measure transvascular fluid shifts during water immersion to the neck. Six men (28-49 years) undertook 30 min of standing immersion in water at 35.0 +/- 0.2 C; immersion was preceded by 30 min of control standing in air at 28 +/- 1 C. Blood was sampled from an antecubital catheter for determination of blood density (BD), plasma density (PD), haematocrit (Ht), total plasma protein concentration (PPC), and plasma albumin concentration (PAC). Compared to control, significant decreases (p less than 0.01) in all these measures were observed after 20 min of immersion. At 30 min, plasma volume had increased by 11.0 +/- 2.8%; the average density of the fluid shifted from the extravascular space into the vascular compartment was 1006.3 g/l; albumin moved with the fluid, at a concentration about one-third of the plasma protein concentration during early immersion. These calculations are based on the assumption that the F-cell ratio remained unchanged. No changes in erythrocyte water content during immersion were found. Thus, immersion-induced haemodilution is probably accompanied by an augmentation of protein (mainly albumin) moving with the intravascular fluid shift.
ERIC Educational Resources Information Center
Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.
2010-01-01
Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…
Modification of Lightweight Aggregates' Microstructure by Used Motor Oil Addition.
Franus, Małgorzata; Jozefaciuk, Grzegorz; Bandura, Lidia; Lamorski, Krzysztof; Hajnos, Mieczysław; Franus, Wojciech
2016-10-18
An admixture of lightweight aggregate substrates (beidellitic clay containing 10 wt % of natural clinoptilolite or Na-P1 zeolite) with used motor oil (1 wt %-8 wt %) caused marked changes in the aggregates' microstructure, measured by a combination of mercury porosimetry (MIP), microtomography (MT), and scanning electron microscopy. Maximum porosity was produced at low (1%-2%) oil concentrations and it dropped at higher concentrations, opposite to the aggregates' bulk density. Average pore radii, measured by MIP, decreased with an increasing oil concentration, whereas larger (MT) pore sizes tended to increase. Fractal dimension, derived from MIP data, changed similarly to the MIP pore radius, while that derived from MT remained unaltered. Solid phase density, measured by helium pycnometry, initially dropped slightly and then increased with the amount of oil added, which was most probably connected to changes in the formation of extremely small closed pores that were not available for He atoms.
Maret, T.R.; Cain, D.J.; MacCoy, D.E.; Short, T.M.
2003-01-01
Benthic macroinvertebrate assemblages, environmental variables, and associated mine density were evaluated during the summer of 2000 at 18 reference and test sites in the Coeur d'Alene and St. Regis River basins, northwestern USA, as part of the US Geological Survey's National Water-Quality Assessment Program. Concentrations of Cd, Pb, and Zn in water and (or) streambed sediment at test sites in basins where production mine density was ≥0.2 mines/km2 (in a 500-m stream buffer) were significantly higher than concentrations at reference sites. Zn and Pb were identified as the primary contaminants in water and streambed sediment, respectively. These metal concentrations often exceeded acute Ambient Water Quality Criteria for aquatic life and the National Oceanic and Atmospheric Administration Probable Effect Level for streambed sediment. Regression analysis identified significant correlations between production mine density in each basin and Zn concentrations in water and Pb in streambed sediment (r2 = 0.69 and 0.65, p < 0.01). Metal concentrations in caddisfly tissue, used to verify site-specific exposures of benthos, also were highest at sites downstream from intensive mining. Benthic invertebrate taxa richness and densities were lower at sites downstream than upstream of areas of intensive hard-rock mining and associated metal enrichment. Benthic invertebrate metrics that were most effective in discriminating changes in assemblage structure between reference and mining sites were total number of taxa; number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa; and densities of total individuals, EPT individuals, and metal-sensitive Ephemeroptera individuals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greb, Arthur; Niemi, Kari; O'Connell, Deborah
2013-12-09
Plasma parameters and dynamics in capacitively coupled oxygen plasmas are investigated for different surface conditions. Metastable species concentration, electronegativity, spatial distribution of particle densities as well as the ionization dynamics are significantly influenced by the surface loss probability of metastable singlet delta oxygen (SDO). Simulated surface conditions are compared to experiments in the plasma-surface interface region using phase resolved optical emission spectroscopy. It is demonstrated how in-situ measurements of excitation features can be used to determine SDO surface loss probabilities for different surface materials.
Hurford, Amy; Hebblewhite, Mark; Lewis, Mark A
2006-11-01
A reduced probability of finding mates at low densities is a frequently hypothesized mechanism for a component Allee effect. At low densities, dispersers are less likely to find mates and establish new breeding units. However, many mathematical models for an Allee effect do not make a distinction between breeding group establishment and subsequent population growth. Our objective is to derive a spatially explicit mathematical model in which dispersers have a reduced probability of finding mates at low densities, and to parameterize the model for wolf recolonization in the Greater Yellowstone Ecosystem (GYE). In this model, only the probability of establishing new breeding units is influenced by the reduced probability of finding mates at low densities. We analytically and numerically solve the model to determine the effect of a decreased probability of finding mates at low densities on population spread rate and density. Our results suggest that a reduced probability of finding mates at low densities may slow the recolonization rate.
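For context, a commonly used mate-finding Allee term expresses the probability that a disperser at local density n finds a mate as a saturating function; this is a standard form from the Allee-effect literature, not necessarily the parameterization used in this paper:

```latex
P_{\mathrm{mate}}(n) \;=\; \frac{n}{n + \theta},
```

where θ is the density at which mate finding succeeds half the time. Coupling this probability only to the establishment of new breeding units, rather than to overall population growth, is precisely the distinction the abstract draws.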
Mechanistic modelling of Middle Eocene atmospheric carbon dioxide using fossil plant material
NASA Astrophysics Data System (ADS)
Grein, Michaela; Roth-Nebelsick, Anita; Wilde, Volker; Konrad, Wilfried; Utescher, Torsten
2010-05-01
Various proxies (such as pedogenic carbonates, boron isotopes or phytoplankton) and geochemical models were applied in order to reconstruct palaeoatmospheric carbon dioxide, partially providing conflicting results. Another promising proxy is the frequency of stomata (pores on the leaf surface used for gaseous exchange). In this project, fossil plant material from the Messel Pit (Hesse, Germany) is used to reconstruct atmospheric carbon dioxide concentration in the Middle Eocene by analyzing stomatal density. We applied the novel mechanistic-theoretical approach of Konrad et al. (2008) which provides a quantitative derivation of the stomatal density response (number of stomata per leaf area) to varying atmospheric carbon dioxide concentration. The model couples 1) C3-photosynthesis, 2) the process of diffusion and 3) an optimisation principle providing maximum photosynthesis (via carbon dioxide uptake) and minimum water loss (via stomatal transpiration). These three sub-models also include data of the palaeoenvironment (temperature, water availability, wind velocity, atmospheric humidity, precipitation) and anatomy of leaf and stoma (depth, length and width of stomatal porus, thickness of assimilation tissue, leaf length). In order to calculate curves of stomatal density as a function of atmospheric carbon dioxide concentration, various biochemical parameters have to be borrowed from extant representatives. The necessary palaeoclimate data are reconstructed from the whole Messel flora using Leaf Margin Analysis (LMA) and the Coexistence Approach (CA). In order to obtain a significant result, we selected three species from which a large number of well-preserved leaves is available (at least 20 leaves per species). Palaeoclimate calculations for the Middle Eocene Messel Pit indicate a warm and humid climate with mean annual temperature of approximately 22°C, up to 2540 mm mean annual precipitation and the absence of extended periods of drought. Mean relative air humidity was probably rather high, up to 77%. The combined results of the three selected plant taxa indicate values for atmospheric carbon dioxide concentration between 700 and 1100 ppm (probably about 900 ppm). Reference: Konrad, W., Roth-Nebelsick, A., Grein, M. (2008). Modelling of stomatal density response to atmospheric CO2. Journal of Theoretical Biology 253(4): 638-658.
Force Density Function Relationships in 2-D Granular Media
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Metzger, Philip T.; Kilts, Kelly N.
2004-01-01
An integral transform relationship is developed to convert between two important probability density functions (distributions) used in the study of contact forces in granular physics. Developing this transform has now made it possible to compare and relate various theoretical approaches with one another and with the experimental data despite the fact that one may predict the Cartesian probability density and another the force magnitude probability density. Also, the transforms identify which functional forms are relevant to describe the probability density observed in nature, and so the modified Bessel function of the second kind has been identified as the relevant form for the Cartesian probability density corresponding to exponential forms in the force magnitude distribution. Furthermore, it is shown that this transform pair supplies a sufficient mathematical framework to describe the evolution of the force magnitude distribution under shearing. Apart from the choice of several coefficients, whose evolution of values must be explained in the physics, this framework successfully reproduces the features of the distribution that are taken to be an indicator of jamming and unjamming in a granular packing. Key words. Granular Physics, Probability Density Functions, Fourier Transforms
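The abstract pairs the modified Bessel function of the second kind with an exponential force-magnitude distribution; the following is a worked sketch of that correspondence under the assumption of an isotropic 2-D packing (notation ours, not the paper's). If the force direction is uniform, the x-component f_x = F cos θ of a force of magnitude F has conditional density 1/(π√(F² − f_x²)) for |f_x| < F, so

```latex
p(f_x) \;=\; \frac{1}{\pi}\int_{|f_x|}^{\infty}\frac{P(F)}{\sqrt{F^{2}-f_x^{2}}}\,dF,
\qquad
P(F)=\lambda e^{-\lambda F}
\;\Rightarrow\;
p(f_x)=\frac{\lambda}{\pi}\,K_{0}\!\left(\lambda\lvert f_x\rvert\right),
```

using the integral representation K₀(z) = ∫₁^∞ e^{−zt}(t² − 1)^{−1/2} dt.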
Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws
NASA Astrophysics Data System (ADS)
Barré, J.; Bernardin, C.; Chetrite, R.
2018-02-01
We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e., an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a non-trivial structure. We also derive a lower bound for the large deviation function, valid for a more general weak solution, and leave the general large deviation function upper bound as a conjecture.
Probabilistic simulation of stress concentration in composite laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, L.
1993-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The probabilistic composite mechanics is used to describe all the uncertainties inherent in composite material properties, while the probabilistic finite element analysis is used to describe the uncertainties associated with methods to experimentally evaluate stress concentration factors, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the stress concentration factors in composite laminates made from three different composite systems. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the stress concentration factors are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
Körbahti, Bahadır K; Taşyürek, Selin
2015-03-01
Electrochemical oxidation and process optimization of ampicillin antibiotic at boron-doped diamond electrodes (BDD) were investigated in a batch electrochemical reactor. The influence of operating parameters, such as ampicillin concentration, electrolyte concentration, current density, and reaction temperature, on ampicillin removal, COD removal, and energy consumption was analyzed in order to optimize the electrochemical oxidation process under specified cost-driven constraints using response surface methodology. Quadratic models for the responses satisfied the assumptions of the analysis of variance well according to normal probability, studentized residuals, and outlier t residual plots. Residual plots followed a normal distribution, and outlier t values indicated that the approximations of the fitted models to the quadratic response surfaces were very good. Optimum operating conditions were determined at 618 mg/L ampicillin concentration, 3.6 g/L electrolyte concentration, 13.4 mA/cm² current density, and 36 °C reaction temperature. Under response surface optimized conditions, ampicillin removal, COD removal, and energy consumption were obtained as 97.1 %, 92.5 %, and 71.7 kWh/kg CODr, respectively.
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
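The paper's specific tests are not reproduced in this record; the sketch below only illustrates the underlying idea in Python, namely that a statistic sensitive to draws landing in low-density regions can be calibrated by Monte Carlo simulation from the specified density. The function name and the choice of statistic are ours.

```python
import numpy as np
from scipy import stats

def low_density_pvalue(sample, pdf, sampler, n_mc=2000, seed=0):
    """Monte Carlo calibration of a statistic sensitive to draws that land
    where the hypothesized density is small: here, the minimum log-density
    over the sample. A small p-value indicates the sample visits improbably
    low-density regions under the specified PDF."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    t_obs = np.log(pdf(np.asarray(sample))).min()
    # Null distribution of the statistic, estimated by resampling from pdf.
    t_null = np.array([np.log(pdf(sampler(n, rng))).min() for _ in range(n_mc)])
    return float(np.mean(t_null <= t_obs))

# Usage: check 100 standard-normal draws against a standard-normal hypothesis.
draws = np.random.default_rng(1).normal(size=100)
p = low_density_pvalue(draws, stats.norm.pdf, lambda n, rng: rng.normal(size=n))
```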
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
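Purely as an illustration of this model class (the record is truncated above), a logistic model maps covariates, including the regeneration density of interest, to a probability; the covariate choices and coefficients below are hypothetical placeholders, not fitted values from the paper.

```python
import numpy as np

def p_regeneration(density_trees_per_ha, site_index, beta=(-1.5, -0.002, 0.35)):
    """Hypothetical logistic model: probability of obtaining regeneration at
    the specified density (trees/ha), given a site covariate."""
    b0, b1, b2 = beta
    z = b0 + b1 * density_trees_per_ha + b2 * site_index
    return 1.0 / (1.0 + np.exp(-z))

# Probability of reaching 1000 trees/ha on a site with index 20 (hypothetical):
p = p_regeneration(1000.0, 20.0)
```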
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
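For reference, the truncated Gram-Charlier A series for a standardized density illustrates the positivity problem the paper addresses:

```latex
f(x) \;\approx\; \varphi(x)\left[\,1 + \frac{\gamma_1}{6}\,\mathrm{He}_3(x) + \frac{\gamma_2}{24}\,\mathrm{He}_4(x)\right],
\qquad
\mathrm{He}_3(x) = x^3 - 3x,
\quad
\mathrm{He}_4(x) = x^4 - 6x^2 + 3,
```

where φ is the standard normal density, γ₁ the skewness, and γ₂ the excess kurtosis. For sufficiently large |x| the bracket can turn negative, so the truncated series is not a valid density; this is the defect the proposed expansion is designed to avoid.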
NASA Astrophysics Data System (ADS)
Deshler, Terry
2016-04-01
Balloon-borne optical particle counters were used to make in situ size-resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis, bimodal lognormal size distributions are fit to 250-meter averages of the particle concentration data. The characteristics of these fits, along with temperature, water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible; otherwise assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius, and distribution width for mid and late winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented and should be useful for setting bounds on retrievals of PSC properties from remote measurements and for constraining model representations of PSCs.
A Simple Probabilistic Combat Model
2016-06-13
The Lanchester combat model is a simple way to assess the effects of quantity and quality...case model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons...since the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
Back in the saddle: large-deviation statistics of the cosmic log-density field
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.
2016-08-01
We present a first-principles approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration, and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.
In vivo NMR imaging of sodium-23 in the human head.
Hilal, S K; Maudsley, A A; Ra, J B; Simon, H E; Roschmann, P; Wittekoek, S; Cho, Z H; Mun, S K
1985-01-01
We report the first clinical nuclear magnetic resonance (NMR) images of cerebral sodium distribution in normal volunteers and in patients with a variety of pathological lesions. We have used a 1.5 T NMR magnet system. When compared with proton distribution, sodium shows a greater variation in its concentration from tissue to tissue and from normal to pathological conditions. Image contrast calculated on the basis of sodium concentration is 7 to 18 times greater than that of proton spin density. Normal images emphasize the extracellular compartments. In the clinical studies, areas of recent or old cerebral infarction and tumors show a pronounced increase of sodium content (300-400%). Actual measurements of image density values indicate that there is probably a further accentuation of the contrast by the increased "NMR visibility" of sodium in infarcted tissue. Sodium imaging may prove to be a more sensitive means for early detection of some brain disorders than other imaging methods.
NASA Astrophysics Data System (ADS)
Sun, Qiang; Selloni, Annabella; Myers, T. H.; Doolittle, W. Alan
2006-11-01
Density functional theory calculations of oxygen adsorption and incorporation at the polar GaN(0001) and GaN(0001¯) surfaces have been carried out to explain the experimentally observed reduced oxygen concentration in GaN samples grown by molecular beam epitaxy in the presence of high-energy (~10 keV) electron beam irradiation [Myers et al., J. Vac. Sci. Technol. B 18, 2295 (2000)]. Using a model in which the effect of the irradiation is to excite electrons from the valence to the conduction band, we find that both the energy cost of incorporating oxygen impurities in deeper layers and the oxygen adatom diffusion barriers are significantly reduced in the presence of the excitation. The latter effect leads to a higher probability for two O adatoms to recombine and desorb, and thus to a reduced oxygen concentration in the irradiated samples, consistent with experimental observations.
Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard
2018-01-01
Broad-scale population estimates of declining species are desired for conservation efforts. However, for many secretive species, including large carnivores, such estimates are often difficult to obtain. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species' entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world's jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities (< 1 person/km2) coinciding with high primary productivity in the core area of the jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability but not density affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.
Estimating loblolly pine size-density trajectories across a range of planting densities
Curtis L. VanderSchaaf; Harold E. Burkhart
2013-01-01
Size-density trajectories on the logarithmic (ln) scale are generally thought to consist of two major stages. The first is often referred to as the density-independent mortality stage where the probability of mortality is independent of stand density; in the second, often referred to as the density-dependent mortality or self-thinning stage, the probability of...
Technical Report 1205: A Simple Probabilistic Combat Model
2016-07-08
The Lanchester combat model is a simple way to assess the effects of quantity and quality...model. For the random case, assume R red weapons are allocated to B blue weapons randomly. We are interested in the distribution of weapons assigned...the initial condition is very close to the break-even line. What is more interesting is that the probability density tends to concentrate at either a
NASA Astrophysics Data System (ADS)
Dietterich, H. R.; Stelten, M. E.; Downs, D. T.; Champion, D. E.
2017-12-01
Harrat Rahat is a predominantly mafic, 20,000 km2 volcanic field in western Saudi Arabia with an elongate volcanic axis extending 310 km north-south. Prior mapping suggests that the youngest eruptions were concentrated in northernmost Harrat Rahat, where our new geologic mapping and geochronology reveal >300 eruptive vents with ages ranging from 1.2 Ma to a historic eruption in 1256 CE. Eruption compositions and styles vary spatially and temporally within the volcanic field, where extensive alkali basaltic lavas dominate, but more evolved compositions erupted episodically as clusters of trachytic domes and small-volume pyroclastic flows. Analysis of vent locations, compositions, and eruption styles shows the evolution of the volcanic field and allows assessment of the spatio-temporal probabilities of vent opening and eruption styles. We link individual vents and fissures to eruptions and their deposits using field relations, petrography, geochemistry, paleomagnetism, and ⁴⁰Ar/³⁹Ar and ³⁶Cl geochronology. Eruption volumes and deposit extents are derived from geologic mapping and topographic analysis. Spatial density analysis with kernel density estimation captures vent densities of up to 0.2 %/km2 along the north-south running volcanic axis, decaying quickly away to the east but reaching a second, lower high along a secondary axis to the west. Temporal trends show slight younging of mafic eruption ages to the north in the past 300 ka, as well as clustered eruptions of trachytes over the past 150 ka. Vent locations, timing, and composition are integrated through spatial probability weighted by eruption age for each compositional range to produce spatio-temporal models of vent opening probability. These show that the next mafic eruption is most probable within the north end of the main (eastern) volcanic axis, whereas more evolved compositions are most likely to erupt within the trachytic centers further to the south. These vent opening probabilities, combined with corresponding eruption properties, can be used as the basis for lava flow and tephra fall hazard maps.
ERIC Educational Resources Information Center
Storkel, Holly L.; Bontempo, Daniel E.; Aschenbrenner, Andrew J.; Maekawa, Junko; Lee, Su-Yeon
2013-01-01
Purpose: Phonotactic probability or neighborhood density has predominately been defined through the use of gross distinctions (i.e., low vs. high). In the current studies, the authors examined the influence of finer changes in probability (Experiment 1) and density (Experiment 2) on word learning. Method: The authors examined the full range of…
Crock, J.G.; Severson, R.C.; Gough, L.P.
1992-01-01
Recent investigations on the Kenai Peninsula had two major objectives: (1) to establish elemental baseline concentration ranges for native vegetation and soils; and (2) to determine the sampling density required for preparing stable regional geochemical maps for various elements in native plants and soils. These objectives were accomplished using an unbalanced, nested analysis-of-variance (ANOVA) barbell sampling design. Hylocomium splendens (Hedw.) BSG (feather moss, whole plant), Picea glauca (Moench) Voss (white spruce, twigs and needles), and soil horizons (O2 and C) were collected and analyzed for major and trace total element concentrations. Using geometric means and geometric deviations, expected baseline ranges for elements were calculated. Results of the ANOVA show that intensive soil or plant sampling is needed to reliably map the geochemistry of the area, due to large local variability. For example, producing reliable element maps of feather moss using a 50 km cell (at 95% probability) would require sampling densities of from 4 samples per cell for Al, Co, Fe, La, Li, and V, to more than 15 samples per cell for Cu, Pb, Se, and Zn.
Robust location and spread measures for nonparametric probability density function estimation.
López-Rubio, Ezequiel
2009-10-01
Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
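The abstract names the L1-median (the geometric median) as the robust location estimator; a minimal sketch of its standard computation via Weiszfeld iterations follows. The paper's full PDF estimator also requires a robust spread measure, which is not sketched here.

```python
import numpy as np

def l1_median(X, n_iter=200, tol=1e-9):
    """Geometric (L1) median of the rows of X via Weiszfeld's algorithm: the
    point minimizing the sum of Euclidean distances to the samples. Outliers
    pull on it far less than on the sample mean."""
    y = X.mean(axis=0)                       # initialize at the (non-robust) mean
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - y, axis=1), tol)  # guard zero distances
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y
```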
Wavelet investigation of preferential concentration in particle-laden turbulence
NASA Astrophysics Data System (ADS)
Bassenne, Maxime; Urzay, Javier; Schneider, Kai; Moin, Parviz
2017-11-01
Direct numerical simulations of particle-laden homogeneous-isotropic turbulence are employed in conjunction with wavelet multi-resolution analyses to study preferential concentration in both physical and spectral spaces. Spatially-localized energy spectra for velocity, vorticity and particle-number density are computed, along with their spatial fluctuations that enable the quantification of scale-dependent probability density functions, intermittency and inter-phase conditional statistics. The main result is that particles are found in regions of lower turbulence spectral energy than the corresponding mean. This suggests that modeling the subgrid-scale turbulence intermittency is required for capturing the small-scale statistics of preferential concentration in large-eddy simulations. Additionally, a method is defined that decomposes a particle number-density field into the sum of a coherent and an incoherent components. The coherent component representing the clusters can be sparsely described by at most 1.6% of the total number of wavelet coefficients. An application of the method, motivated by radiative-heat-transfer simulations, is illustrated in the form of a grid-adaptation algorithm that results in non-uniform meshes refined around particle clusters. It leads to a reduction of the number of control volumes by one to two orders of magnitude. PSAAP-II Center at Stanford (Grant DE-NA0002373).
ERIC Educational Resources Information Center
Storkel, Holly L.; Hoover, Jill R.
2011-01-01
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (age 2 ; 11-6 ; 0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV…
Moran, Michael J.; Zogorski, John S.; Squillace, Paul J.
2004-01-01
The occurrence and implications of methyl tert-butyl ether (MTBE) and gasoline hydrocarbons were examined in three surveys of water quality conducted by the U.S. Geological Survey: one national-scale survey of ground water, one national-scale survey of source water from ground water, and one regional-scale survey of drinking water from ground water. The overall detection frequency of MTBE in all three surveys was similar to the detection frequencies of some other volatile organic compounds (VOCs) that have much longer production and use histories in the United States. The detection frequency of MTBE was higher in drinking water and lower in source water and ground water. However, when the data for ground water and source water were limited to the same geographic extent as the drinking-water data, the detection frequencies of MTBE were comparable to the detection frequency of MTBE in drinking water. In all three surveys, the detection frequency of any gasoline hydrocarbon was less than the detection frequency of MTBE. No concentration of MTBE in source water exceeded the lower limit of the U.S. Environmental Protection Agency's Drinking-Water Advisory of 20 µg/L (micrograms per liter). One concentration of MTBE in ground water exceeded 20 µg/L, and 0.9 percent of drinking-water samples exceeded 20 µg/L. The overall detection frequency of MTBE relative to other widely used VOCs indicates that MTBE is an important concern with respect to ground-water management. The probability of detecting MTBE was strongly associated with population density, use of MTBE in gasoline, and recharge, and weakly associated with density of leaking underground storage tanks, soil permeability, and aquifer consolidation. Only concentrations of MTBE above 0.5 µg/L were associated with dissolved oxygen. Ground water underlying areas with high population density, areas where MTBE is used as a gasoline oxygenate, and areas with high recharge have a greater probability of MTBE contamination. Ground water from public-supply wells and shallow ground water underlying urban land-use areas have a greater probability of MTBE contamination compared to ground water from domestic wells and ground water underlying rural land-use areas.
Ozone reactions with indoor materials during building disinfection
NASA Astrophysics Data System (ADS)
Poppendieck, D.; Hubbard, H.; Ward, M.; Weschler, C.; Corsi, R. L.
There is scant information related to heterogeneous indoor chemistry at the ozone concentrations necessary for effective disinfection of buildings, i.e., hundreds to thousands of ppm. In the present study, 24 materials were exposed for 16 h to ozone concentrations of 1000-1200 ppm in the inlet streams of test chambers. Initial ozone deposition velocities were similar to those reported in the published literature for much lower ozone concentrations, but decayed rapidly as reaction sites on material surfaces were consumed. For every material, deposition velocities converged to a relatively constant, and typically low, value after approximately 11 h. The four materials with the highest sustained deposition velocities were ceiling tile, office partition, medium-density fiberboard, and gypsum wallboard backing. Analysis of ozone reaction probabilities indicated that throughout each experiment, and particularly after several hours of disinfection, surface reaction resistance dominated the overall resistance to ozone deposition for nearly all materials. Total building disinfection by-products (all carbonyls) were quantified per unit area of each material for the experimental period. Paper, office partition, and medium-density fiberboard each released greater than 38 mg m⁻² of by-products.
A wave function for stock market returns
NASA Astrophysics Data System (ADS)
Ataullah, Ali; Davidson, Ian; Tippett, Mark
2009-02-01
The instantaneous return on the Financial Times-Stock Exchange (FTSE) All Share Index is viewed as a frictionless particle moving in a one-dimensional square well, but where there is a non-trivial probability of the particle tunneling into the well's retaining walls. Our analysis demonstrates how the complementarity principle from quantum mechanics applies to stock market prices and how the wave function it provides leads to a probability density that exhibits strong compatibility with returns earned on the FTSE All Share Index. In particular, our analysis shows that the probability density for stock market returns is highly leptokurtic with slight (though not significant) negative skewness. Moreover, the moments of the probability density determined under the complementarity principle employed here are all convergent, in contrast to many of the probability density functions on which the received theory of finance is based.
A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.
2016-10-01
Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, which are used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and, given the rather elaborate nature of PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
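The record does not give the improved mixing model's form. For orientation, the most common baseline mixing model in concentration-PDF methods, IEM (interaction by exchange with the mean), relaxes each concentration sample toward the local mean:

```latex
\frac{d\phi}{dt} \;=\; -\tfrac{1}{2}\,C_\phi\,\frac{\varepsilon}{k}\,\bigl(\phi - \langle\phi\rangle\bigr),
```

with C_φ a model constant (commonly taken near 2) and k/ε a turbulence time scale; proposed improvements typically replace the constant relaxation rate with a time- or scale-dependent one.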
Stochastic modeling of soil salinity
NASA Astrophysics Data System (ADS)
Suweis, S.; Porporato, A. M.; Daly, E.; van der Zee, S.; Maritan, A.; Rinaldo, A.
2010-12-01
A minimalist stochastic model of primary soil salinity is proposed, in which the rate of soil salinization is determined by the balance between dry and wet salt deposition and the intermittent leaching caused by rainfall events. The equations for the probability density functions of salt mass and concentration are found by reducing the coupled soil moisture and salt mass balance equations to a single stochastic differential equation (generalized Langevin equation) driven by multiplicative Poisson noise. Generalized Langevin equations with multiplicative white Poisson noise pose the usual Ito (I) versus Stratonovich (S) prescription dilemma. Different interpretations lead to different results, so choosing between the I and S prescriptions is crucial for correctly describing the dynamics of the model system. We show how this choice can be determined by physical information about the timescales involved in the process. We also show that when the multiplicative noise is at most linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation of the jump probability distribution. We then apply these results to the generalized Langevin equation that drives the salt mass dynamics. The stationary analytical solutions for the probability density functions of salt mass and concentration provide insight into the interplay of the main soil, plant, and climate parameters responsible for long-term soil salinization. In particular, they show the existence of two distinct regimes: one where the mean salt mass remains nearly constant (or decreases) with increasing rainfall frequency, and another where the mean salt content increases markedly with increasing rainfall frequency. As a result, relatively small reductions in rainfall in drier climates may entail dramatic shifts in long-term soil salinization trends, with significant consequences, e.g., for climate change impacts on rain-fed agriculture.
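The paper's specific salt-balance equations are not reproduced in this record; generically, a Langevin equation driven by multiplicative white Poisson noise has the form

```latex
\frac{dx}{dt} \;=\; a(x) \;+\; b(x)\,\xi(t),
\qquad
\xi(t) \;=\; \sum_{i} h_i\,\delta(t - t_i),
```

with jump times t_i arriving as a Poisson process and amplitudes h_i drawn from a jump distribution. The Ito and Stratonovich prescriptions differ in how b(x) is evaluated across a jump, which is why the two interpretations generally yield different stationary densities.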
Storkel, Holly L.; Lee, Jaehoon; Cox, Casey
2016-01-01
Purpose Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Method Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. Results The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. Conclusions As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise. PMID:27788276
Han, Min Kyung; Storkel, Holly L; Lee, Jaehoon; Cox, Casey
2016-11-01
Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.
Jenkins, M B; Endale, D M; Fisher, D S; Gay, P A
2009-02-01
To better understand the transport and enumeration of dilute densities of Escherichia coli O157:H7 in agricultural watersheds, we developed a culture-based, five-tube, multiple-dilution most probable number (MPN) method. The MPN method combined a filtration technique for large volumes of surface water with standard selective media, biochemical and immunological tests, and a TaqMan confirmation step. This method determined E. coli O157:H7 concentrations as low as 0.1 MPN per litre, with a 95% confidence interval of 0.01-0.7 MPN per litre. Escherichia coli O157:H7 densities ranged from not detectable to 9 MPN per litre for pond inflow, from not detectable to 0.9 MPN per litre for pond outflow, and from not detectable to 8.3 MPN per litre within the pond. The MPN methodology was extended to mass flux determinations. Fluxes of E. coli O157:H7 ranged from <27 to >10⁴ MPN per hour. This culture-based method can detect small numbers of viable/culturable E. coli O157:H7 in surface waters of watersheds containing animal agriculture and wildlife. This MPN method will improve our understanding of the transport and fate of E. coli O157:H7 in agricultural watersheds and can be the basis of collections of environmental E. coli O157:H7.
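As a sketch of how multiple-dilution tube outcomes map to a density, the standard maximum-likelihood MPN estimate can be computed as below; this is the generic textbook calculation, not necessarily the authors' exact protocol or software.

```python
import numpy as np
from scipy.optimize import brentq

def mpn(volumes, n_tubes, n_positive):
    """Maximum-likelihood MPN (organisms per unit volume) from tube outcomes.
    volumes[i]: sample volume per tube at dilution i; n_tubes[i]: number of
    tubes; n_positive[i]: number of positive tubes."""
    v = np.asarray(volumes, dtype=float)
    n = np.asarray(n_tubes, dtype=float)
    p = np.asarray(n_positive, dtype=float)

    def score(lam):  # derivative of the log-likelihood with respect to lambda
        return np.sum(p * v / (1.0 - np.exp(-lam * v)) - n * v)

    return brentq(score, 1e-12, 1e12)  # root of the score = ML estimate

# Example: three 10-fold dilutions, five tubes each (hypothetical counts).
est = mpn(volumes=[10.0, 1.0, 0.1], n_tubes=[5, 5, 5], n_positive=[5, 3, 0])
```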
Lagerlöf, Jakob H; Kindblom, Jon; Cortez, Eliane; Pietras, Kristian; Bernhardt, Peter
2013-02-01
Hypoxia is one of the most important factors influencing clinical outcome after radiotherapy. Improved knowledge of the factors affecting the levels and distribution of oxygen within a tumor is needed. The authors constructed a theoretical 3D model based on histological images to analyze the influence of vessel density and hemoglobin (Hb) concentration on the response to irradiation. The pancreases of Rip-Tag2 mice, a model of malignant insulinoma, were excised, cryosectioned, immunostained, and photographed. Vessels were identified by image thresholding and a 3D vessel matrix assembled. The matrix was reduced to functional vessel segments and enlarged by replication. The steady-state oxygen tension field of the tumor was calculated by iteratively employing Green's function method for diffusion and the Michaelis-Menten model for consumption. The impact of vessel density on the radiation response was studied by removing a number of randomly selected vessels. The impact of Hb concentration was studied by independently changing vessel oxygen partial pressure (pO₂). For each oxygen distribution, the oxygen enhancement ratio (OER) was calculated and the mean absorbed dose at which the tumor control probability (TCP) was 0.99 (D₉₉) was determined using the linear-quadratic cell survival model (LQ model). Decreased pO₂ shifted the oxygen distribution to lower values, whereas decreased vessel density caused the distribution to widen and shift to lower values. Combined scenarios caused lower-shifted distributions, emphasising log-normal characteristics. Vessel reduction combined with increased blood pO₂ caused the distribution to widen due to a lack of vessels. The most pronounced radiation effect of increased pO₂ occurred in tumor tissue with 50% of the maximum vessel density used in the simulations. A 51% decrease in D₉₉, from 123 to 60 Gy, was found between the lowest and highest pO₂ concentrations. Our results indicate that an intermediate vascular density region exists where enhanced blood oxygen concentration may be beneficial for radiation response. The results also suggest that it is possible to distinguish between diffusion-limited and anemic hypoxia from the characteristics of the pO₂ distribution.
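The dose-response machinery named here combines standard ingredients; under the usual Poisson TCP assumption with LQ survival and oxygen enhancement (our notation, and the paper's implementation may differ), one has

```latex
S(D) \;=\; \exp\!\left(-\alpha\,\frac{D}{\mathrm{OER}} \;-\; \beta\left(\frac{D}{\mathrm{OER}}\right)^{2}\right),
\qquad
\mathrm{TCP} \;=\; \exp\bigl(-N_0\,S(D)\bigr),
```

where N₀ is the initial number of clonogenic cells; D₉₉ is then the dose at which TCP = 0.99.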
Liu, Yuqiang; Chen, Cui; Liu, Yunlong; Li, Wei; Wang, Zhihong; Sun, Qifeng; Zhou, Hang; Chen, Xiangjun; Yu, Yongchun; Wang, Yun; Abumaria, Nashat
2018-06-19
The TRPM7 chanzyme contributes to several biological and pathological processes in different tissues. However, its role in the CNS under physiological conditions remains unclear. Here, we show that TRPM7 knockdown in hippocampal neurons reduces structural synapse density. The synapse density is rescued by the α-kinase domain in the C terminus but not by the ion channel region of TRPM7, nor by increasing extracellular concentrations of Mg²⁺ or Zn²⁺. Early postnatal conditional knockout of TRPM7 in mice impairs learning and memory and reduces synapse density and plasticity. TRPM7 knockdown in the hippocampus of adult rats also impairs learning and memory and reduces synapse density and synaptic plasticity. In knockout mice, restoring expression of the α-kinase domain in the brain rescues synapse density/plasticity and memory, probably by interacting with and phosphorylating cofilin. These results suggest that brain TRPM7 is important for normal synaptic and cognitive function under physiological, non-pathological conditions.
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
Elastic moduli of cast Ti-Au, Ti-Ag, and Ti-Cu alloys.
Kikuchi, Masafumi; Takahashi, Masatoshi; Okuno, Osamu
2006-07-01
This study investigated the effect of alloying titanium with gold, silver, or copper on the elastic properties of the alloys. A series of binary titanium alloys was made with four concentrations of gold, silver, or copper (5, 10, 20, and 30 mass%) in an argon-arc melting furnace. The Young's moduli and Poisson's ratios of the alloy castings were determined with an ultrasonic pulse method. The density of each alloy was measured beforehand by Archimedes' principle. Results were analyzed using one-way ANOVA and Scheffé's test. The densities of the Ti-Au, Ti-Ag, and Ti-Cu alloys increased monotonically as the concentration of alloying elements increased. As the concentration of gold or silver increased to 20%, the Young's modulus significantly decreased, followed by a subsequent increase in value. As the concentration of copper increased, the Young's modulus monotonically increased. The Young's moduli of all the Ti-Cu alloys were significantly higher than that of titanium. The density of all the experimental alloys was virtually independent of the alloy phases, while the Young's moduli and Poisson's ratios of the alloys were dependent on them. The addition of gold or silver slightly reduced the Young's modulus of the titanium when the alloy phase was single alpha. The increase in the Young's modulus of the Ti-Cu alloys is probably due to precipitation of the intermetallic compound Ti2Cu. Copper turned out to be a moderate stiffener, raising the Young's modulus of titanium by up to 20% at a copper concentration of 30 mass%.
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists underfitting and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases, even on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
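The paper's scoring function is specific to its method; the sketch below shows only the generic principle behind quantile-residual diagnostics, namely that a correct CDF estimate maps the sample to approximately uniform order statistics (helper name ours, not the paper's).

```python
import numpy as np

def quantile_residuals(sample, cdf_hat):
    """Probability-integral-transform residuals: if cdf_hat is the true CDF,
    the transformed sample should behave like sorted uniforms on (0, 1)."""
    u = np.sort(cdf_hat(np.asarray(sample)))
    expected = np.arange(1, len(u) + 1) / (len(u) + 1.0)
    return u - expected   # near-zero residuals indicate a good density estimate
```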
High throughput nonparametric probability density estimation
Farmer, Jenny
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists underfitting and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases, even on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyż, W.; Zalewski, K.
2005-10-01
It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.
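In the standard setting (our summary, not the paper's exact estimator), for events distributed over cells with occupation probabilities p_i, the order-k coincidence probability is

```latex
C_k \;=\; \sum_i p_i^{\,k},
```

estimated by the fraction of k-tuples of observed events that fall in the same cell. Because C_k is a k-th moment of the occupation probabilities, such coincidences give access to moments of the underlying phase-space density, which is the connection exploited at freeze-out.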
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
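The multivariate transformation theorem invoked here is the change-of-variables rule for densities: for an invertible, differentiable map y = g(x),

```latex
p_Y(y) \;=\; p_X\!\bigl(g^{-1}(y)\bigr)\,
\left|\det \frac{\partial g^{-1}(y)}{\partial y}\right|,
```

so a density that is uniform in one parameterization is generally non-uniform after reparameterization, which is the crux of treating the admissible region as a PDF.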
Online Reinforcement Learning Using a Probability Density Estimation.
Agostini, Alejandro; Celaya, Enric
2017-01-01
Function approximation in online, incremental reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a Gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
…density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum…
Taillefumier, Thibaud; Magnasco, Marcelo O
2013-04-16
Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss-Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.
Evaluating detection probabilities for American marten in the Black Hills, South Dakota
Smith, Joshua B.; Jenks, Jonathan A.; Klaver, Robert W.
2007-01-01
Assessing the effectiveness of monitoring techniques designed to determine presence of forest carnivores, such as American marten (Martes americana), is crucial for validation of survey results. Although comparisons between techniques have been made, little attention has been paid to the issue of detection probabilities (p). Thus, the underlying assumption has been that detection probabilities equal 1.0. We used presence-absence data obtained from a track-plate survey in conjunction with results from a saturation-trapping study to derive detection probabilities when marten occurred at high (>2 marten/10.2 km2) and low (≤1 marten/10.2 km2) densities within eight 10.2-km2 quadrats. Estimated probability of detecting marten in high-density quadrats was p = 0.952 (SE = 0.047), whereas the detection probability for low-density quadrats was considerably lower (p = 0.333, SE = 0.136). Our results indicated that failure to account for imperfect detection could lead to an underestimation of marten presence in 15-52% of low-density quadrats in the Black Hills, South Dakota, USA. We recommend that repeated site-survey data be analyzed to assess detection probabilities when documenting carnivore survey results.
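A quick worked illustration of why imperfect detection matters, using the low-density estimate quoted above (an illustrative calculation, not one taken from the paper):

```latex
\Pr(\text{no detection in } k \text{ independent visits}) = (1-p)^{k},
\qquad (1-0.333)^{1} \approx 0.67, \quad (1-0.333)^{3} \approx 0.30 .
```

Even three repeat surveys of a low-density quadrat would miss resident marten roughly 30% of the time, which is why the authors recommend analyzing repeated site-survey data rather than assuming p = 1.0.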
Spahr, Norman E.; Mueller, David K.; Wolock, David M.; Hitt, Kerie J.; Gronberg, JoAnn M.
2010-01-01
Data collected for the U.S. Geological Survey National Water-Quality Assessment program from 1992-2001 were used to investigate the relations between nutrient concentrations and nutrient sources, hydrology, and basin characteristics. Regression models were developed to estimate annual flow-weighted concentrations of total nitrogen and total phosphorus using explanatory variables derived from currently available national ancillary data. Different total-nitrogen regression models were used for agricultural (25 percent or more of basin area classified as agricultural land use) and nonagricultural basins. Atmospheric, fertilizer, and manure inputs of nitrogen, percent sand in soil, subsurface drainage, overland flow, mean annual precipitation, and percent undeveloped area were significant variables in the agricultural basin total nitrogen model. Significant explanatory variables in the nonagricultural total nitrogen model were total nonpoint-source nitrogen input (sum of nitrogen from manure, fertilizer, and atmospheric deposition), population density, mean annual runoff, and percent base flow. The concentrations of nutrients derived from regression (CONDOR) models were applied to drainage basins associated with the U.S. Environmental Protection Agency (USEPA) River Reach File (RF1) to predict flow-weighted mean annual total nitrogen concentrations for the conterminous United States. The majority of stream miles in the Nation have predicted concentrations less than 5 milligrams per liter. Concentrations greater than 5 milligrams per liter were predicted for a broad area extending from Ohio to eastern Nebraska, areas spatially associated with greater application of fertilizer and manure. Probabilities that mean annual total-nitrogen concentrations exceed the USEPA regional nutrient criteria were determined by incorporating model prediction uncertainty. In all nutrient regions where criteria have been established, there is at least a 50 percent probability of exceeding the criteria in more than half of the stream miles. Dividing calibration sites into agricultural and nonagricultural groups did not improve the explanatory capability for total phosphorus models. The group of explanatory variables that yielded the lowest model error for mean annual total phosphorus concentrations includes phosphorus input from manure, population density, amounts of range land and forest land, percent sand in soil, and percent base flow. However, the large unexplained variability and associated model error precluded the use of the total phosphorus model for nationwide extrapolations.
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each model is the best in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
Nonstationary envelope process and first excursion probability.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
The definition of the stationary random envelope proposed by Cramer and Leadbetter is extended to the envelope of a nonstationary random process possessing an evolutionary power spectral density. The density function, the joint density function, the moment function, and the level-crossing rate of the nonstationary envelope process are derived. Based on the envelope statistics, approximate solutions to the first excursion probability of nonstationary random processes are obtained. In particular, applications of the first excursion probability to earthquake engineering problems are demonstrated in detail.
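The step from envelope crossing rates to the first excursion probability is classical; a hedged sketch in standard notation, assuming level crossings form a nonhomogeneous Poisson process (ν is the mean rate at which the envelope upcrosses the level a):

```latex
P_f(t) \;=\; 1 - \exp\!\left(-\int_{0}^{t} \nu(a,\tau)\,\mathrm{d}\tau\right),
```

which reduces to the familiar stationary result P_f(t) = 1 - exp(-νt) when ν is constant in time.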
NASA Astrophysics Data System (ADS)
Dioguardi, Fabio; Mele, Daniela
2018-03-01
This paper presents PYFLOW_2.0, a hazard tool for the calculation of the impact parameters of dilute pyroclastic density currents (DPDCs). DPDCs represent the dilute turbulent type of gravity flows that occur during explosive volcanic eruptions; their hazard is the result of their mobility and the capability to laterally impact buildings and infrastructures and to transport variable amounts of volcanic ash along the path. Starting from data coming from the analysis of deposits formed by DPDCs, PYFLOW_2.0 calculates the flow properties (e.g., velocity, bulk density, thickness) and impact parameters (dynamic pressure, deposition time) at the location of the sampled outcrop. Given the inherent uncertainties related to sampling, laboratory analyses, and modeling assumptions, the program provides ranges of variations and probability density functions of the impact parameters rather than single specific values; from these functions, the user can interrogate the program to obtain the value of the computed impact parameter at any specified exceedance probability. In this paper, the sedimentological models implemented in PYFLOW_2.0 are presented, program functionalities are briefly introduced, and two application examples are discussed so as to show the capabilities of the software in quantifying the impact of the analyzed DPDCs in terms of dynamic pressure, volcanic ash concentration, and residence time in the atmosphere. The software and user's manual are made available as a downloadable electronic supplement.
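The exceedance-probability query described above amounts to inverting an empirical distribution function. A minimal sketch in Python (not PYFLOW_2.0 itself; the lognormal sample below is an assumed stand-in for the program's Monte Carlo output):

```python
import numpy as np

def value_at_exceedance(samples, p_exceed):
    """Value x such that P(X > x) = p_exceed under the empirical distribution."""
    return np.quantile(samples, 1.0 - p_exceed)

# Hypothetical usage: 10,000 dynamic-pressure realizations (Pa) from the flow model.
rng = np.random.default_rng(0)
dyn_pressure = rng.lognormal(mean=7.0, sigma=0.5, size=10_000)  # assumed distribution
print(value_at_exceedance(dyn_pressure, 0.05))  # pressure exceeded in 5% of cases
```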
Combinatoric analysis of heterogeneous stochastic self-assembly.
D'Orsogna, Maria R; Zhao, Bingyu; Berenji, Bijan; Chou, Tom
2013-09-28
We analyze a fully stochastic model of heterogeneous nucleation and self-assembly in a closed system with a fixed total particle number M, and a fixed number of seeds Ns. Each seed can bind a maximum of N particles. A discrete master equation for the probability distribution of the cluster sizes is derived and the corresponding cluster concentrations are found using kinetic Monte-Carlo simulations in terms of the density of seeds, the total mass, and the maximum cluster size. In the limit of slow detachment, we also find new analytic expressions and recursion relations for the cluster densities at intermediate times and at equilibrium. Our analytic and numerical findings are compared with those obtained from classical mass-action equations and the discrepancies between the two approaches analyzed.
NASA Astrophysics Data System (ADS)
Oladi, Mahshid; Shokri, Mohammad Reza; Rajabi-Maham, Hassan
2017-06-01
The `Coral Health Chart' has become a popular tool for monitoring coral bleaching worldwide. The scleractinian coral Acropora downingi (Wallace 1999) is highly vulnerable to temperature anomalies in the Persian Gulf. Our study tested the reliability of Coral Health Chart scores for the assessment of bleaching-related changes in the mitotic index (MI) and density of zooxanthellae cells in A. downingi in Qeshm Island, the Persian Gulf. The results revealed that, at least under severe conditions, it can be used as an effective proxy for detecting changes in the density of normal, transparent, or degraded zooxanthellae and MI. However, its ability to discern changes in pigment concentration and total zooxanthellae density should be viewed with some caution in the Gulf region, probably because the high levels of environmental variability in this region result in inherent variations in the characteristics of zooxanthellae among "healthy" looking corals.
NASA Astrophysics Data System (ADS)
Leray, S.; De Dreuzy, J.; Aquilina, L.; Labasque, T.; Bour, O.
2011-12-01
While groundwater age data have been classically used to determine aquifer hydraulic properties such as recharge and/or porosity, we show here that they contain more valuable information on aquifer structure in complex hard rock contexts. Our numerical modeling study is based on the developed crystalline aquifer of Ploemeur (Brittany, France) characterized by two transmissive structures: the interface between an intruding granite and overlying micaschists dipping moderately to the North and a steeply dipping fault striking North 20. We explore the definition and evolution of the volume supplying the pumping well in the Ploemeur medium under steady-state conditions. We first show that, with the help of general observations on the site, hydraulic data, such as piezometric levels or transmissivity derived from pumping tests, can be used to refine recharge spatial distribution and rate and bulk aquifer transmissivity. We then model the effect of aquifer porosity and thickness on environmental tracer concentrations. Porosity sets the range of the mean residence time, shifting the probability density function of residence times along the time axis, whereas aquifer thickness affects the shape of the residence time distribution. It also modifies the mean concentration of CFCs, taken as the convolution product of the atmospheric tracer concentration with the probability density function of residence times. Because porosity may be estimated by petrologic and gravimetric investigations, the thickness of the aquifer can be advantageously constrained by groundwater ages and then compared to other results from inversion of geophysical data. More generally, we advocate using groundwater age data at the aquifer discharge locations to constrain complex aquifer structures when recharge and porosity can be fixed by other means.
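The convolution step mentioned above is straightforward to sketch. A minimal Python example, assuming an exponential residence-time distribution and a toy CFC-like input curve (both are assumptions; the study's actual residence-time PDF depends on aquifer structure):

```python
import numpy as np

def well_concentration(c_atm, g, dt):
    """Discrete convolution C_well(t) = sum_tau c_atm(t - tau) g(tau) dt."""
    return np.convolve(c_atm, g)[: len(c_atm)] * dt

dt = 1.0                                     # years
t = np.arange(0.0, 60.0, dt)
c_atm = np.interp(t, [0, 20, 40, 59], [0, 50, 550, 230])  # toy atmospheric input
tau_mean = 15.0                              # assumed mean residence time (set by porosity)
g = np.exp(-t / tau_mean) / tau_mean         # assumed exponential residence-time PDF
print(well_concentration(c_atm, g, dt)[-1])  # modeled concentration at the well today
```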
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF^2), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
Postfragmentation density function for bacterial aggregates in laminar flow
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John
2014-01-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. PMID:21599205
Speech processing using conditional observable maximum likelihood continuity mapping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogden, John; Nix, David
A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.
Azari, Mansour R; Sadighzadeh, Asghar; Bayatian, Majid
2018-06-19
Accidents have happened in the chemical industries all over the world with serious consequences for the adjacent heavily populated areas. In this study, the impact of a hypothetical event releasing considerable amounts of hydrogen fluoride (HF), a strong irritant, into the atmosphere over the city of Isfahan from a strategic chemical plant was simulated by computational fluid dynamics (CFD). In this model, the meteorological parameters were integrated into time and space, and dispersion of the pollutants was estimated based on a probable accidental release of HF. According to the results of the simulation model in this study, HF clouds reached Isfahan in 20 min and exposed 80% of the general public to HF concentrations in the range of 0-34 ppm before dissipating 240 min after the incident. Assuming a uniform population density in the vicinity of Isfahan (population 1.75 million), 5% of the population (87,500 people) could be exposed for a few minutes to an HF concentration as high as 34 ppm. This concentration is higher than the very hazardous level described as Immediately Dangerous to Life or Health (30 ppm). This hypothetical evaluation of environmental exposure to HF and its potential health risks is instrumental for risk management on behalf of the general public of Isfahan. Similar studies based on probable accidental scenarios, along with the application of a simulation model for computation of dispersed pollutants, are recommended for risk evaluation and management of cities in developing countries with a fast pace of urbanization around industrial sites.
NASA Astrophysics Data System (ADS)
Gomez, Jose Alfonso; Owens, Phillip N.; Koiter, Alex J.; Lobb, David
2016-04-01
One of the major sources of uncertainty in attributing sediment sources in fingerprinting studies is the uncertainty in determining the concentrations of the elements used in the mixing model due to the variability of the concentrations of these elements in the source materials (e.g., Kraushaar et al., 2015). The uncertainty in determining the "true" concentration of a given element in each one of the source areas depends on several factors, among them the spatial variability of that element, the sampling procedure, and the sampling density. Researchers have limited control over these factors, and usually sampling density tends to be sparse, limited by time and the resources available. Monte Carlo analysis has been used regularly in fingerprinting studies to explore the probable solutions within the measured variability of the elements in the source areas, providing an appraisal of the probability of the different solutions (e.g., Collins et al., 2012). This problem can be considered analogous to the propagation of uncertainty in hydrologic models due to uncertainty in the determination of the values of the model parameters, and there are many examples of Monte Carlo analysis of this uncertainty (e.g., Freeze, 1980; Gómez et al., 2001). Some of these model analyses rely on the simulation of "virtual" situations that were calibrated from parameter values found in the literature, with the purpose of providing insight about the response of the model to different configurations of input parameters. This approach - evaluating the answer for a "virtual" problem whose solution could be known in advance - might be useful in evaluating the propagation of uncertainty in mixing models in sediment fingerprinting studies. In this communication, we present the preliminary results of an ongoing study evaluating the effect of variability of element concentrations in source materials, sampling density, and the number of elements included in the mixing models. For this study a virtual catchment was constructed, composed of three sub-catchments, each 500 x 500 m in size. We assumed that there was no selectivity in sediment detachment or transport. A numerical exercise was performed considering these variables: 1) variability of element concentration: three levels with CVs of 20 %, 50 % and 80 %; 2) sampling density: 10, 25 and 50 "samples" per sub-catchment and element; and 3) number of elements included in the mixing model: two (determined), and five (overdetermined). This resulted in a total of 18 (3 x 3 x 2) possible combinations. The five fingerprinting elements considered in the study were: C, N, 40K, Al and Pavail, and their average values, taken from the literature, were: sub-catchment 1: 4.0 %, 0.35 %, 0.50 ppm, 5.0 ppm, 1.42 ppm, respectively; sub-catchment 2: 2.0 %, 0.18 %, 0.20 ppm, 10.0 ppm, 0.20 ppm, respectively; and sub-catchment 3: 1.0 %, 0.06 %, 1.0 ppm, 16.0 ppm, 7.8 ppm, respectively. For each sub-catchment, three maps of the spatial distribution of each element were generated using the random generator of Mejia and Rodriguez-Iturbe (1974) as described in Freeze (1980), using the average value and the three different CVs defined above. Each map for each source area and property was generated for a 100 x 100 square grid, each grid cell being 5 m x 5 m. Maps were randomly generated for each property and source area. In doing so, we did not consider the possibility of cross correlation among properties. Spatial autocorrelation was assumed to be weak.
The reason for generating the maps was to create a "virtual" situation where all the element concentration values at each point are known. Simultaneously, we arbitrarily determined the percentage of sediment coming from each sub-catchment. These values were 30 %, 10 % and 60 %, for sub-catchments 1, 2 and 3, respectively. Using these values, we determined the element concentrations in the sediment. The exercise consisted of creating different sampling strategies in a virtual environment to determine an average value for each of the different maps of element concentration and sub-catchment, under different sampling densities: 200 different average values for the "high" sampling density (average of 50 samples); 400 different average values for the "medium" sampling density (average of 25 samples); and 1,000 different average values for the "low" sampling density (average of 10 samples). All these combinations of possible values of element concentrations in the source areas were solved for the sediment concentration already determined for the "true" solution, using limSolve (Soetaert et al., 2014) in the R language (a minimal Python stand-in for this solve step is sketched after the references below). The sediment source solutions found for the different situations and values were analyzed in order to: 1) evaluate the uncertainty in the sediment source attribution; and 2) explore strategies to detect the most probable solutions that might lead to improved methods for constructing the most robust mixing models. Preliminary results on these will be presented and discussed in this communication. Key words: sediment, fingerprinting, uncertainty, variability, mixing model. References: Collins, A.L., Zhang, Y., McChesney, D., Walling, D.E., Haley, S.M., Smith, P. 2012. Sediment source tracing in a lowland agricultural catchment in southern England using a modified procedure combining statistical analysis and numerical modelling. Science of the Total Environment 414: 301-317. Freeze, R.A. 1980. A stochastic-conceptual analysis of rainfall-runoff processes on a hillslope. Water Resources Research 16: 391-408.
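As noted above, a minimal Python stand-in for the limSolve step, under the stated assumptions (the five element means from the abstract, the imposed 30/10/60 proportions, and a soft sum-to-one constraint):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Columns = sub-catchments 1..3; rows = C (%), N (%), 40K (ppm), Al (ppm), Pavail (ppm).
M = np.array([[4.00, 2.00, 1.00],
              [0.35, 0.18, 0.06],
              [0.50, 0.20, 1.00],
              [5.00, 10.0, 16.0],
              [1.42, 0.20, 7.80]])
true_p = np.array([0.30, 0.10, 0.60])   # imposed source proportions
sediment = M @ true_p                   # "true" sediment composition

# Append a heavily weighted equation enforcing sum(p) = 1, then solve with 0 <= p <= 1.
w = 1e3
A = np.vstack([M, w * np.ones((1, 3))])
b = np.append(sediment, w * 1.0)
print(lsq_linear(A, b, bounds=(0.0, 1.0)).x)   # recovers ~[0.30, 0.10, 0.60]
```

In the actual exercise the left-hand side uses the sampled (noisy) source-area means, so each of the 200-1,000 resampled mean vectors yields a different attribution.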
Probability and Quantum Paradigms: the Interplay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kracklauer, A. F.
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
Probability and Quantum Paradigms: the Interplay
NASA Astrophysics Data System (ADS)
Kracklauer, A. F.
2007-12-01
Since the introduction of Born's interpretation of quantum wave functions as yielding the probability density of presence, Quantum Theory and Probability have lived in a troubled symbiosis. Problems arise with this interpretation because quantum probabilities exhibit features alien to usual probabilities, namely non Boolean structure and non positive-definite phase space probability densities. This has inspired research into both elaborate formulations of Probability Theory and alternate interpretations for wave functions. Herein the latter tactic is taken and a suggested variant interpretation of wave functions based on photo detection physics proposed, and some empirical consequences are considered. Although incomplete in a few details, this variant is appealing in its reliance on well tested concepts and technology.
LFSPMC: Linear feature selection program using the probability of misclassification
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Marion, B. P.
1975-01-01
The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
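A minimal sketch of the underlying idea (not the LFSPMC program itself): for two known multivariate normal classes with equal priors, search for the single linear combination that minimizes the one-dimensional misclassification probability of the projected densities. The means, covariances, and midpoint threshold below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

mu = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]        # assumed class means
cov = [np.eye(2), np.array([[1.5, 0.3], [0.3, 0.8]])]    # assumed class covariances

def p_misclass(b):
    b = b / np.linalg.norm(b)                  # direction only; scale is irrelevant
    m = [b @ mu[i] for i in range(2)]          # projected class means
    s = [np.sqrt(b @ cov[i] @ b) for i in range(2)]  # projected standard deviations
    t = 0.5 * (m[0] + m[1])                    # simple midpoint decision threshold
    return 0.5 * (norm.sf(t, m[0], s[0]) + norm.cdf(t, m[1], s[1]))

res = minimize(p_misclass, x0=np.array([1.0, 1.0]))
print(res.x / np.linalg.norm(res.x), p_misclass(res.x))
```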
Water quality in the proposed Prosperity Reservoir area, Center Creek Basin, Missouri
Barks, James H.; Berkas, Wayne R.
1979-01-01
Water in Center Creek basin, Mo., upstream from the proposed Prosperity Reservoir damsite is a calcium bicarbonate type that is moderately mineralized, hard, and slightly alkaline. Ammonia and organic nitrogen, phosphorus, total organic carbon, chemical oxygen demand, and bacteria increased considerably during storm runoff, probably due to livestock wastes. Nitrogen and phosphorus concentrations are probably high enough to cause the proposed lake to be eutrophic. Minor-element concentrations were at or near normal levels in Center and Jones Creeks. The only pesticides detected were 0.01 micrograms per liter of 2,4,5-T in one base-flow sample and 0.02 to 0.04 micrograms per liter of 2,4,5-T and 2,4-D in all storm-runoff samples. Fecal coliform and fecal streptococcus densities ranged from 2 to 650 and 2 to 550 colonies per 100 milliliters, respectively, during base flow, but were 17,000 to 45,000 and 27,000 to 70,000 colonies per 100 milliliters, respectively, during storm runoff. Water in Center Creek about 2.5 miles downstream from the proposed damsite is similar in quality to that upstream from the damsite except for higher concentrations of sodium, sulfate, chloride, fluoride, nitrogen, and phosphorus. These higher concentrations are caused by fertilizer industry wastes that enter Center Creek about 1.0 mile downstream from the proposed damsite. (Woodard-USGS).
Analysis and generation of groundwater concentration time series
NASA Astrophysics Data System (ADS)
Crăciun, Maria; Vamoş, Călin; Suciu, Nicolae
2018-01-01
Concentration time series are provided by simulated concentrations of a nonreactive solute transported in groundwater, integrated over the transverse direction of a two-dimensional computational domain and recorded at the plume center of mass. The analysis of a statistical ensemble of time series reveals subtle features that are not captured by the first two moments which characterize the approximate Gaussian distribution of the two-dimensional concentration fields. The concentration time series exhibit a complex preasymptotic behavior driven by a nonstationary trend and correlated fluctuations with time-variable amplitude. Time series with almost the same statistics are generated by successively adding to a time-dependent trend a sum of linear regression terms, accounting for correlations between fluctuations around the trend and their increments in time, and terms of an amplitude modulated autoregressive noise of order one with time-varying parameter. The algorithm generalizes mixing models used in probability density function approaches. The well-known interaction by exchange with the mean mixing model is a special case consisting of a linear regression with constant coefficients.
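A minimal generator in this spirit, with assumed (illustrative) trend, amplitude modulation, and time-varying AR(1) parameter rather than the calibrated forms from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
t = np.arange(n)
trend = np.exp(-t / 200.0)                       # assumed nonstationary trend
amp = 0.1 * (1.0 + np.sin(2.0 * np.pi * t / n))  # assumed time-variable amplitude
phi = 0.9 - 0.3 * t / n                          # assumed time-varying AR(1) parameter

x = np.zeros(n)
for k in range(1, n):
    x[k] = phi[k] * x[k - 1] + rng.normal()      # autoregressive noise of order one

series = trend + amp * x                         # modulated fluctuations around trend
print(series[:5])
```

The full algorithm in the abstract also adds linear regression terms coupling the fluctuations and their increments in time; the sketch keeps only the trend and the amplitude-modulated AR(1) core.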
ERIC Educational Resources Information Center
Riggs, Peter J.
2013-01-01
Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
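A hedged sketch of the construction in standard MaxEnt notation (one scalar constraint, Gaussian uncertainty assumed for illustration):

```latex
p_i(F) = \frac{e^{-\lambda(F)\, f_i}}{Z\big(\lambda(F)\big)},
\qquad \textstyle\sum_i p_i(F)\, f_i = F,
\qquad F \sim \mathcal{N}(\hat{F}, \sigma^2),
```

so the classic point solution becomes a function of the uncertain constraint value F, and the Gaussian density on F is pushed through this map to yield a posterior density over the MaxEnt probabilities themselves.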
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
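A hedged sketch of the expansion named above, with θ the polar angle of the free-layer magnetization and the switched state taken (as an assumption of this sketch) to be the θ > π/2 hemisphere:

```latex
\rho(\theta, t) = \sum_{n=0}^{\infty} a_n(t)\, P_n(\cos\theta),
\qquad
P_{\mathrm{sw}}(t) = 2\pi \int_{\pi/2}^{\pi} \rho(\theta, t)\, \sin\theta \,\mathrm{d}\theta,
```

where orthogonality of the Legendre polynomials turns the axisymmetric evolution equation into equations for the coefficients a_n(t), and the switching probability reduces to a compact sum over them.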
Postfragmentation density function for bacterial aggregates in laminar flow.
Byrne, Erin; Dzul, Steve; Solomon, Michael; Younger, John; Bortz, David M
2011-04-01
The postfragmentation probability density of daughter flocs is one of the least well-understood aspects of modeling flocculation. We use three-dimensional positional data of Klebsiella pneumoniae bacterial flocs in suspension and the knowledge of hydrodynamic properties of a laminar flow field to construct a probability density function of floc volumes after a fragmentation event. We provide computational results which predict that the primary fragmentation mechanism for large flocs is erosion. The postfragmentation probability density function has a strong dependence on the size of the original floc and indicates that most fragmentation events result in clumps of one to three bacteria eroding from the original floc. We also provide numerical evidence that exhaustive fragmentation yields a limiting density inconsistent with the log-normal density predicted in the literature, most likely due to the heterogeneous nature of K. pneumoniae flocs. To support our conclusions, artificial flocs were generated and display similar postfragmentation density and exhaustive fragmentation. ©2011 American Physical Society
Probabilistic Simulation of Stress Concentration in Composite Laminates
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Murthy, P. L. N.; Liaw, D. G.
1994-01-01
A computational methodology is described to probabilistically simulate the stress concentration factors (SCF's) in composite laminates. This new approach consists of coupling probabilistic composite mechanics with probabilistic finite element structural analysis. The composite mechanics is used to probabilistically describe all the uncertainties inherent in composite material properties, whereas the finite element analysis is used to probabilistically describe the uncertainties associated with methods to experimentally evaluate SCF's, such as loads, geometry, and supports. The effectiveness of the methodology is demonstrated by using it to simulate the SCF's in three different composite laminates. Simulated results match experimental data for probability density and for cumulative distribution functions. The sensitivity factors indicate that the SCF's are influenced by local stiffness variables, by load eccentricities, and by initial stress fields.
ERIC Educational Resources Information Center
Heisler, Lori; Goffman, Lisa
2016-01-01
A word learning paradigm was used to teach children novel words that varied in phonotactic probability and neighborhood density. The effects of frequency and density on speech production were examined when phonetic forms were nonreferential (i.e., when no referent was attached) and when phonetic forms were referential (i.e., when a referent was…
On the continuity of the stationary state distribution of DPCM
NASA Astrophysics Data System (ADS)
Naraghi-Pour, Morteza; Neuhoff, David L.
1990-03-01
Continuity and singularity properties of the stationary state distribution of differential pulse code modulation (DPCM) are explored. Two-level DPCM (i.e., delta modulation) operating on a first-order autoregressive source is considered, and it is shown that, when the magnitude of the DPCM prediction coefficient is between zero and one-half, the stationary state distribution is singularly continuous; i.e., it is not discrete but concentrates on an uncountable set with a Lebesgue measure of zero. Consequently, it cannot be represented with a probability density function. For prediction coefficients with magnitude greater than or equal to one-half, the distribution is pure, i.e., either absolutely continuous and representable with a density function, or singular. This problem is compared to the well-known and still substantially unsolved problem of symmetric Bernoulli convolutions.
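The comparison with Bernoulli convolutions can be made explicit. In the analogous i.i.d.-sign model (an idealization; in DPCM the signs are state-dependent), the stationary state is the random geometric series

```latex
X = \sum_{k=0}^{\infty} \varepsilon_k\, a^{k}, \qquad \varepsilon_k = \pm 1 \ \text{i.i.d. with probability } \tfrac{1}{2},
```

whose law is the symmetric Bernoulli convolution: supported on a Cantor set (hence singular) for 0 < a < 1/2, and pure, either absolutely continuous or singular, for a ≥ 1/2.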
Influence of turbulent fluctuations on non-equilibrium chemical reactions in the flow
NASA Astrophysics Data System (ADS)
Molchanov, A. M.; Yanyshev, D. S.; Bykov, L. V.
2017-11-01
In chemically nonequilibrium flows, the calculation of source terms (formation rates) in the equations for chemical species is of utmost importance. The formation rate of each component is a non-linear function of mixture density, temperature, and species concentrations. Thus the assumption that the mean rate can be determined from mean values of the flow parameters can lead to significant errors. One of the most accurate approaches here is utilization of the probability density function (PDF). In this paper, a method for constructing such PDFs is developed. The developed model was verified by comparison with experimental data. Using supersonic combustion as an example, it is shown that while the overall effect on the averaged flow field is often negligible, the ignition point can shift considerably upstream.
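The averaging step at issue can be written compactly. With ω̇ the nonlinear formation rate and P the joint PDF of the thermochemical variables:

```latex
\overline{\dot{\omega}} \;=\; \int \dot{\omega}(\rho, T, Y)\, P(\rho, T, Y)\,\mathrm{d}\rho\,\mathrm{d}T\,\mathrm{d}Y
\;\neq\; \dot{\omega}(\overline{\rho}, \overline{T}, \overline{Y}),
```

and this inequality is exactly why evaluating the rate at mean flow parameters can produce significant errors.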
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
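A minimal sketch of the surveillance scheme shared by these three records: fit a density to residuals from known-normal operation, then run a sequential probability ratio test (SPRT) against a faulted alternative. The Gaussian fit and the shifted-mean fault hypothesis are assumptions of this sketch, not claims about the patented method:

```python
import numpy as np
from scipy import stats

def sprt(residuals, mu0, sigma, mu1, alpha=0.01, beta=0.01):
    """Walk the cumulative log-likelihood ratio until a Wald boundary is hit."""
    lower, upper = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = np.cumsum(stats.norm.logpdf(residuals, mu1, sigma)
                    - stats.norm.logpdf(residuals, mu0, sigma))
    for value in llr:
        if value >= upper:
            return "fault"
        if value <= lower:
            return "normal"
    return "continue"

rng = np.random.default_rng(1)
training = rng.normal(0.0, 0.2, 1000)            # residuals from normal operation
mu0, sigma = training.mean(), training.std()     # fitted residual density
print(sprt(rng.normal(0.5, 0.2, 50), mu0, sigma, mu1=0.5))  # drifted asset -> 'fault'
```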
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns of the Deep Space Network antennas, as well as their cumulative gain probability and probability density functions, are developed. These are needed for the study and evaluation of interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone Station of the Deep Space Network.
Su, Nan-Yao; Lee, Sang-Hee
2008-04-01
Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance 1, which is the distance between the release point and the boundary beyond which the population is absent.
Scalar decay in two-dimensional chaotic advection and Batchelor-regime turbulence
NASA Astrophysics Data System (ADS)
Fereday, D. R.; Haynes, P. H.
2004-12-01
This paper considers the decay in time of an advected passive scalar in a large-scale flow. The relation between the decay predicted by "Lagrangian stretching theories," which consider evolution of the scalar field within a small fluid element and then average over many such elements, and that observed at large times in numerical simulations, associated with emergence of a "strange eigenmode," is discussed. Qualitative arguments are supported by results from numerical simulations of scalar evolution in two-dimensional spatially periodic, time aperiodic flows, which highlight the differences between the actual behavior and that predicted by the Lagrangian stretching theories. In some cases the decay rate of the scalar variance is different from the theoretical prediction and determined globally and in other cases it apparently matches the theoretical prediction. An updated theory for the wavenumber spectrum of the scalar field and a theory for the probability distribution of the scalar concentration are presented. The wavenumber spectrum and the probability density function both depend on the decay rate of the variance, but can otherwise be calculated from the statistics of the Lagrangian stretching history. In cases where the variance decay rate is not determined by the Lagrangian stretching theory, the wavenumber spectrum for scales that are much smaller than the length scale of the flow but much larger than the diffusive scale is argued to vary as k^(-1+ρ), where k is wavenumber, and ρ is a positive number which depends on the decay rate of the variance γ2 and on the Lagrangian stretching statistics. The probability density function for the scalar concentration is argued to have algebraic tails, with exponent roughly -3 and with a cutoff that is determined by diffusivity κ and scales roughly as κ^(-1/2), and these predictions are shown to be in good agreement with numerical simulations.
Graca, Bożena; Szewc, Karolina; Zakrzewska, Danuta; Dołęga, Anna; Szczerbowska-Boruchowska, Magdalena
2017-03-01
The sources and fate of microplastics (particle size ≤5 mm) in marine bottom and beach sediments of the brackish and strongly polluted Baltic Sea have been investigated. Microplastics were extracted using sodium chloride (1.2 g cm^-3). Their qualitative identification was conducted using micro-Fourier-transform infrared spectroscopy (μFT-IR). The concentration of microplastics varied from 25 particles kg^-1 d.w. at an open-sea beach to 53 particles kg^-1 d.w. at beaches of a strongly urbanized bay. In bottom sediments, microplastics concentration was visibly lower compared to beach sediments (0-27 particles kg^-1 d.w.) and decreased from the shore to the open, deep-sea regions. The most frequent microplastics dimensions ranged from 0.1 to 2.0 mm, and transparent fibers were predominant. Polyester, which is a popular fabric component, was the most common type of microplastic in both marine bottom (50%) and beach sediments (27%). Additionally, poly(vinyl acetate), used in shipbuilding, as well as poly(ethylene-propylene), used for packaging, were numerous in marine bottom (25% of all polymers) and beach sediments (18% of all polymers). Polymer density seems to be an important factor influencing microplastics circulation. Low-density plastic debris probably recirculates between beach sediments and seawater to a greater extent than higher-density debris. Therefore, its deposition is potentially limited and physical degradation is favored. Consequently, the concentration of low-density microplastics may be underestimated using current methods due to the very small size of the degraded debris. This also influences the findings of qualitative microplastics research, which provides the basis for conclusions about the sources of microplastics in the marine environment.
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
Derivation of Hunt equation for suspension distribution using Shannon entropy theory
NASA Astrophysics Data System (ADS)
Kundu, Snehasis
2017-12-01
In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also helps to derive the Rouse equation. The entropy-based approach helps to estimate model parameters from measured sediment concentration data, which shows the advantage of using entropy theory. Finally, the parameters of the entropy-based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.
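A hedged sketch of the entropy step, with f the density of the concentration-related variable and one mean constraint shown for illustration:

```latex
\max_{f} \; -\!\int f(x)\ln f(x)\,\mathrm{d}x
\quad\text{s.t.}\quad \int f\,\mathrm{d}x = 1, \;\; \int x f\,\mathrm{d}x = \bar{x}
\;\;\Longrightarrow\;\; f(x) = e^{-\lambda_0 - \lambda_1 x},
```

with the Lagrange multipliers λ0 and λ1 fixed by the constraints; integrating f gives the cumulative distribution function from which the concentration profile follows.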
Transfer-matrix study of a hard-square lattice gas with two kinds of particles and density anomaly
NASA Astrophysics Data System (ADS)
Oliveira, Tiago J.; Stilck, Jürgen F.
2015-09-01
Using transfer matrix and finite-size scaling methods, we study the thermodynamic behavior of a lattice gas with two kinds of particles on the square lattice. Only excluded volume interactions are considered, so that the model is athermal. Large particles exclude the site they occupy and its four first neighbors, while small particles exclude only their site. Two thermodynamic phases are found: a disordered phase where large particles occupy both sublattices with the same probability and an ordered phase where one of the two sublattices is preferentially occupied by them. The transition between these phases is continuous at small concentrations of the small particles and discontinuous at larger concentrations; the two regimes are separated by a tricritical point. Estimates of the central charge suggest that the critical line is in the Ising universality class, while the tricritical point has tricritical Ising (Blume-Emery-Griffiths) exponents. The isobaric curves of the total density as functions of the fugacity of small or large particles display a minimum in the disordered phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mysina, N Yu; Maksimova, L A; Ryabukho, V P
Investigated are statistical properties of the phase difference of oscillations in speckle-fields at two points in the far-field diffraction region, with different shapes of the scatterer aperture. Statistical and spatial nonuniformity of the probability density function of the field phase difference is established. Numerical experiments show that, for speckle-fields with an oscillating alternating-sign transverse correlation function, a significant nonuniformity of the probability density function of the phase difference in the correlation region of the field complex amplitude, with the most probable values 0 and π, is observed. A natural statistical interference experiment using Young diagrams has confirmed the results of numerical experiments. (laser applications and other topics in quantum electronics)
DOC and DON Dynamics along the Bagmati Drainage Network in Kathmandu Valley
NASA Astrophysics Data System (ADS)
Bhatt, M. P.; McDowell, W. H.
2005-05-01
We studied organic matter dynamics and inorganic chemistry of the Bagmati River in Kathmandu valley, Nepal, to understand the influence of human and geochemical processes on chemical loads along the drainage system. Population density appears to be the most fundamental control on the chemistry of surface waters within the Bagmati drainage system. DOC concentration increases 10-fold with distance downstream (from 2.38 to 23.95 mg/L) and shows a strong relationship with human population density. The composition of river water (nutrients, Cl) suggests that sewage effluent to the river has a major effect on water quality. Concentrations were highest during summer, and lowest during the winter monsoon season. In contrast to DOC, DON concentration shows surprisingly little variation and tends to decrease with distance downstream. Ammonium contributes almost all nitrogen in the total dissolved nitrogen fraction and the concentration of nitrate is negligible, probably due to rapid denitrification within the stream channel under relatively low-oxygen conditions. Decreases in sulfate along the stream channel may also reflect the reduction of sulfate to sulfide under the heavy organic matter loading. Water quality is unacceptable for any use and the whole ecosystem is severely affected within the urban areas. Based on a comparison of downstream and upstream water quality, it appears that human activities along the Bagmati, principally inputs of human sewage, are largely responsible for the changes in surface water chemistry within Kathmandu valley.
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.
2017-08-01
The analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to those cosmic fields assuming their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space resolved velocity field, finding a 2 per cent level agreement for a wide range of velocity divergences and densities in the mildly non-linear regime (˜10 Mpc h-1 at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and conditional velocity PDF subject to a given over/underdensity that are of interest to understand the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields, which have smaller relative errors than their sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ8, f) or it can be combined with the galaxy density PDF to extract bias parameters.
How the climate limits the wood density of angiosperms
NASA Astrophysics Data System (ADS)
Choi, Jin Woo; Kim, Ho-Young
2017-11-01
Flowering trees have various types of wood structure to perform multiple functions under their environmental conditions. In addition to transporting water from the roots to the canopy and providing mechanical support, the structure should provide resistance to embolism to maintain the soil-plant-atmosphere continuum. By investigating existing data on the resistance to embolism and wood density of 165 angiosperm species, here we show that the climate can limit the intrinsic properties of trees. Trees living in dry environments require a high wood density to slow down the pressure decrease, as they lose water relatively fast by evaporation. However, building too much tissue results in decreased hydraulic conductivity and moisture concentration around mesophyll cells. To rationalize the biologically observed lower bound of the wood density, we construct a mechanical model to predict the wood density as a function of the vulnerability to embolism and the time for recovery. We also build an artificial system using hydrogel microchannels that can test the probability of embolism as a function of conduit distributions. Our theoretical prediction is shown to be consistent with the results obtained from the artificial system and the biological data.
Litterfall mercury deposition in Atlantic forest ecosystem from SE-Brazil.
Teixeira, Daniel C; Montezuma, Rita C; Oliveira, Rogério R; Silva-Filho, Emmanoel V
2012-05-01
Litterfall is believed to be the major flux of Hg to soils in forested landscapes, yet much less is known about this input in tropical environments. The Hg litterfall flux was measured during one year in an Atlantic Forest fragment located within the Rio de Janeiro urban perimeter, in the southeastern region of Brazil. The results indicated a mean annual Hg concentration of 238 ± 52 ng g^-1 and a total annual Hg deposition of 184 ± 8.2 μg m^-2 y^-1. The negative correlation observed between precipitation and Hg concentrations is probably related to the higher photosynthetic activity observed during summer. The total Hg concentration in leaves from the most abundant species varied from 60 to 215 ng g^-1. Hg concentration showed a positive correlation with stomatal and trichome densities. These characteristics support the hypothesis that tropical forest is an efficient mercury sink and that litter plays a key role in Hg dynamics. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Qiangkun; Hu, Yawei; Jia, Qian; Song, Changji
2018-02-01
The estimation of pollutant concentrations in agricultural drains is the key point of quantitative research on agricultural non-point-source pollution loads. Guided by uncertainty theory, the combination of fertilization and irrigation is treated as an impulse input to the farmland, and the pollutant concentration in the agricultural drain is regarded as the response process corresponding to that impulse input. The migration and transformation of pollutants in soil are represented by an inverse Gaussian probability density function, and their behaviour at different crop growth stages is captured by adjusting the parameters of the inverse Gaussian distribution. On this basis, an estimation model for pollutant concentration in agricultural drains at the field scale was constructed. Taking the Qing Tong Xia Irrigation District in Ningxia as an example, the concentrations of nitrate nitrogen and total phosphorus in agricultural drains were simulated with this model. The simulated results agreed approximately with the measured data, with Nash-Sutcliffe coefficients of 0.972 and 0.964, respectively.
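A minimal numerical sketch of the impulse-response idea described above: each fertilization/irrigation event is an impulse whose breakthrough in the drain follows an inverse Gaussian density. Parameter values, the event schedule, and the `response` helper are illustrative assumptions, not the calibrated Qing Tong Xia model.

```python
import numpy as np
from scipy.stats import invgauss

# Each event contributes load * IG-pdf(t - t0); the drain concentration is
# the superposition of these impulse responses. All numbers are illustrative.
t = np.linspace(0.01, 120.0, 600)                    # days since season start
events = [(5.0, 40.0), (35.0, 25.0), (70.0, 30.0)]   # (event day, mass load)

def response(t, t0, load, mu=15.0, lam=30.0):
    """Inverse Gaussian impulse response with mean mu and shape lam (days).
    scipy's invgauss(mu_s, scale=s) has mean mu_s * s, so IG(mu, lam)
    corresponds to mu_s = mu / lam and scale = lam."""
    dt = np.where(t > t0, t - t0, np.nan)            # response starts at t0
    return load * np.nan_to_num(invgauss.pdf(dt, mu / lam, scale=lam))

conc = sum(response(t, t0, load) for t0, load in events)
print(f"peak concentration {conc.max():.2f} on day {t[np.argmax(conc)]:.1f}")
```

Sharper shape parameters concentrate the response early after each event, which is one way different crop growth stages could be represented.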
Primary gamma rays. [resulting from cosmic ray interaction with interstellar matter
NASA Technical Reports Server (NTRS)
Fichtel, C. E.
1974-01-01
Within this galaxy, cosmic rays reveal their presence in interstellar space and probably in source regions by their interactions with interstellar matter which lead to gamma rays with a very characteristic energy spectrum. From the study of the intensity of the high energy gamma radiation as a function of galactic longitude, it is already clear that cosmic rays are almost certainly not uniformly distributed in the galaxy and are not concentrated in the center of the galaxy. The galactic cosmic rays appear to be tied to galactic structural features, presumably by the galactic magnetic fields which are in turn held by the matter in the arm segments and the clouds. On the extragalactic scale, it is now possible to say that cosmic rays are not universal at the density seen near the earth. The diffuse celestial gamma ray spectrum that is observed presents the interesting possibility of cosmological studies and possible evidence for a residual universal cosmic ray density, which is much lower than the present galactic cosmic ray density.
DCMDN: Deep Convolutional Mixture Density Network
NASA Astrophysics Data System (ADS)
D'Isanto, Antonio; Polsterer, Kai Lars
2017-09-01
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows solving any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
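Both scores mentioned above can be estimated directly from samples drawn from a predicted PDF. A small sketch of the standard sample-based estimators (not the DCMDN code itself; the two-component Gaussian-mixture toy prediction is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def crps_from_samples(samples, y):
    """Sample-based CRPS estimator: E|X - y| - 0.5 * E|X - X'|."""
    s = np.asarray(samples)
    return np.mean(np.abs(s - y)) - 0.5 * np.mean(np.abs(s[:, None] - s[None, :]))

def pit_from_samples(samples, y):
    """Probability integral transform: predictive CDF evaluated at the truth."""
    return np.mean(np.asarray(samples) <= y)

# Toy predicted redshift PDF: a two-component Gaussian mixture.
comp = rng.choice(2, size=2000, p=[0.7, 0.3])
samples = np.where(comp == 0, rng.normal(0.5, 0.05, 2000),
                   rng.normal(0.9, 0.10, 2000))
print(crps_from_samples(samples, 0.55), pit_from_samples(samples, 0.55))
```

Well-calibrated PDFs yield PIT values that are uniform over many test objects; lower CRPS indicates sharper, better-centred PDFs.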
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Coincidence probability as a measure of the average phase-space density at freeze-out
NASA Astrophysics Data System (ADS)
Bialas, A.; Czyz, W.; Zalewski, K.
2006-02-01
It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.
Calmettes, Claire; Gabriel, Frederic; Blanchard, Elodie; Servant, Vincent; Bouchet, Stéphane; Kabore, Nathanael; Forcade, Edouard; Leroyer, Camille; Bidet, Audrey; Latrabe, Valérie; Leguay, Thibaut; Vigouroux, Stephane; Tabrizi, Reza; Breilh, Dominique; Accoceberry, Isabelle; de Lara, Manuel Tunon; Pigneux, Arnaud; Milpied, Noel; Dumas, Pierre-Yves
2018-01-01
Posaconazole prophylaxis has demonstrated efficacy in the prevention of invasive aspergillosis during prolonged neutropenia following acute myeloid leukemia induction chemotherapy. Antifungal treatment decreases the diagnostic accuracy of the serum galactomannan enzyme immunoassay, which could delay diagnosis and treatment. We retrospectively studied patients with acute myeloid leukemia who underwent intensive chemotherapy and antifungal prophylaxis with posaconazole oral suspension. Clinical, radiological and microbiological features and treatment response of patients with invasive aspergillosis that occurred despite posaconazole prophylaxis were analyzed. The diagnostic accuracy of the serum galactomannan assay was assessed according to posaconazole plasma concentrations. A total of 288 patients with acute myeloid leukemia, treated by induction chemotherapy, who received posaconazole prophylaxis for more than five days were included in the present study. The incidence of invasive aspergillosis was 8%, with 12 (4.2%), 8 (2.8%) and 3 (1%) possible, probable and proven cases of invasive aspergillosis, respectively. Posaconazole plasma concentration was available for 258 patients. Median duration of posaconazole treatment was 17 days, and median posaconazole plasma concentration was 0.5 mg/L. None of the patients with invasive aspergillosis and a posaconazole concentration ≥ 0.5 mg/L had a positive serum galactomannan test. The sensitivity of the serum galactomannan assay to detect probable and proven invasive aspergillosis was 81.8%. Decreasing the cut-off value for the serum galactomannan optical density index from 0.5 to 0.3 increased sensitivity to 90.9%. In a homogeneous cohort of acute myeloid leukemia patients during induction chemotherapy, increasing posaconazole concentrations decreased the sensitivity of the serum galactomannan assay.
Wang, Yu-Lin; Wang, Ying; Yi, Hai-Bo
2016-07-21
In this study, the structural characteristics of high-coordinated Ca-Cl complexes present in mixed CaCl2-LiCl aqueous solution were investigated using density functional theory (DFT) and molecular dynamics (MD) simulations. The DFT results show that [CaClx]^(2-x) (x = 4-6) clusters are quite unstable in the gas phase, but these clusters become metastable when hydration is considered. The MD simulations show that high-coordinated Ca-chloro complexes are possible transient species that exist for up to nanoseconds in concentrated (11.10 mol kg^-1) Cl^- solution at 273 and 298 K. As the temperature increases to 423 K, these high-coordinated structures tend to dissociate and convert into smaller clusters and single free ions. The presence of high-order Ca-Cl species in concentrated LiCl solution can be attributed to their enhanced hydration shell and the inadequate hydration of ions. The probability of the [CaClx]^(2-x) (x = 4-6) species being present in concentrated LiCl solution decreases greatly with increasing temperature, which also indicates that the formation of the high-coordinated Ca-Cl structure is related to its hydration characteristics.
Chlorine dioxide reactions with indoor materials during building disinfection: surface uptake.
Hubbard, Heidi; Poppendieck, Dustin; Corsi, Richard L
2009-03-01
Chlorine dioxide received attention as a building disinfectant in the wake of Bacillus anthracis contamination of several large buildings in the fall of 2001. It is increasingly used for the disinfection of homes and other indoor environments afflicted by mold. However, little is known regarding the interaction of chlorine dioxide and indoor materials, particularly as related to the removal of chlorine dioxide from air. Such removal may be undesirable with respect to the subsequent formation of localized zones of depleted disinfectant concentrations and potential reductions in disinfection effectiveness in a building. The focus of this paper is on chlorine dioxide removal from air to each of 24 different indoor materials. Experiments were completed with materials housed in flow-through 48-L stainless steel chambers under standard conditions of 700 ppm chlorine dioxide inlet concentration, 75% relative humidity, 24 degrees C, and an air-change rate of 0.5 h^-1. Chlorine dioxide concentration profiles, deposition velocities, and reaction probabilities are described in this paper. Deposition velocities and reaction probabilities varied over approximately 2 orders of magnitude across all materials. For most materials, deposition velocity decreased significantly over a 16-h disinfection period; that is, materials became smaller sinks for chlorine dioxide with time. Four materials (office partition, ceiling tile, medium density fiberboard, and gypsum wallboard) accounted for the most short- and long-term consumption of chlorine dioxide. Deposition velocity was observed to be a strong function of chlorine dioxide inlet concentration, suggesting the potential importance of chemical reactions on or within test materials.
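Under the usual well-mixed chamber assumption, the deposition velocity follows from a steady-state mass balance on the flow-through chamber. In the sketch below, the chamber volume, air-change rate, and inlet concentration follow the abstract; the exposed material area and the outlet concentration are assumed for illustration.

```python
# Steady-state mass balance for a well-mixed flow-through chamber:
#   Q * C_in = Q * C_out + v_d * A * C_out
#   =>  v_d = Q * (C_in - C_out) / (A * C_out)
V = 48e-3            # chamber volume, m^3 (48 L, from the abstract)
ach = 0.5            # air changes per hour (from the abstract)
Q = V * ach          # volumetric air flow, m^3/h
A = 0.02             # exposed material area, m^2 (assumed)
C_in, C_out = 700.0, 420.0   # ClO2 in ppm; outlet value assumed

v_d = Q * (C_in - C_out) / (A * C_out)
print(f"deposition velocity = {v_d:.2f} m/h = {v_d / 36:.4f} cm/s")
```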
Pope, L.M.; Putnam, J.E.
1997-01-01
A study of urban-related water-quality effects in the Kansas River, Shunganunga Creek Basin, and Soldier Creek in Topeka, Kansas, was conducted from October 1993 through September 1995. The purpose of this report is to assess the effects of urbanization on instream concentrations of selected physical and chemical constituents within the city of Topeka. A network of seven sampling sites was established in the study area. Samples principally were collected at monthly intervals from the Kansas River and from the Shunganunga Creek Basin, and at quarterly intervals from Soldier Creek. The effects of urbanization were statistically evaluated from differences in constituent concentrations between sites on the same stream. No significant differences in median concentrations of dissolved solids, nutrients, or metals and trace elements, or median densities of fecal bacteria were documented between sampling sites upstream and downstream from the major urbanized length of the Kansas River in Topeka. Discharge from the city's primary wastewater-treatment plant is the largest potential source of contamination to the Kansas River. This discharge increased concentrations of dissolved ammonia and total phosphorus, and densities of fecal bacteria. Calculated dissolved ammonia as nitrogen concentrations in water from the Kansas River ranged from 0.03 to 1.1 milligrams per liter after receiving treatment-plant discharge. However, most of the calculated concentrations were considerably less than 50 percent of Kansas Department of Health and Environment water-quality criteria, with a median value of 20 percent. Generally, treatment-plant discharge increased calculated total phosphorus concentrations in water from the Kansas River by 0.01 to 0.04 milligrams per liter, with a median increase of 7.6 percent. The calculated median densities of fecal coliform and fecal streptococci bacteria in water from the Kansas River increased from 120 and 150 colonies per 100 milliliters of water, respectively, before treatment-plant discharge to a calculated 4,900 and 4,700 colonies per 100 milliliters of water, respectively, after discharge. Median concentrations of dissolved solids were not significantly different between three sampling sites in the Shunganunga Creek Basin. Median concentrations of dissolved nitrate as nitrogen, total phosphorus, and dissolved orthophosphate were significantly larger in water from the upstream-most Shunganunga Creek sampling site than in water from either of the other sampling sites in the Shunganunga Creek Basin, probably because of the site's proximity to a wastewater-treatment plant. Median concentrations of dissolved nitrate as nitrogen and total phosphorus during 1993-95 at upstream sampling sites were either significantly larger than during 1979-81, in response to increases in wastewater-treatment plant discharge, or smaller because of the elimination of wastewater-treatment plant discharge. Median concentrations of dissolved ammonia as nitrogen were significantly less during 1993-95 than during 1979-81. Median concentrations of total aluminum, iron, manganese, and molybdenum were significantly larger in water from the downstream-most Shunganunga Creek sampling site than in water from the upstream-most sampling site. This probably reflects their widespread use in the urban environment between the upstream and downstream Shunganunga Creek sampling sites. Little water-quality effect from urbanization was indicated by results from the Soldier Creek sampling site. Median concentrations of most water-quality constituents in water from this sampling site were the smallest of any sampling site in the study area. Herbicides were detected in water from all sampling sites. Some of the more frequently detected herbicides included acetochlor, alachlor, atrazine, cyanazine, EPTC, metolachlor, prometon, simazine, and tebuthiuron. Detected insecticides including chlordane,
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues like losing uncertain information, high time complexity and nonadaptive threshold have not been addressed well in the previous density-based algorithm FDBSCAN and hierarchical density-based algorithm FOPTICS. In this paper, we first propose a novel density-based algorithm PDBSCAN, which improves the previous FDBSCAN in the following respects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability and direct reachability probability, thus reducing the complexity and solving the issue of nonadaptive threshold (for core object judgement) in FDBSCAN. Then, we modify the algorithm PDBSCAN into an improved version (PDBSCANi) by using a better cluster assignment strategy to ensure that every object will be assigned to the most appropriate cluster, thus solving the issue of nonadaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulty clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm POPTICS by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over existing algorithms in accuracy and efficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
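The core quantity in such algorithms is the probability that two uncertain objects lie within a distance threshold of each other. Below is a Monte Carlo baseline under an assumed Gaussian positional-uncertainty model; it is the kind of sampling-based estimate that PDBSCAN's analytic computation aims to improve on.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_within_eps(mu_a, cov_a, mu_b, cov_b, eps, n=100_000):
    """Monte Carlo estimate of P(||A - B|| <= eps) for two objects whose
    positions are Gaussian-distributed (an assumed uncertainty model)."""
    a = rng.multivariate_normal(mu_a, cov_a, size=n)
    b = rng.multivariate_normal(mu_b, cov_b, size=n)
    return np.mean(np.linalg.norm(a - b, axis=1) <= eps)

p = p_within_eps([0, 0], 0.2 * np.eye(2), [1, 1], 0.3 * np.eye(2), eps=1.5)
print(f"P(distance <= eps) ~= {p:.3f}")
```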
2016-01-01
High initial cell density is used to increase volumetric productivity and shorten production time in lignocellulosic hydrolysate fermentation. Comparison of physiological parameters in high initial cell density cultivation of Saccharomyces cerevisiae in the presence of acetic, formic, levulinic and cinnamic acids demonstrated general and acid-specific responses of cells. All the acids studied impaired growth and inhibited glycolytic flux, and caused oxidative stress and accumulation of trehalose. However, trehalose may play a role other than protecting yeast cells from acid-induced oxidative stress. Unlike the other acids, cinnamic acid did not cause depletion of cellular ATP, but abolished the growth of yeast on ethanol. Compared with low initial cell density, increasing the initial cell density reduced the lag phase and improved the bioconversion yield of cinnamic acid during acid adaptation. In addition, yeast cells were able to grow at elevated concentrations of acid, probably due to the increase in phenotypic cell-to-cell heterogeneity at large inoculum sizes. Furthermore, the specific growth rate and the specific rates of glucose consumption and metabolite production were significantly lower than at low initial cell density, which was a result of the accumulation of a large fraction of cells that persisted in a viable but non-proliferating state. PMID:27620460
Tuning the formation of p-type defects by peroxidation of CuAlO2 films
NASA Astrophysics Data System (ADS)
Luo, Jie; Lin, Yow-Jon; Hung, Hao-Che; Liu, Chia-Jyi; Yang, Yao-Wei
2013-07-01
p-type conduction of CuAlO2 thin films was realized by the rf sputtering method. Combining Hall, X-ray photoelectron spectroscopy, energy-dispersive spectrometry, and X-ray diffraction results, a direct link between the hole concentration, Cu vacancies (VCu), and interstitial oxygen (Oi) was established. It is shown that peroxidation of CuAlO2 films may lead to an increased formation probability of acceptors (VCu and Oi), thus increasing the hole concentration. The dependence of the VCu density on growth conditions was identified, providing a guide for tuning the formation of p-type defects in CuAlO2. Understanding the defect-related p-type conductivity of CuAlO2 is essential for designing optoelectronic devices and improving their performance.
Intermittent particle distribution in synthetic free-surface turbulent flows.
Ducasse, Lauris; Pumir, Alain
2008-06-01
Tracer particles on the surface of a turbulent flow have a very intermittent distribution. This preferential concentration effect is studied in a two-dimensional synthetic compressible flow, both in the inertial (self-similar) and in the dissipative (smooth) range of scales, as a function of the compressibility C. The second moment of the concentration coarse grained over a scale r, ⟨n_r^2⟩, behaves as a power law in both the inertial and the dissipative ranges of scale, with two different exponents. The shapes of the probability distribution functions of the coarse-grained density n_r vary as a function of scale r and of compressibility C through the combination C/r^κ (κ ≈ 0.5), corresponding to the compressibility, coarse grained over a domain of scale r, averaged over Lagrangian trajectories.
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
Learning the dynamics of objects by optimal functional interpolation.
Ahn, Jong-Hoon; Kim, In Young
2012-09-01
Many areas of science and engineering rely on functional data and their numerical analysis. The need to analyze time-varying functional data raises the general problem of interpolation, that is, how to learn a smooth time evolution from a finite number of observations. Here, we introduce optimal functional interpolation (OFI), a numerical algorithm that interpolates functional data over time. Unlike the usual interpolation or learning algorithms, the OFI algorithm obeys the continuity equation, which describes the transport of some types of conserved quantities, and its implementation shows smooth, continuous flows of quantities. Without the need to take into account equations of motion such as the Navier-Stokes equation or the diffusion equation, OFI is capable of learning the dynamics of objects such as those represented by mass, image intensity, particle concentration, heat, spectral density, and probability density.
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing
2018-03-01
The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
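A one-dimensional analogue of the two-step method just described, assuming an illustrative 1/f power spectrum and an exponential target amplitude distribution: white Gaussian noise is first colored in the frequency domain, then pushed through a memoryless CDF transform.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 4096

# Step 1: impose the desired power spectral density on white Gaussian noise.
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n, d=1.0)
shaping = np.zeros_like(freqs)
shaping[1:] = freqs[1:] ** -1.0                  # illustrative 1/f spectrum
colored = np.fft.irfft(np.fft.rfft(white) * np.sqrt(shaping), n=n)
colored /= colored.std()                         # standardize the Gaussian field

# Step 2: memoryless transform to the target amplitude PDF via
# u = Phi(g) followed by the inverse CDF of the target distribution.
u = np.clip(stats.norm.cdf(colored), 1e-12, 1 - 1e-12)
x = stats.expon.ppf(u, scale=1.0)
print(x.mean(), x.std())                         # both ~1 for unit exponential
```

As the abstract notes, the nonlinear amplitude transform distorts the spectrum somewhat, which is why the authors describe the method as an engineering approach.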
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
One approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we apply to compute the pdf of our data. The project has two parts: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a ...
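A minimal example of the kernel density method mentioned above, using scipy's Gaussian KDE. The log-normal sample is a stand-in for received-intensity data (a common rough model for scintillation) and is an assumption here, not data from the report.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
intensity = rng.lognormal(mean=0.0, sigma=0.5, size=2000)  # stand-in data

kde = gaussian_kde(intensity)       # Gaussian kernels, rule-of-thumb bandwidth
grid = np.linspace(0.0, intensity.max(), 400)
pdf = kde(grid)
print(f"estimated pdf integrates to ~{np.trapz(pdf, grid):.3f}")
```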
NASA Astrophysics Data System (ADS)
Nironi, Chiara; Salizzoni, Pietro; Marro, Massimo; Mejean, Patrick; Grosjean, Nathalie; Soulhac, Lionel
2015-09-01
The prediction of the probability density function (PDF) of a pollutant concentration within atmospheric flows is of primary importance in estimating the hazard related to accidental releases of toxic or flammable substances and their effects on human health. This need motivates studies devoted to the characterization of concentration statistics of pollutants dispersion in the lower atmosphere, and their dependence on the parameters controlling their emissions. As is known from previous experimental results, concentration fluctuations are significantly influenced by the diameter of the source and its elevation. In this study, we aim to further investigate the dependence of the dispersion process on the source configuration, including source size, elevation and emission velocity. To that end we study experimentally the influence of these parameters on the statistics of the concentration of a passive scalar, measured at several distances downwind of the source. We analyze the spatial distribution of the first four moments of the concentration PDFs, with a focus on the variance, its dissipation and production and its spectral density. The information provided by the dataset, completed by estimates of the intermittency factors, allow us to discuss the role of the main mechanisms controlling the scalar dispersion and their link to the form of the PDF. The latter is shown to be very well approximated by a Gamma distribution, irrespective of the emission conditions and the distance from the source. Concentration measurements are complemented by a detailed description of the velocity statistics, including direct estimates of the Eulerian integral length scales from two-point correlations, a measurement that has been rarely presented to date.
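Because the concentration PDF is well approximated by a Gamma distribution, its full shape can be reconstructed from just the mean and variance by moment matching. A sketch of that reconstruction (the plume statistics below are illustrative, not the wind-tunnel values):

```python
from scipy.stats import gamma

def gamma_from_moments(mean_c, var_c):
    """Moment-matched Gamma model of a concentration PDF:
    shape k = mean^2 / var, scale theta = var / mean."""
    return gamma(a=mean_c**2 / var_c, scale=var_c / mean_c)

pdf = gamma_from_moments(mean_c=1.0, var_c=2.25)   # fluctuation intensity 1.5
print(f"P(C > 4 x mean) = {pdf.sf(4.0):.4f}")      # exceedance probability
```

Exceedance probabilities of this kind feed directly into the hazard estimates for toxic or flammable releases that motivate the study.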
Stochastic approach to the derivation of emission limits for wastewater treatment plants.
Stransky, D; Kabelkova, I; Bares, V
2009-01-01
A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation, with input data defined by probability density distributions, and is solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and P(tot) concentrations in both the study creek and the WWTP effluent follow log-normal probability distributions. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v) = 0.415-0.884). The selected value of the variation coefficient (c(v) = 0.420) affects the derived mean value (C(mean) = 0.13 mg/l) of the P(tot) EQS (C(90) = 0.2 mg/l). Even after the assumed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the calculated WWTP emission limits would be lower than the values achievable with the best available technology (BAT). Minimum dilution ratios for a meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are therefore discussed.
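A sketch of the Monte Carlo mixing computation described above, with log-normal inputs parameterized by mean and coefficient of variation. The creek concentration statistics echo values quoted in the abstract; the discharges and the effluent concentration are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

def lognorm_params(mean, cv):
    """Convert mean and coefficient of variation to log-space mu, sigma."""
    sigma2 = np.log(1.0 + cv**2)
    return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

mu_q, s_q = lognorm_params(0.50, 0.60)   # creek discharge, m^3/s (assumed)
mu_c, s_c = lognorm_params(0.13, 0.42)   # upstream P_tot, mg/L (cf. abstract)
Q_r = rng.lognormal(mu_q, s_q, n)
C_r = rng.lognormal(mu_c, s_c, n)
Q_w, C_w = 0.05, 1.0                     # WWTP flow and effluent P_tot (assumed)

# Mixing equation for the concentration downstream of the outfall.
C_mix = (Q_r * C_r + Q_w * C_w) / (Q_r + Q_w)
print(f"90th-percentile downstream P_tot: {np.quantile(C_mix, 0.9):.3f} mg/L")
```

An emission limit is then found by lowering C_w until the chosen percentile of C_mix meets the EQS.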
A k-omega multivariate beta PDF for supersonic turbulent combustion
NASA Technical Reports Server (NTRS)
Alexopoulos, G. A.; Baurle, R. A.; Hassan, H. A.
1993-01-01
In a recent attempt by the authors at predicting measurements in coaxial supersonic turbulent reacting mixing layers involving H2 and air, a number of discrepancies involving the concentrations and their variances were noted. The turbulence model employed was a one-equation model based on the turbulent kinetic energy. This required the specification of a length scale. In an attempt at detecting the cause of the discrepancy, a coupled k-omega joint probability density function (PDF) is employed in conjunction with a Navier-Stokes solver. The results show that improvements resulting from a k-omega model are quite modest.
Uranium distribution and 'excessive' U-He ages in iron meteoritic troilite
NASA Technical Reports Server (NTRS)
Fisher, D. E.
1985-01-01
Fission-track techniques were used to measure the uranium distribution in meteoritic troilite and graphite. The fission-track data showed a heterogeneous distribution of tracks, with a significant portion of the track density present in the form of uranium clusters at least 10 microns in size. The matrix containing the clusters was also heterogeneous in composition, with U concentrations of about 0.2-4.7 ppb. U/He ages could not be estimated on the basis of the heterogeneous U distributions, so previously reported estimates of U/He ages in the presolar range are probably invalid.
2008-05-01
Retroactive management efforts for current emergencies involving the covert release of chemical, biological and radiological (CBR) agents ... in the context of the source reconstruction problem (DRDC Suffield TR 2008-077). The relationships between various moments of ... were studied.
The origin of anomalous transport in porous media - is it possible to make a priori predictions?
NASA Astrophysics Data System (ADS)
Bijeljic, Branko; Blunt, Martin
2013-04-01
Despite the range of significant applications of flow and solute transport in porous rock, including contaminant migration in subsurface hydrology, geological storage of carbon dioxide, and tracer studies and miscible displacement in oil recovery, even the qualitative behavior in the subsurface is uncertain. The non-Fickian nature of dispersive processes in heterogeneous porous media has been demonstrated experimentally from pore to field scales. However, the exact relationship between structure, velocity field and transport has not been fully understood. Advances in X-ray imaging techniques have made it possible to accurately describe the structure of the pore space, helping predict flow and anomalous transport behaviour using direct simulation. This is demonstrated by simulating solute transport through 3D images of rock samples, with resolutions of a few microns, representing geological media of increasing pore-scale complexity: a sandpack, a sandstone, and a carbonate. A novel methodology is developed that predicts solute transport at the pore scale by using probability density functions of displacement (propagators) and the probability density function of transit time between the image voxels, and relates them to the probability density function of normalized local velocity. A key advantage is that full information on velocity and solute concentration is retained in the models. The methodology includes solving for Stokes flow with OpenFOAM, solving for advective transport with a novel streamline simulation method, and superimposing diffusive transport with the random-walk method. It is shown how the computed propagators for the beadpack, sandstone and carbonate depend on the spread in the velocity distribution. A narrow velocity distribution in the beadpack leads to the least anomalous behaviour, where the propagators rapidly become Gaussian; the wider velocity distribution in the sandstone gives rise to a small immobile concentration peak and a large secondary mobile peak moving at approximately the average flow speed; in the carbonate, with the widest velocity distribution, the stagnant concentration peak is persistent, while the emergence of a smaller secondary mobile peak is observed, leading to highly anomalous behavior. This defines the different generic nature of non-Fickian transport in the three media and quantifies the effect of pore structure on transport. Moreover, the propagators obtained by the model are in very good agreement with the propagators measured on beadpack, Bentheimer sandstone and Portland carbonate cores in nuclear magnetic resonance experiments. These findings demonstrate that it is possible to make a priori predictions of anomalous transport in porous media. The importance of these findings for transport in complex carbonate rock micro-CT images is discussed, classifying them in terms of the degree of anomalous transport, which can have an impact at the field scale. Extensions to reactive transport are also discussed.
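A much-reduced 1-D caricature of the transport step: particles are advected with velocities drawn from a broad distribution and perturbed by a random-walk diffusion term, and the propagator is read off as the displacement PDF at a fixed time. The lognormal velocity model stands in for the image-based Stokes field, and velocity correlation along trajectories is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps, dt, D = 20_000, 200, 1.0, 1e-3

x = np.zeros(n_particles)
for _ in range(n_steps):
    # Broad (lognormal) velocity spread mimics a heterogeneous pore space.
    v = rng.lognormal(mean=-1.0, sigma=1.2, size=n_particles)
    # Advection plus random-walk diffusion (Einstein step length).
    x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)

# The propagator is the probability density of displacements at fixed time.
hist, edges = np.histogram(x, bins=100, density=True)
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print(f"mean displacement {x.mean():.1f}, propagator peak near {peak:.1f}")
```

Widening sigma broadens the propagator and delays its convergence to a Gaussian, the qualitative signature that separates the beadpack from the carbonate above.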
Probability density and exceedance rate functions of locally Gaussian turbulence
NASA Technical Reports Server (NTRS)
Mark, W. D.
1989-01-01
A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.
A hybrid probabilistic/spectral model of scalar mixing
NASA Astrophysics Data System (ADS)
Vaithianathan, T.; Collins, Lance
2002-11-01
In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model to predict even the closed chemical source terms rests heavily on the mixing model. We will present a new closure to the mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent ``transfer'' while scalar exchanges between particles represent ``mixing.'' The equations governing the scalar concentrations at each wavenumber are derived from the eddy damped quasi-normal Markovian (or EDQNM) theory. The model correctly predicts the evolution of an initial double delta function PDF into a Gaussian as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts the scalar gradient distribution (which is available in this representation) approaches log normal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
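A compact sketch of the quasi-extinction calculation for the Ricker limit of the model: simulate a self-limited population with lognormal environmental noise and count the runs that fall below a threshold. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def quasi_extinction_prob(n0=50.0, r=0.3, K=100.0, sigma=0.3,
                          threshold=10.0, years=50, reps=10_000):
    """P(population drops below `threshold` within `years`) under a
    stochastic Ricker model: N' = N * exp(r * (1 - N/K) + eps)."""
    n = np.full(reps, n0)
    hit = np.zeros(reps, dtype=bool)
    for _ in range(years):
        n = n * np.exp(r * (1.0 - n / K) + rng.normal(0.0, sigma, reps))
        hit |= n < threshold
    return hit.mean()

print(f"P(quasi-extinction in 50 yr) = {quasi_extinction_prob():.3f}")
```

Replacing the constant noise term with a density-dependent epidemic hazard is what makes the disease model both more variable and more pessimistic than this Ricker baseline.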
Smart, Adam S; Tingley, Reid; Weeks, Andrew R; van Rooyen, Anthony R; McCarthy, Michael A
2015-10-01
Effective management of alien species requires detecting populations in the early stages of invasion. Environmental DNA (eDNA) sampling can detect aquatic species at relatively low densities, but few studies have directly compared detection probabilities of eDNA sampling with those of traditional sampling methods. We compare the ability of a traditional sampling technique (bottle trapping) and eDNA to detect a recently established invader, the smooth newt Lissotriton vulgaris vulgaris, at seven field sites in Melbourne, Australia. Over a four-month period, per-trap detection probabilities ranged from 0.01 to 0.26 among sites where L. v. vulgaris was detected, whereas per-sample eDNA estimates were much higher (0.29-1.0). Detection probabilities of both methods varied temporally (across days and months), but temporal variation appeared to be uncorrelated between methods. Only estimates of spatial variation were strongly correlated across the two sampling techniques. Environmental variables (water depth, rainfall, ambient temperature) were not clearly correlated with detection probabilities estimated via trapping, whereas eDNA detection probabilities were negatively correlated with water depth, possibly reflecting higher eDNA concentrations at lower water levels. Our findings demonstrate that eDNA sampling can be an order of magnitude more sensitive than traditional methods, and illustrate that traditional- and eDNA-based surveys can provide independent information on species distributions when occupancy surveys are conducted over short timescales.
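The practical consequence of a higher per-sample detection probability is fewer samples for the same confidence: with independent samples, the probability of at least one detection in n samples is 1 - (1 - p)^n. A short illustration using the per-trap and per-sample values quoted above:

```python
# Cumulative probability of >= 1 detection in n independent samples.
for p in (0.01, 0.26, 0.29, 1.0):   # per-trap and per-eDNA-sample extremes
    row = ", ".join(f"n={n}: {1 - (1 - p) ** n:.2f}" for n in (1, 5, 10))
    print(f"p = {p:.2f} -> {row}")
```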
Improving effectiveness of systematic conservation planning with density data.
Veloz, Samuel; Salas, Leonardo; Altman, Bob; Alexander, John; Jongsomjit, Dennis; Elliott, Nathan; Ballard, Grant
2015-08-01
Systematic conservation planning aims to design networks of protected areas that meet conservation goals across large landscapes. The optimal design of these conservation networks is most frequently based on the modeled habitat suitability or probability of occurrence of species, despite evidence that model predictions may not be highly correlated with species density. We hypothesized that conservation networks designed using species density distributions more efficiently conserve populations of all species considered than networks designed using probability of occurrence models. To test this hypothesis, we used the Zonation conservation prioritization algorithm to evaluate conservation network designs based on probability of occurrence versus density models for 26 land bird species in the U.S. Pacific Northwest. We assessed the efficacy of each conservation network based on predicted species densities and predicted species diversity. High-density model Zonation rankings protected more individuals per species when networks protected the highest priority 10-40% of the landscape. Compared with density-based models, the occurrence-based models protected more individuals in the lowest 50% priority areas of the landscape. The 2 approaches conserved species diversity in similar ways: predicted diversity was higher in higher priority locations in both conservation networks. We conclude that both density and probability of occurrence models can be useful for setting conservation priorities but that density-based models are best suited for identifying the highest priority areas. Developing methods to aggregate species count data from unrelated monitoring efforts and making these data widely available through ecoinformatics portals such as the Avian Knowledge Network will enable species count data to be more widely incorporated into systematic conservation planning efforts. © 2015, Society for Conservation Biology.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path-integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path-integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames, and the agreement between the two is good.
Continuous-time random-walk model for financial distributions
NASA Astrophysics Data System (ADS)
Masoliver, Jaume; Montero, Miquel; Weiss, George H.
2003-02-01
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known: the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-Deutsche mark futures exchange, finding good agreement between theory and the observed data.
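A minimal CTRW simulator in the spirit of this formalism: pausing times and jump magnitudes are drawn from the two auxiliary densities. The exponential waiting-time and Gaussian jump densities below are illustrative choices; the paper infers both densities from the data.

```python
import numpy as np

rng = np.random.default_rng(7)

def ctrw_path(t_max, mean_wait=1.0, jump_scale=0.01):
    """Continuous-time random walk: exponential pausing times between
    jumps, Gaussian jump magnitudes (both densities are assumptions)."""
    t, x, times, prices = 0.0, 0.0, [0.0], [0.0]
    while t < t_max:
        t += rng.exponential(mean_wait)   # pausing time between jumps
        x += rng.normal(0.0, jump_scale)  # magnitude of the price jump
        times.append(t)
        prices.append(x)
    return np.array(times), np.array(prices)

times, log_price = ctrw_path(t_max=1000.0)
print(f"{len(times)} ticks, terminal log-price {log_price[-1]:.4f}")
```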
Maret, Terry R.; MacCoy, Dorene E.
2002-01-01
As part of the U.S. Geological Survey's National Water Quality Assessment Program, fish assemblages, environmental variables, and associated mine densities were evaluated at 18 test and reference sites during the summer of 2000 in the Coeur d'Alene and St. Regis river basins in Idaho and Montana. Multimetric and multivariate analyses were used to examine patterns in fish assemblages and the associated environmental variables representing a gradient of mining intensity. The concentrations of cadmium (Cd), lead (Pb), and zinc (Zn) in water and streambed sediment found at test sites in watersheds where production mine densities were at least 0.2 mines/km2 (in a 500-m stream buffer) were significantly higher than the concentrations found at reference sites. Many of these metal concentrations exceeded Ambient Water Quality Criteria (AWQC) and the Canadian Probable Effect Level guidelines for streambed sediment. Regression analysis identified significant relationships between the production mine densities and the sum of Cd, Pb, and Zn concentrations in water and streambed sediment (r2 = 0.69 and 0.66, respectively; P < 0.01). Zinc was identified as the primary metal contaminant in both water and streambed sediment. Eighteen fish species in the families Salmonidae, Cottidae, Cyprinidae, Catostomidae, Centrarchidae, and Ictaluridae were collected. Principal components analysis of 11 fish metrics identified two distinct groups of sites corresponding to the reference and test sites, predominantly on the basis of the inverse relationship between percent cottids and percent salmonids (r = -0.64; P < 0.05). Streams located downstream from the areas of intensive hard-rock mining in the Coeur d'Alene River basin contained fewer native fish and lower abundances as a result of metal enrichment, not physical habitat degradation. Typically, salmonids were the predominant species at test sites where Zn concentrations exceeded the acute AWQC. Cottids were absent at these sites, which suggests that they are more severely affected by elevated metals than are salmonids.
ERIC Educational Resources Information Center
Storkel, Holly L.; Lee, Su-Yeon
2011-01-01
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighbourhood density, the number of phonologically similar words, in lexical acquisition. Two-word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability…
Influence of Phonotactic Probability/Neighbourhood Density on Lexical Learning in Late Talkers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Schwartz, Richard G.; Shafer, Valerie L.; Marton, Klara
2013-01-01
Background: Toddlers who are late talkers demonstrate delays in phonological and lexical skills. However, the influence of phonological factors on lexical acquisition in toddlers who are late talkers has not been examined directly. Aims: To examine the influence of phonotactic probability/neighbourhood density on word learning in toddlers who were…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, R.G.; Hill, D.E.; Sharp, R.R. Jr.
1978-05-01
During the summer of 1976, 1336 water and 1251 sediment samples were collected for the Los Alamos Scientific Laboratory (LASL) from 1356 streams and small lakes or ponds within the Shishmaref, Kotzebue, Selawik, and western portion of the Shungnak NTMS quadrangles in western Alaska. Both a water and a sediment sample were generally obtained from each location, at a nominal location density of one per 23 km². Total uranium was measured in waters by fluorometry, and in sediments and a few waters by delayed neutron counting at LASL. Uranium concentrations in waters have a mean of 0.31 ppb and a maximum of 9.23 ppb, and sediments exhibit a mean of 3.44 ppm and a maximum of 37.7 ppm. A large number of high uranium concentrations occur in both water and sediment samples collected in the Selawik Hills. At least two locations within the Selawik Hills appear favorable for further investigation of possible uranium mineralization. A cluster of high-uranium sediments seen in the Waring Mountains is probably derived from a Lower Cretaceous conglomerate unit that is associated with known airborne radiometric anomalies. Apparently less favorable areas for further investigation of possible uranium mineralization are also located in the Waring Mountains and Kiana Hills. Additional samples were collected within the Shungnak quadrangle to increase the sampling density used elsewhere in the area to about one location per 11 km² (double density). Contoured plots of uranium concentrations for both waters and sediments were prepared for all double-density sample locations, and then for the even-numbered and odd-numbered locations separately. These plots indicate that the HSSR sampling density of one location per 23 km² used in lowland areas of Alaska provides essentially the same definition of relative areal uranium distributions in waters and sediments as seen when the density is doubled. These plots also indicate that regional distribution patterns for uranium are well defined without selective sampling of geologic units.
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
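The multiplicative-update scheme described here is in the same family as the Wang-Landau algorithm. Below is a toy Wang-Landau sketch on a 1-D Ising ring, not the authors' landscape code; the annealing schedule is simplified (no histogram-flatness check).

```python
import numpy as np

rng = np.random.default_rng(8)

L = 12
spins = rng.choice([-1, 1], size=L)
energy = lambda s: -int(np.sum(s * np.roll(s, 1)))          # 1-D Ising ring
levels = {e: i for i, e in enumerate(range(-L, L + 1, 4))}  # reachable energies
log_g = np.zeros(len(levels))    # running estimate of log density of states
log_f = 1.0                      # multiplicative update factor, log scale
E = energy(spins)
for sweep in range(200_000):
    i = rng.integers(L)
    spins[i] *= -1
    E_new = energy(spins)
    # Accept with probability min(1, g(E_old) / g(E_new)).
    if np.log(rng.random()) < log_g[levels[E]] - log_g[levels[E_new]]:
        E = E_new
    else:
        spins[i] *= -1           # reject: undo the flip
    log_g[levels[E]] += log_f    # multiplicative update of g at current E
    if (sweep + 1) % 50_000 == 0:
        log_f /= 2.0             # anneal the update factor
print(log_g - log_g.min())       # relative log density of states
```

The estimated g(E) then enters the partition function, from which equilibrium occupation (and hence quench) probabilities can be computed.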
Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that using the protocols in this study eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
Serum homocysteine levels in patients with probable vascular dementia.
Alajbegović, Salem; Lepara, Orhan; Hadžović-Džuvo, Almira; Mutevelić-Turković, Alma; Alajbegović, Lejla; Zaćiragić, Asija; Avdagić, Nesina; Valjevac, Amina; Babić, Nermina; Dervišević, Amela
2017-08-01
Aim To investigate total homocysteine (tHcy) serum concentration in patients with probable vascular dementia (VD) and in age-matched controls, as well as to determine an association between tHcy serum concentration and cognitive impairment in patients with probable VD. Methods Serum concentration of tHcy was determined by the Fluorescence Polarization Immunoassay on the AxSYM System. Cognitive impairment was tested by the Mini Mental Status Examination (MMSE) score. Body mass index (BMI) was calculated for each subject included in the study. Results Age, systolic, diastolic blood pressure and BMI did not differ significantly between the two groups. Mean serum tHcy concentration in the control group of subjects was 13.35 µmol/L, while in patients with probable VD it was significantly higher, 19.45 µmol/L (p=0.002). A negative but insignificant association between serum tHcy concentration and cognitive impairment in patients with probable VD was found. Conclusion Increased tHcy concentration in patients with probable VD suggests a possible independent role of Hcy in the pathogenesis of VD. Copyright © by the Medical Association of Zenica-Doboj Canton.
NASA Astrophysics Data System (ADS)
Piotrowska, M. J.; Bodnar, M.
2018-01-01
We present a generalisation of the mathematical models describing the interactions between the immune system and tumour cells which takes into account distributed time delays. For the analytical study we do not assume any particular form of the stimulus function describing the immune system's reaction to the presence of tumour cells; we only postulate its general properties. We analyse basic mathematical properties of the considered model, such as the existence and uniqueness of solutions. Next, we discuss the existence of stationary solutions and analytically investigate their stability depending on the forms of the considered probability densities, that is, Erlang, triangular, and uniform probability densities, separated or not from zero. Particular instability results are obtained for a general type of probability density. Our results are compared with those for the model with discrete delays known from the literature. In addition, for each considered type of probability density, the model is fitted to the experimental data for mouse B-cell lymphoma, showing mean square errors at the same comparable level. For the estimated sets of parameters we discuss the possibility of stabilising the tumour dormant steady state. Instability of this steady state results in uncontrolled tumour growth. In order to perform numerical simulations, following the idea of the linear chain trick, we derive numerical procedures that allow us to solve systems with the considered probability densities using standard algorithms for ordinary differential equations or differential equations with discrete delays.
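The linear chain trick mentioned at the end replaces an Erlang-distributed delay with a cascade of linear ODE compartments, so a standard ODE solver suffices. A generic sketch with scipy follows; the decaying input u(t) is illustrative, not the tumour-immune system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear chain trick: feeding u(t) through k linear compartments of rate a
# makes the last compartment equal the convolution of u with an
# Erlang(k, a) probability density.
k, a = 3, 0.5

def rhs(t, y):
    u = np.exp(-0.1 * t)              # illustrative input signal
    dy = np.empty(k)
    dy[0] = a * (u - y[0])
    for i in range(1, k):
        dy[i] = a * (y[i - 1] - y[i])
    return dy

sol = solve_ivp(rhs, (0.0, 40.0), np.zeros(k), max_step=0.1)
print(f"Erlang-delayed input at t = 40: {sol.y[-1, -1]:.4f}")
```

With this substitution, a delay model with an Erlang kernel reduces to an ordinary ODE system, which is exactly why standard ODE algorithms can be used.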
MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-21
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes
NASA Astrophysics Data System (ADS)
Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen
2014-03-01
Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.
Competition between harvester ants and rodents in the cold desert
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landeen, D.S.; Jorgensen, C.D.; Smith, H.D.
1979-09-30
Local distribution patterns of three rodent species (Perognathus parvus, Peromyscus maniculatus, Reithrodontomys megalotis) were studied in areas of high and low densities of harvester ants (Pogonomyrmex owyheei) in Raft River Valley, Idaho. Numbers of rodents were greatest in areas of high ant-density during May, but partially reduced in August, whereas the trend was reversed in areas of low ant-density. Seed abundance was probably not the factor limiting changes in rodent populations, because seed densities of annual plants were always greater in areas of high ant-density. Differences in seasonal population distributions of rodents between areas of high and low ant-densities were probably due to interactions of seed availability, rodent energetics, and predation.
Weiss, D J; Geor, R J; Burger, K
1996-06-01
To determine whether furosemide treatment altered the blood flow properties and serum and RBC electrolyte concentrations of Thoroughbreds during submaximal treadmill exercise. Thoroughbreds were subjected to submaximal treadmill exercise with and without treatment with furosemide (1 mg/kg of body weight, IV). 5 healthy Thoroughbreds that had raced within the past year and had no history of exercise-induced pulmonary hemorrhage. Venous blood samples were obtained before exercise, at treadmill speeds of 9 and 13 m/s, and 10 minutes after exercise, and hemorheologic and electrolyte test results were determined. Hemorheologic changes 60 minutes after furosemide administration included increased PCV, plasma total protein concentration, whole blood viscosity, mean RBC volume, and RBC potassium concentration, and decreased serum potassium concentration, serum chloride concentration, and RBC chloride concentration. Furosemide treatment attenuated the exercise-associated changes in RBC size, serum sodium concentration, serum potassium concentration, RBC potassium and chloride concentrations, and RBC density; exacerbated exercise-associated increases in whole blood viscosity; and had no effect on RBC filterability. The hemorheologic effects of furosemide probably occurred secondary to total body and transmembrane fluid and electrolyte fluxes and would not improve blood flow properties. The beneficial effects of furosemide treatment in reducing the severity of bleeding in horses with exercise-induced pulmonary hemorrhage cannot be explained by improved blood flow properties.
Ground-water flow and water quality in the sand aquifer of Long Beach Peninsula, Washington
Thomas, B.E.
1995-01-01
This report describes a study that was undertaken to improve the understanding of ground-water flow and water quality in the coastal sand aquifer of the Long Beach Peninsula of southwestern Washington. Data collected for the study include monthly water levels at 103 wells and 28 surface-water sites during 1992, and water-quality samples from about 40 wells and 13 surface-water sites in February and July 1992. Ground water generally flows at right angles to a ground-water divide along the spine of the low-lying peninsula. Historical water-level data indicate that there was no long-term decline in the water table from 1974 to 1992. The water quality of shallow ground water was generally good, with a few local problems. Natural concentrations of dissolved iron were higher than 0.3 milligrams per liter in about one-third of the samples. The dissolved-solids concentrations were generally low, with a range of 56 to 218 milligrams per liter. No appreciable amount of seawater has intruded into the sand aquifer; chloride concentrations were low, with a maximum of 52 milligrams per liter. Agricultural activities do not appear to have significantly affected the quality of ground water. Concentrations of nutrients were low in the cranberry-growing areas, and selected pesticides were not found above the analytical detection limits. Septic systems probably caused an increase in the concentration of nitrate from medians of less than 0.05 milligrams per liter in areas of low population density to 0.74 milligrams per liter in areas of high population density.
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
Redundancy and reduction: Speakers manage syntactic information density
Jaeger, T. Florian
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
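The information-theoretic quantity underlying Uniform Information Density is simple to compute: a unit's information is its surprisal given the preceding context. A toy calculation follows, with invented probabilities rather than corpus estimates.

```python
# Surprisal: information carried by a unit = -log2 P(unit | context).
# A predictable complement-clause onset carries little information, so
# "that" is omissible; an unpredictable one spikes, so "that" spreads
# the information out. Probabilities here are illustrative only.
import math

def surprisal(p):
    return -math.log2(p)

print(surprisal(0.8))   # ~0.32 bits: highly predictable continuation
print(surprisal(0.05))  # ~4.32 bits: information spike in the signal
```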
The difference between two random mixed quantum states: exact and asymptotic spectral analysis
NASA Astrophysics Data System (ADS)
Mejía, José; Zapata, Camilo; Botero, Alonso
2017-01-01
We investigate the spectral statistics of the difference of two density matrices, each of which is independently obtained by partially tracing a random bipartite pure quantum state. We first show how a closed-form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix, which is straightforward to compute. Subsequently, we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density (AED) of the difference matrix ensemble, and using Carlson’s theorem, we obtain an expression for its absolute moments. These results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures; in particular, we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance.
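For readers who want to see the ensemble concretely, the following numpy sketch samples the induced measure (the partial trace of a Haar-random bipartite pure state, generated as a normalized Ginibre product), forms the difference matrix, and estimates its spectrum and typical trace distance. Dimensions and sample counts are arbitrary choices.

```python
import numpy as np

def random_mixed_state(n, m, rng):
    # Partial trace of a Haar-random pure state on C^n (x) C^m equals
    # a normalized Ginibre product G G† (the induced measure).
    g = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(0)
n, m, samples = 32, 32, 200
trace_dist = 0.0
for _ in range(samples):
    delta = random_mixed_state(n, m, rng) - random_mixed_state(n, m, rng)
    lam = np.linalg.eigvalsh(delta)        # spectrum of the difference
    trace_dist += 0.5 * np.abs(lam).sum() / samples
print("mean trace distance:", trace_dist)
```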
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
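A hedged sketch of the workflow described: fit several candidate probability density functions to raw habitat-use data by maximum likelihood and choose among them with an information criterion. The depth data below are simulated, not the Klamath River measurements, and the candidate set is an arbitrary choice.

```python
# Fit candidate PDFs to depth-use data and rank them by AIC.
import numpy as np
from scipy import stats

depths = np.random.default_rng(1).gamma(shape=3.0, scale=0.3, size=200)

candidates = {"gamma": stats.gamma,
              "lognorm": stats.lognorm,
              "weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(depths, floc=0.0)        # ML estimates, location fixed
    loglik = dist.logpdf(depths, *params).sum()
    n_free = len(params) - 1                   # loc was held fixed
    aic = 2 * n_free - 2 * loglik
    print(f"{name:8s} AIC = {aic:.1f}")
```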
Mitra, Rajib; Jordan, Michael I.; Dunbrack, Roland L.
2010-01-01
Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: 1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); 2) filtering of suspect conformations and outliers using B-factors or other features; 3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); 4) the method used for determining probability densities ranging from simple histograms to modern nonparametric density estimation; and 5) whether they include nearest neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp. PMID:20442867
Subfield-specific loss of hippocampal N-acetyl aspartate in temporal lobe epilepsy.
Vielhaber, Stefan; Niessen, Heiko G; Debska-Vielhaber, Grazyna; Kudin, Alexei P; Wellmer, Jörg; Kaufmann, Jörn; Schönfeld, Mircea Ariel; Fendrich, Robert; Willker, Wieland; Leibfritz, Dieter; Schramm, Johannes; Elger, Christian E; Heinze, Hans-Jochen; Kunz, Wolfram S
2008-01-01
In patients with mesial temporal lobe epilepsy (MTLE) it remains an unresolved issue whether the interictal decrease in N-acetyl aspartate (NAA) detected by proton magnetic resonance spectroscopy ((1)H-MRS) reflects the epilepsy-associated loss of hippocampal pyramidal neurons or metabolic dysfunction. To address this problem, we applied high-resolution (1)H-MRS at 14.1 Tesla to measure metabolite concentrations in ex vivo tissue slices from three hippocampal subfields (CA1, CA3, dentate gyrus) as well as from the parahippocampal region of 12 patients with MTLE. In contrast to four patients with lesion-caused MTLE, we found a large variance of NAA concentrations in the individual hippocampal regions of patients with Ammon's horn sclerosis (AHS). Specifically, in subfield CA3 of AHS patients, despite a moderate preservation of neuronal cell densities, the concentration of NAA was significantly lowered, while the concentrations of lactate, glucose, and succinate were elevated. We suggest that these subfield-specific alterations of metabolite concentrations in AHS are very likely caused by impairment of mitochondrial function and not related to neuronal cell loss. A subfield-specific impairment of energy metabolism is the probable cause for lowered NAA concentrations in sclerotic hippocampi of MTLE patients.
NASA Astrophysics Data System (ADS)
Angraini, Lily Maysari; Suparmi; Variani, Viska Inda
2010-12-01
SUSY quantum mechanics can be applied to solve the Schrodinger equation for high-dimensional systems that can be reduced into one-dimensional systems and represented in lowering and raising operators. The lowering and raising operators can be obtained using the relationship between the original Hamiltonian equation and the (super)potential equation. In this paper SUSY quantum mechanics is used as a method to obtain the wave function and the energy levels of the modified Poschl-Teller potential. The graphs of the wave function and the probability density are simulated using the Delphi 7.0 programming language. Finally, the expectation value of a quantum mechanical operator can be calculated analytically using the integral form or the probability density graph produced by the program.
Monte-Carlo computation of turbulent premixed methane/air ignition
NASA Astrophysics Data System (ADS)
Carmen, Christina Lieselotte
The present work describes the results obtained by a time dependent numerical technique that simulates the early flame development of a spark-ignited premixed, lean, gaseous methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. The algorithm described is based upon a sub-model developed by an international automobile research and manufacturing corporation in order to analyze turbulence conditions within internal combustion engines. Several developments and modifications to the original algorithm have been implemented including a revised chemical reaction scheme and the evaluation and calculation of various turbulent flame properties. Solution of the complete set of Navier-Stokes governing equations for a turbulent reactive flow is avoided by reducing the equations to a single transport equation. The transport equation is derived from the Navier-Stokes equations for a joint probability density function, thus requiring no closure assumptions for the Reynolds stresses. A Monte-Carlo method is also utilized to simulate phenomena represented by the probability density function transport equation by use of the method of fractional steps. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on the evaluation of the three primary parameters that influence the initial flame kernel growth: the ignition system characteristics, the mixture composition, and the nature of the flow field. Efforts are concentrated on the effects of moderate to intense turbulence on flames within the distributed reaction zone. Results are presented for lean conditions with the fuel equivalence ratio varying from 0.6 to 0.9. The present computational results, including flame regime analysis and the calculation of various flame speeds, provide excellent agreement with results obtained by other experimental and numerical researchers.
Clam density and scaup feeding behavior in San Pablo Bay, California
Poulton, Victoria K.; Lovvorn, James R.; Takekawa, John Y.
2002-01-01
San Pablo Bay, in northern San Francisco Bay, California, is an important wintering area for Greater (Aythya marila) and Lesser Scaup (A. affinis). We investigated variation in foraging behavior of scaup among five sites in San Pablo Bay, and whether such variation was related to densities of their main potential prey, the clams Potamocorbula amurensis and Macoma balthica. Time-activity budgets showed that scaup spent most of their time sleeping at some sites, and both sleeping and feeding at other sites, with females feeding more than males. In the first half of the observation period (12 January–5 February 2000), percent time spent feeding increased with increasing density of P. amurensis, but decreased with increasing density of M. balthica (diet studies have shown that scaup ate mostly P. amurensis and little or no M. balthica). Densities of M. balthica stayed about the same between fall and spring benthic samples, while densities of P. amurensis declined dramatically at most sites. In the second half of the observation period (7 February–3 March 2000), percent time feeding was no longer strongly related to P. amurensis densities, and dive durations increased by 14%. These changes probably reflected declines of P. amurensis, perhaps as affected by scaup predation. The large area of potential feeding habitat, and alternative prey elsewhere in the estuary, might have resulted in the low correlations between scaup behavior and prey densities in San Pablo Bay. These low correlations made it difficult to identify specific areas of prey concentrations important to scaup.
Fuentes, Cesar Mario; Hernandez, Vladimir
2013-01-01
The aim of this study is to examine the spatial distribution of pedestrian injury collisions and analyse the environmental (social and physical) risk factors in Ciudad Juarez, Mexico. More specifically, this study investigates the influence of land use, density, traffic and socio-economic characteristics. This cross-sectional study is based on pedestrian injury collision data collected by the Municipal Transit Police during 2008-2009. This research presents an analysis of vehicle-pedestrian collisions and their spatial risk determinants using mixed methods that included (1) spatial/geographical information systems (GIS) analysis of pedestrian collision data and (2) ordinary least squares (OLS) regression analysis to explain the density of pedestrian collisions. In our model, we found a higher probability of pedestrian collisions in census tracts with high population and employment density, a large concentration of commercial/retail land uses, and more older people (65 and over). Interventions to alleviate this situation include transportation planning, such as decentralisation of the municipal transport system, and investment in road infrastructure: greater density of traffic lights, pedestrian crossings, improved road design, and clearer lane demarcation. In addition, land-use planning interventions should be implemented in commercial/retail areas, in particular separating pedestrian and vehicular spaces.
Tidal tomography constrains Earth's deep-mantle buoyancy.
Lau, Harriet C P; Mitrovica, Jerry X; Davis, James L; Tromp, Jeroen; Yang, Hsin-Ying; Al-Attar, David
2017-11-15
Earth's body tide, also known as the solid Earth tide (the displacement of the solid Earth's surface caused by gravitational forces from the Moon and the Sun), is sensitive to the density of the two Large Low Shear Velocity Provinces (LLSVPs) beneath Africa and the Pacific. These massive regions extend approximately 1,000 kilometres upward from the base of the mantle and their buoyancy remains actively debated within the geophysical community. Here we use tidal tomography to constrain Earth's deep-mantle buoyancy derived from Global Positioning System (GPS)-based measurements of semi-diurnal body tide deformation. Using a probabilistic approach, we show that across the bottom two-thirds of the two LLSVPs the mean density is about 0.5 per cent higher than the average mantle density across this depth range (that is, its mean buoyancy is minus 0.5 per cent), although this anomaly may be concentrated towards the very base of the mantle. We conclude that the buoyancy of these structures is dominated by the enrichment of high-density chemical components, probably related to subducted oceanic plates or primordial material associated with Earth's formation. Because the dynamics of the mantle is driven by density variations, our result has important dynamical implications for the stability of the LLSVPs and the long-term evolution of the Earth system.
Acoustic trapping of active matter
NASA Astrophysics Data System (ADS)
Takatori, Sho C.; de Dier, Raf; Vermant, Jan; Brady, John F.
2016-03-01
Confinement of living microorganisms and self-propelled particles by an external trap provides a means of analysing the motion and behaviour of active systems. Developing a tweezer with a trapping radius large compared with the swimmers' size and run length has been an experimental challenge, as standard optical traps are too weak. Here we report the novel use of an acoustic tweezer to confine self-propelled particles in two dimensions over distances large compared with the swimmers' run length. We develop a near-harmonic trap to demonstrate the crossover from weak confinement, where the probability density is Boltzmann-like, to strong confinement, where the density is peaked along the perimeter. At high concentrations the swimmers crystallize into a close-packed structure, which subsequently `explodes' as a travelling wave when the tweezer is turned off. The swimmers' confined motion provides a measurement of the swim pressure, a unique mechanical pressure exerted by self-propelled bodies.
Acoustic trapping of active matter
Takatori, Sho C.; De Dier, Raf; Vermant, Jan; Brady, John F.
2016-01-01
Confinement of living microorganisms and self-propelled particles by an external trap provides a means of analysing the motion and behaviour of active systems. Developing a tweezer with a trapping radius large compared with the swimmers' size and run length has been an experimental challenge, as standard optical traps are too weak. Here we report the novel use of an acoustic tweezer to confine self-propelled particles in two dimensions over distances large compared with the swimmers' run length. We develop a near-harmonic trap to demonstrate the crossover from weak confinement, where the probability density is Boltzmann-like, to strong confinement, where the density is peaked along the perimeter. At high concentrations the swimmers crystallize into a close-packed structure, which subsequently ‘explodes' as a travelling wave when the tweezer is turned off. The swimmers' confined motion provides a measurement of the swim pressure, a unique mechanical pressure exerted by self-propelled bodies. PMID:26961816
Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d+1)$-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.
Biochemical and hematologic changes after short-term space flight
NASA Technical Reports Server (NTRS)
Leach, Carolyn S.
1991-01-01
Clinical laboratory data from blood samples obtained from astronauts before and after 28 flights (average duration = 6 days) of the Space Shuttle were analyzed by the paired t-test and the Wilcoxon signed-rank test and compared with data from the Skylab flights (duration = 28, 56, and 84 days). Angiotensin I and aldosterone were elevated immediately after short-term space flights, but the response of angiotensin I was delayed after Skylab flights. Serum calcium was not elevated after Shuttle flights, but magnesium and uric acid decreased after both Shuttle and Skylab. Creatine phosphokinase in serum was reduced after Shuttle but not Skylab flights, probably because exercises to prevent deconditioning were not performed on the Shuttle. Total cholesterol was unchanged after Shuttle flights, but low density lipoprotein cholesterol increased and high density lipoprotein cholesterol decreased. The concentration of red blood cells was elevated after Shuttle flights and reduced after Skylab flights.
Pope, L.M.
1995-01-01
The 15,300-square-mile lower Kansas River Basin in Kansas and Nebraska was investigated, as one of the pilot study units of the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program, to address a variety of water-quality issues. This report describes the sanitary quality of streams as defined by concentrations of dissolved oxygen (DO) and densities of a fecal-indicator bacterium, Escherichia coli (E. coli). Sixty-one surface-water sampling sites were chosen for this investigation. Synoptic surveys were conducted in July 1988, November 1988, March 1989, and May 1989 to define the concentrations and the diel and seasonal variability in concentrations of DO. Synoptic surveys were conducted in July 1988 and July 1989 to define densities of E. coli. Ancillary data included measurements of specific conductance, pH, water temperature, barometric pressure, and concentrations of nutrients, total organic carbon, chlorophyll, and suspended sediment. Surveys were conducted during stable-flow, dry-weather conditions. During the July 1988 synoptic survey for DO, emphasis was placed on the measurement of DO under maximum stress (high water temperature, low streamflow, and predawn conditions). Of 31 sites sampled just before dawn, 5 had DO concentrations less than the 5.0-milligrams-per-liter, 1-day minimum warmwater criterion for early life stages as established by the U.S. Environmental Protection Agency (USEPA), and 4 of these 5 sites had concentrations less than the 3.0-milligrams-per-liter criterion for all other life stages. For all four synoptic surveys, a total of 392 DO determinations were made, and 9 (2.3 percent) were less than water-quality criteria. Concentrations of DO less than water-quality criteria in the study unit are localized occurrences and do not reflect regional differences in DO. The most severe DO deficiencies are the result of discharges from wastewater-treatment plants into small tributary streams with inadequate assimilation capacity. Algal respiratory demand in combination with reduced physical reaeration associated with extreme low flow probably also contributes to temporary, localized deficiencies. Densities of E. coli were determined at 57 surface-water sampling sites during the synoptic survey in July 1988. Results indicate large regional differences in E. coli densities within the study unit. Densities of E. coli in water at 19 sites in the Big Blue River subbasin, exclusive of the Little Blue River subbasin, ranged from 120 to 260,000 col/100 mL (colonies per 100 milliliters), with a median density of 2,400 col/100 mL. Densities at the 11 sites in the Little Blue River subbasin ranged from 100 to 30,000 col/100 mL, with a median density of 940 col/100 mL. Densities at the 27 sites in the Kansas River subbasin ranged from less than 1 to 1,000 col/100 mL, with a median density of 88 col/100 mL. Densities at 84 percent of the sites in the Big Blue River subbasin exceeded the USEPA E. coli criterion of 576 col/100 mL for infrequently used full-body contact recreation, and 53 percent exceeded the 2,000 col/100 mL fecal-coliform criterion for uses other than full-body contact established by the Kansas Department of Health and Environment. Densities at 73 percent of the sites in the Little Blue River subbasin exceeded the 576 col/100 mL E. coli criterion, and 36 percent exceeded the 2,000 col/100 mL fecal-coliform criterion. Densities at one of the sites in the Kansas River subbasin exceeded the 576 col/100 mL E. coli criterion, and none exceeded the 2,000 col/100 mL fecal-coliform criterion. The largest densities of E. coli in the study unit were the result of discharges from municipal wastewater-treatment plants; however, densities in the Big Blue and Little Blue River subbasins were generally larger than those in the Kansas River subbasin. These larger densities in the Big Blue and Little Blue River subbasins may have been the result of irrigation return flow from fields where manure was used as a soil amendment.
Chiaramonte, Thalita; Tizei, Luiz H G; Ugarte, Daniel; Cotta, Mônica A
2011-05-11
InP nanowire polytypic growth was thoroughly studied using electron microscopy techniques as a function of the In precursor flow. The dominant InP crystal structure is wurtzite, and growth parameters determine the density of stacking faults (SF) and zinc blende segments along the nanowires (NWs). Our results show that SF formation in InP NWs cannot be univocally attributed to the droplet supersaturation, if we assume this variable to be proportional to the ex situ In atomic concentration at the catalyst particle. An imbalance between this concentration and the axial growth rate was detected for growth conditions associated with larger SF densities along the NWs, suggesting a different route of precursor incorporation at the triple phase line in that case. The formation of SFs can be further enhanced by varying the In supply during growth and is suppressed for small diameter NWs grown under the same conditions. We attribute the observed behaviors to kinetically driven roughening of the semiconductor/metal interface. The consequent deformation of the triple phase line increases the probability of a phase change at the growth interface in an effort to reach local minima of system interface and surface energy.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. In order to make the calculation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to make the initial partition. Then we use the Wishart probability density function for the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used for the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
Malik, Karan; Arora, Gurpreet; Singh, Inderbir; Arora, Sandeep
2011-01-01
Aim: Orodispersible tablets, also known as fast dissolving tablets, disintegrate instantaneously within the mouth and thus can be consumed without water. The present study was aimed to formulate orodispersible tablets of nimesulide by using Lallemantia reylenne seeds as a natural superdisintegrant. Materials and Methods: Powdered Lallemantia seeds were characterized for powder flow properties (bulk density, tapped density, Carr's consolidation index, Hausner ratio, angle of repose), swelling index, viscosity, pH, and loss on drying. The prepared tablets were evaluated for different tablet parametric tests, wetting time, water absorption ratio, effective pore radius, porosity, packing fraction, in vitro and in vivo disintegration time, in vitro dissolution and stability studies. Results and Discussion: Increase in Lallemantia reylenne concentration had an appreciable effect on tablet hardness and friability, which clearly indicated the binding potential of the seeds. Water absorption ratio increased with increase in Lallemantia reylenne concentration from batch A1 to A4. Water uptake coupled with natural polymer swelling could be the most probable mechanism for the concentration-dependent reduction in disintegration time by the Lallemantia reylenne seeds. Porosity of the formulated tablets was found to increase from batch A1-A4. The in vitro disintegration results were in line with the in vivo disintegration results. Conclusion: It could be concluded that Lallemantia reylenne seeds could be used as a natural superdisintegrant in the formulation of orodispersible tablets. PMID:23071942
The maximum entropy method of moments and Bayesian probability theory
NASA Astrophysics Data System (ADS)
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI, many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue, rather there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
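A compact way to see the classical method of moments at work: the maximum entropy density given the first K moments has the form p(x) ∝ exp(-Σ_k λ_k x^k), and the Lagrange multipliers are found by minimizing a convex dual. The sketch below uses an equally spaced grid and synthetic Gaussian moments; it illustrates only the classical method, not the Bayesian extension the abstract proposes.

```python
# Maximum entropy method of moments via the convex dual:
# minimize  F(lam) = log Z(lam) + lam . mu,  where Z normalizes
# p(x) ∝ exp(-sum_k lam_k x^k); the gradient is mu - E[x^k].
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
mu = np.array([0.0, 1.0, 0.0, 3.0])   # first four moments of N(0, 1)
K = len(mu)
powers = np.vstack([x ** (k + 1) for k in range(K)])

def dual(lam):
    logp = -(lam @ powers)
    logz = logsumexp(logp) + np.log(dx)   # stable log of the normalizer
    return logz + lam @ mu

res = minimize(dual, np.zeros(K), method="BFGS")
p = np.exp(-(res.x @ powers))
p /= np.trapz(p, x)
print("recovered variance:", np.trapz(p * x ** 2, x))   # ~1.0
```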
Car accidents induced by a bottleneck
NASA Astrophysics Data System (ADS)
Marzoug, Rachid; Echab, Hicham; Ez-Zahraouy, Hamid
2017-12-01
Based on the Nagel-Schreckenberg (NS) model, we study the probability of car accidents (Pac) at the entrance of the merging section of two roads (i.e., a junction). The simulation results show that the existence of non-cooperative drivers plays a chief role, increasing the risk of collisions at intermediate and high densities. Moreover, the impact of the speed limit in the bottleneck (Vb) on the probability Pac is also studied. This impact depends strongly on the density: increasing Vb enhances Pac at low densities, whereas it increases road safety at high densities. The phase diagram of the system is also constructed.
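A much simplified sketch of the underlying machinery: a Nagel-Schreckenberg ring with the dangerous-situation criterion commonly used in the cellular-automaton accident literature (small gap, leader moving then abruptly stopped). It reproduces neither the paper's junction geometry nor its non-cooperative drivers; densities and parameters are illustrative.

```python
# NaSch update (accelerate, decelerate to gap, random brake, move) on a
# ring, counting dangerous situations per car per step.
import numpy as np

rng = np.random.default_rng(2)
L, vmax, p_brake, steps, density = 300, 5, 0.3, 2000, 0.15
n = int(density * L)
pos = np.sort(rng.choice(L, n, replace=False))
vel = np.zeros(n, dtype=int)

dangerous, observed = 0, 0
for t in range(steps):
    gaps = (np.roll(pos, -1) - pos - 1) % L   # cells to the car ahead
    v_old = vel.copy()
    vel = np.minimum(vel + 1, vmax)           # acceleration
    vel = np.minimum(vel, gaps)               # avoid running into leader
    vel[rng.random(n) < p_brake] -= 1         # random braking
    vel = np.maximum(vel, 0)
    if t > steps // 2:                        # skip the transient
        leader_was_moving = np.roll(v_old, -1) > 0
        leader_now_stopped = np.roll(vel, -1) == 0
        dangerous += np.sum((gaps <= vmax)
                            & leader_was_moving & leader_now_stopped)
        observed += n
    pos = (pos + vel) % L

print("P_ac per car per step:", dangerous / observed)
```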
Modeling the Effect of Density-Dependent Chemical Interference Upon Seed Germination
Sinkkonen, Aki
2005-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:19330163
Modeling the Effect of Density-Dependent Chemical Interference upon Seed Germination
Sinkkonen, Aki
2006-01-01
A mathematical model is presented to estimate the effects of phytochemicals on seed germination. According to the model, phytochemicals tend to prevent germination at low seed densities. The model predicts that at high seed densities they may increase the probability of seed germination and the number of germinating seeds. Hence, the effects are reminiscent of the density-dependent effects of allelochemicals on plant growth, but the involved variables are germination probability and seedling number. The results imply that it should be possible to bypass inhibitory effects of allelopathy in certain agricultural practices and to increase the efficiency of nature conservation in several plant communities. PMID:18648596
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Myers, Samuel M.; Modine, Normand A.
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
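The scattering (transfer-matrix) computation referred to here is easy to sketch for a piecewise-constant potential profile: propagate plane-wave amplitudes across each region and match wavefunction continuity at each interface. The code below works in units hbar = m = 1 with an illustrative rectangular barrier, not a device profile.

```python
import numpy as np

def transmission(E, V, widths):
    # local wavevectors k_j = sqrt(2(E - V_j)); imaginary inside barriers
    k = np.sqrt(2.0 * (E - np.asarray(V, dtype=complex)))
    M = np.eye(2, dtype=complex)
    for j in range(len(V) - 1):
        # free propagation across region j, then continuity of psi, psi'
        P = np.diag([np.exp(1j * k[j] * widths[j]),
                     np.exp(-1j * k[j] * widths[j])])
        r = k[j] / k[j + 1]
        S = 0.5 * np.array([[1 + r, 1 - r],
                            [1 - r, 1 + r]])
        M = S @ P @ M
    t = np.linalg.det(M) / M[1, 1]     # transmitted amplitude (B_N = 0)
    return (k[-1].real / k[0].real) * abs(t) ** 2

# illustrative single rectangular barrier: lead | barrier | lead
print(transmission(E=0.5, V=[0.0, 1.0, 0.0], widths=[0.0, 2.0, 0.0]))
```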
Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.
2013-01-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
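The "ecological distance" device can be prototyped directly: build a resistance-weighted grid graph, compute least-cost distances with Dijkstra's algorithm, and plug them into a half-normal encounter model. The raster, trap location, and parameters below are all hypothetical.

```python
# Least-cost-path encounter probability on a resistance raster.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

resistance = np.ones((20, 20))
resistance[8:12, :15] = 10.0          # a hypothetical movement barrier
ny, nx = resistance.shape

# 4-neighbour graph; edge cost = mean resistance of the two cells
g = lil_matrix((ny * nx, ny * nx))
for y in range(ny):
    for x in range(nx):
        i = y * nx + x
        for dy, dx in ((0, 1), (1, 0)):
            yy, xx = y + dy, x + dx
            if yy < ny and xx < nx:
                j = yy * nx + xx
                g[i, j] = g[j, i] = 0.5 * (resistance[y, x]
                                           + resistance[yy, xx])

trap = 5 * nx + 5                     # trap cell index
d_lcp = dijkstra(g.tocsr(), directed=False, indices=trap)

p0, sigma = 0.6, 4.0                  # baseline detection, spatial scale
p_enc = p0 * np.exp(-(d_lcp ** 2) / (2 * sigma ** 2))  # half-normal model
print(p_enc.reshape(ny, nx)[15, 15])  # encounter prob. across the barrier
```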
Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A
2013-02-01
Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
Weekly variability of surface CO concentrations in Moscow
NASA Astrophysics Data System (ADS)
Sitnov, S. A.; Adiks, T. G.
2014-03-01
Based on observations of carbon monoxide (CO) concentrations at three Mosekomonitoring stations, we have analyzed the weekly cycle of CO in the surface air of Moscow in 2004-2007. At all stations the minimum long-term mean daily CO values are observed on Sunday. The weekly cycle of CO more clearly manifests itself at the center of Moscow and becomes less clear closer to the outskirts. We have analyzed the reproducibility of the weekly cycle of CO from one year to another, the seasonal dependence, its specific features at different times of day, and the changes in the diurnal cycle of CO during the week. The factors responsible for specific features of the evolution of surface CO concentrations at different observation stations have been analyzed. The empirical probability density functions of CO concentrations on weekdays and at weekends are presented. The regularity of the occurrence of the weekend effect in CO has been investigated and the possible reasons for breaks in weekly cycles have been analyzed. The Kruskal-Wallis test was used to study the statistical significance of intraweek differences in surface CO contents.
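A small illustration of the test named here: Kruskal-Wallis applied to daily mean CO grouped by day of week. The data are synthetic, with a weekend dip imposed, purely to show the mechanics.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(3)
weekday = [rng.lognormal(0.0, 0.3, 100) for _ in range(5)]
weekend = [rng.lognormal(-0.2, 0.3, 100) for _ in range(2)]  # Sunday dip
stat, p = kruskal(*(weekday + weekend))
print(f"H = {stat:.2f}, p = {p:.3g}")   # small p: day-of-week effect
```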
ERIC Educational Resources Information Center
Rispens, Judith; Baker, Anne; Duinmeijer, Iris
2015-01-01
Purpose: The effects of neighborhood density (ND) and lexical frequency on word recognition and the effects of phonotactic probability (PP) on nonword repetition (NWR) were examined to gain insight into processing at the lexical and sublexical levels in typically developing (TD) children and children with developmental language problems. Method:…
Toward a microscopic model of bidirectional synaptic plasticity
Castellani, Gastone C.; Bazzani, Armando; Cooper, Leon N
2009-01-01
We show that a 2-step phospho/dephosphorylation cycle for the α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptor (AMPAR), as used in in vivo learning experiments to assess long-term potentiation (LTP) induction and establishment, exhibits bistability for a wide range of parameters, consistent with values derived from biological literature. The AMPAR model we propose, hence, is a candidate for memory storage and switching behavior at a molecular-microscopic level. Furthermore, the stochastic formulation of the deterministic model leads to a mesoscopic interpretation by considering the effect of enzymatic fluctuations on the Michaelis–Menten average dynamics. Under suitable hypotheses, this leads to a stochastic dynamical system with multiplicative noise whose probability density evolves according to a Fokker–Planck equation in the Stratonovich sense. In this approach, the probability density associated with each AMPAR phosphorylation state allows one to compute the probability of any concentration value, whereas the Michaelis–Menten equations consider the average concentration dynamics. We show that bistable dynamics are robust for multiplicative stochastic perturbations and that the presence of both noise and bistability simulates LTP and long-term depression (LTD) behavior. Interestingly, the LTP part of this model has been experimentally verified as a result of in vivo, one-trial inhibitory avoidance learning protocol in rats, that produced the same changes in hippocampal AMPARs phosphorylation state as observed with in vitro induction of LTP with high-frequency stimulation (HFS). A consequence of this model is the possibility of characterizing a molecular switch with a defined biochemical set of reactions showing bistability and bidirectionality. Thus, this 3-enzymes-based biophysical model can predict LTP as well as LTD and their transition rates. The theoretical results can be, in principle, validated by in vitro and in vivo experiments, such as fluorescence measurements and electrophysiological recordings at multiple scales, from molecules to neurons. A further consequence is that the bistable regime occurs only within certain parametric windows, which may simulate a “history-dependent threshold”. This effect might be related to the Bienenstock–Cooper–Munro theory of synaptic plasticity. PMID:19666550
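The mesoscopic picture described (multiplicative noise interpreted in the Stratonovich sense) can be simulated with the Euler-Heun scheme. The bistable drift and noise amplitude below are generic illustrations, not the paper's AMPAR phosphorylation kinetics; the histogram approximates the stationary probability density of the resulting Fokker-Planck equation.

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: x * (1.0 - x) * (x - 0.3)   # bistable drift: wells at 0 and 1
g = lambda x: 0.15 * x * (1.0 - x)        # multiplicative "enzymatic" noise

dt, steps, x = 0.01, 100_000, 0.5
traj = np.empty(steps)
for i in range(steps):
    dw = rng.normal(0.0, np.sqrt(dt))
    x_pred = x + f(x) * dt + g(x) * dw               # Euler predictor
    x += f(x) * dt + 0.5 * (g(x) + g(x_pred)) * dw   # Heun (Stratonovich)
    traj[i] = x

hist, edges = np.histogram(traj, bins=60, density=True)
print("mode of stationary density near:", edges[np.argmax(hist)])
```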
Unification of field theory and maximum entropy methods for learning probability densities
NASA Astrophysics Data System (ADS)
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. The flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e. the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
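For context, the binomial arithmetic behind the classical 29-of-29 criterion is a two-liner: if the true POD at the demonstrated size were only 0.90, the chance of detecting all 29 flaws is 0.90^29 ≈ 0.047 < 0.05, which is exactly the "90% POD with 95% confidence" statement. The same function gives a probability-of-passing-demonstration (PPD) for any hypothetical true POD.

```python
from math import comb

def prob_pass(pod, n=29, allowed_misses=0):
    # Probability that a technique with true POD `pod` passes a
    # demonstration allowing at most `allowed_misses` misses in n trials.
    return sum(comb(n, k) * (1 - pod) ** k * pod ** (n - k)
               for k in range(allowed_misses + 1))

print(f"pass chance if true POD were only 0.90: {0.90 ** 29:.4f}")  # 0.0471
print(f"PPD for a technique with true POD 0.98: {prob_pass(0.98):.3f}")
```

Note the second line: even a technique with a genuine 98% POD passes the strict 29-of-29 test only about 56% of the time, which is why the paper treats PPD as a quantity worth optimizing.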
Lepara, Orhan; Alajbegovic, Azra; Zaciragic, Asija; Nakas-Icindic, Emina; Valjevac, Amina; Lepara, Dzenana; Hadzovic-Dzuvo, Almira; Fajkic, Almir; Kulo, Aida; Sofic, Emin
2009-12-01
Elevated plasma homocysteine (Hcy) levels have been associated with Alzheimer's disease (AD) and cognitive impairment. Studies have shown that Hcy may have direct and indirect neurotoxicity effects. The aim of the study was to investigate serum Hcy concentration in patients with probable AD compared with age-matched controls and to determine whether there was an association between serum Hcy and C-reactive protein concentrations in patients with probable AD. We also aimed to determine whether there was an association between serum tHcy concentration and cognitive impairment in patients with probable AD. Serum concentration of total Hcy was determined by the fluorescence polarization immunoassay on the AxSYM system, and serum C-reactive protein (CRP) concentration was determined by means of particle-enhanced immunonephelometry with the use of the BN II analyzer. Cognitive impairment was tested by the MMSE score. Body mass index (BMI) was calculated for each subject included in the study. Age, systolic and diastolic blood pressure and BMI did not differ significantly between the two groups. Mean serum tHcy concentration in the control group of subjects was 12.60 μmol/L, while in patients with probable AD the mean serum tHcy concentration was significantly higher, 16.15 μmol/L (p < 0.01). A significant negative association between serum tHcy concentration and cognitive impairment tested by the MMSE score in patients with probable AD was determined (r = -0.61634; p < 0.001). A positive, although not significant, correlation between CRP and serum tHcy concentrations in patients with AD was observed. The increased tHcy concentration in patients with probable AD, and the established negative correlation between serum tHcy concentration and cognitive impairment tested by the MMSE score in the same group of patients, suggest a possible independent role of Hcy in the pathogenesis of AD and the cognitive impairment associated with this disease.
Phonotactics, Neighborhood Activation, and Lexical Access for Spoken Words
Vitevitch, Michael S.; Luce, Paul A.; Pisoni, David B.; Auer, Edward T.
2012-01-01
Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed. PMID:10433774
Fractional Brownian motion with a reflecting wall
NASA Astrophysics Data System (ADS)
Wada, Alexander H. O.; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ⟨x²⟩ ~ t^α, the interplay between the geometric confinement and the long-time correlations of the increments leads to a strongly non-Gaussian probability density function with a power-law singularity at the wall.
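A direct Monte Carlo realization of this setup: generate fractional Gaussian noise by Cholesky factorization of its autocovariance, accumulate the walk, and reflect each step at the wall. Parameters are illustrative; production runs would use a faster exact generator such as Davies-Harte.

```python
import numpy as np

def fgn_factor(n, hurst):
    # Cholesky factor of the fGn covariance,
    # gamma(k) = 0.5(|k-1|^2H + (k+1)^2H - 2 k^2H)
    k = np.arange(n, dtype=float)
    gamma = 0.5 * (np.abs(k - 1) ** (2 * hurst)
                   + (k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst))
    idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return np.linalg.cholesky(gamma[idx])

rng = np.random.default_rng(5)
n, hurst, walkers = 256, 0.7, 300
L = fgn_factor(n, hurst)

msd_final = 0.0
for _ in range(walkers):
    xi = L @ rng.normal(size=n)     # one correlated increment sequence
    x = 0.0
    for step in xi:
        x = abs(x + step)           # hard reflection at the wall x = 0
    msd_final += x * x / walkers
print(msd_final, "vs free-fBm scaling t^(2H) =", n ** (2 * hurst))
```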
NASA Astrophysics Data System (ADS)
Schröter, Sandra; Gibson, Andrew R.; Kushner, Mark J.; Gans, Timo; O'Connell, Deborah
2018-01-01
The quantification and control of reactive species (RS) in atmospheric pressure plasmas (APPs) is of great interest for their technological applications, in particular in biomedicine. Of key importance in simulating the densities of these species are fundamental data on their production and destruction. In particular, data concerning particle-surface reaction probabilities in APPs are scarce, with most of these probabilities measured in low-pressure systems. In this work, the role of surface reaction probabilities, γ, of reactive neutral species (H, O and OH) on neutral particle densities in a He-H2O radio-frequency micro APP jet (COST-μ APPJ) are investigated using a global model. It is found that the choice of γ, particularly for low-mass species having large diffusivities, such as H, can change computed species densities significantly. The importance of γ even at elevated pressures offers potential for tailoring the RS composition of atmospheric pressure microplasmas by choosing different wall materials or plasma geometries.
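To make the role of γ concrete, a commonly used global-model expression combines diffusion to the wall with the surface loss probability, k_wall = 1 / (Λ²/D + 2V(2-γ)/(A·v̄·γ)). The geometry, temperature, and diffusivity below are illustrative order-of-magnitude assumptions, not the COST-jet's actual transport data.

```python
# Wall-loss rate of a neutral species vs. surface reaction probability.
import numpy as np

kB, amu = 1.380649e-23, 1.66053907e-27
T, m = 345.0, 1.0 * amu                  # gas temperature, H-atom mass
R, Lz = 0.5e-3, 30e-3                    # assumed channel radius, length (m)
V = np.pi * R**2 * Lz                    # volume
A = 2 * np.pi * R * Lz + 2 * np.pi * R**2            # wall area
Lam = 1.0 / np.sqrt((np.pi / Lz) ** 2 + (2.405 / R) ** 2)  # diffusion length
D = 1.5e-4                               # assumed H diffusivity in He (m^2/s)
vbar = np.sqrt(8 * kB * T / (np.pi * m))  # mean thermal speed

for gamma in (0.001, 0.01, 0.1, 1.0):
    k_wall = 1.0 / (Lam**2 / D + 2 * V * (2 - gamma) / (A * vbar * gamma))
    print(f"gamma = {gamma:5.3f}  ->  k_wall = {k_wall:10.1f} 1/s")
```

The sweep makes the abstract's point numerically: for light, fast-diffusing species like H, k_wall remains sensitive to γ over several decades before the diffusion term saturates it.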
Effects of heterogeneous traffic with speed limit zone on the car accidents
NASA Astrophysics Data System (ADS)
Marzoug, R.; Lakouari, N.; Bentaleb, K.; Ez-Zahraouy, H.; Benyoussef, A.
2016-06-01
Using the extended Nagel-Schreckenberg (NS) model, we numerically study the impact of heterogeneous traffic with a speed limit zone (SLZ) on the probability of occurrence of car accidents (Pac). An SLZ has an important effect in heterogeneous traffic, particularly in the mixed-velocity case. In the deterministic case, the SLZ leads to the appearance of car accidents even at low densities; in this region, Pac increases with the fraction of fast vehicles (Ff). In the nondeterministic case, the SLZ weakens the effect of the braking probability Pb at low densities. Furthermore, the impact of multi-SLZ on the probability Pac is also studied. In contrast with the homogeneous case [X. Li, H. Kuang, Y. Fan and G. Zhang, Int. J. Mod. Phys. C 25 (2014) 1450036], it is found that at low densities the probability Pac without SLZ (n = 0) is lower than Pac with multi-SLZ (n > 0). However, the existence of multi-SLZ in the road decreases the risk of collision in the congestion phase.
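For readers unfamiliar with the underlying cellular automaton, here is a minimal NS loop with one speed-limit zone, in Python. The "dangerous situation" counter (an intended move prevented only by braking) is an illustrative proxy for Pac rather than the exact accident condition of the extended model, and all parameters are invented:

```python
# Hedged sketch: Nagel-Schreckenberg ring road with two vehicle classes and
# one speed-limit zone (SLZ). Parameters and the danger criterion are
# illustrative, not the paper's.
import random

L, VMAX_FAST, VMAX_SLOW, VZONE = 200, 5, 3, 2
SLZ = range(80, 120)                 # cells where the speed limit applies
P_BRAKE, DENSITY, F_FAST, STEPS = 0.1, 0.15, 0.5, 2000

cells = random.sample(range(L), int(DENSITY * L))
pos = sorted(cells)
fast = [random.random() < F_FAST for _ in pos]
vel = [0] * len(pos)
danger = 0

for _ in range(STEPS):
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    for k, i in enumerate(order):
        j = order[(k + 1) % len(order)]          # car ahead on the ring
        gap = (pos[j] - pos[i] - 1) % L
        vmax = VMAX_FAST if fast[i] else VMAX_SLOW
        if pos[i] in SLZ:
            vmax = min(vmax, VZONE)              # SLZ caps the velocity
        v = min(vel[i] + 1, vmax)                # 1) acceleration
        if v > gap:                              # 2) braking forced by gap
            danger += 1                          #    proxy for an accident risk
            v = gap
        if v > 0 and random.random() < P_BRAKE:  # 3) random braking
            v -= 1
        vel[i] = v
    for i in range(len(pos)):                    # 4) parallel movement
        pos[i] = (pos[i] + vel[i]) % L

print("dangerous situations per car per step:", danger / (len(pos) * STEPS))
```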
Maximum likelihood density modification by pattern recognition of structural motifs
Terwilliger, Thomas C.
2004-04-13
An electron density for a crystallographic structure having protein regions and solvent regions is improved by maximizing the log likelihood of a set of structure factors {F_h} using a local log-likelihood function: LL(x) = ln[p(ρ(x)|PROT)p_PROT(x) + p(ρ(x)|SOLV)p_SOLV(x) + p(ρ(x)|H)p_H(x)], where p_PROT(x) is the probability that x is in the protein region, p(ρ(x)|PROT) is the conditional probability for ρ(x) given that x is in the protein region, and p_SOLV(x) and p(ρ(x)|SOLV) are the corresponding quantities for the solvent region; p_H(x) refers to the probability that there is a structural motif at a known location, with a known orientation, in the vicinity of the point x; and p(ρ(x)|H) is the probability distribution for electron density at this point given that the structural motif actually is present. One appropriate structural motif is a helical structure within the crystallographic structure.
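The patent's scoring rule is just the log of a mixture density, which the following sketch evaluates pointwise in Python. The Gaussian conditional densities and their parameters are stand-ins for the expected density distributions of the method; only the structure of LL(x) follows the formula above:

```python
# Hedged sketch: pointwise evaluation of the three-component local
# log-likelihood LL(x). Gaussian forms and all parameters are illustrative.
import math

def gauss(rho, mu, sigma):
    return math.exp(-0.5 * ((rho - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def local_log_likelihood(rho, p_prot, p_solv, p_motif=0.0,
                         prot=(0.4, 0.3), solv=(0.0, 0.1), motif=(0.8, 0.2)):
    """LL(x) = ln[ p(rho|PROT) p_PROT + p(rho|SOLV) p_SOLV + p(rho|H) p_H ]."""
    total = (gauss(rho, *prot) * p_prot
             + gauss(rho, *solv) * p_solv
             + gauss(rho, *motif) * p_motif)
    return math.log(total)

# A flat density value near zero scores higher when x is likely solvent:
print(local_log_likelihood(rho=0.05, p_prot=0.3, p_solv=0.7))
print(local_log_likelihood(rho=0.05, p_prot=0.9, p_solv=0.1))
```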
Method for removing atomic-model bias in macromolecular crystallography
Terwilliger, Thomas C [Santa Fe, NM
2006-08-01
Structure factor bias in an electron density map for an unknown crystallographic structure is minimized by using information in a first electron density map to elicit expected structure factor information. Observed structure factor amplitudes are combined with a starting set of crystallographic phases to form a first set of structure factors. A first electron density map is then derived and features of the first electron density map are identified to obtain expected distributions of electron density. Crystallographic phase probability distributions are established for possible crystallographic phases of reflection k, and the process is repeated as k is indexed through all of the plurality of reflections. An updated electron density map is derived from the crystallographic phase probability distributions for each one of the reflections. The entire process is then iterated to obtain a final set of crystallographic phases with minimum bias from known electron density maps.
An empirical probability model of detecting species at low densities.
Delaney, David G; Leung, Brian
2010-06-01
False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
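The detection-curve construction is ordinary logistic regression, sketched below in Python on synthetic survey data; the covariates (search effort and target density), the coefficients, and the sklearn fitting choice are assumptions of this illustration, not the study's data:

```python
# Hedged sketch: a detection curve relating sampling intensity and target
# density to detection probability, fitted by logistic regression on
# synthetic surveys. All values are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
effort = rng.uniform(1, 30, n)         # search time per plot (min)
density = rng.uniform(0.1, 5.0, n)     # targets per m^2
logit = -4.0 + 0.12 * effort + 1.1 * density
detected = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([effort, density])
model = LogisticRegression().fit(X, detected)

# Probability of a false negative for a 10-minute search at low density:
p_detect = model.predict_proba([[10.0, 0.2]])[0, 1]
print("P(false negative) =", 1.0 - p_detect)
```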
A PDF closure model for compressible turbulent chemically reacting flows
NASA Technical Reports Server (NTRS)
Kollmann, W.
1992-01-01
The objective of the proposed research project was the analysis of single-point closures based on probability density functions (pdf) and characteristic functions and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary-layer type and stagnation-point flows with and without chemical reactions were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, J.; Gardner, B.; Lucherini, M.
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, Juan; Gardner, Beth; Lucherini, Mauro
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km² for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km² in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
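The spatial encounter model referred to in both versions of this record is commonly the half-normal form, sketched here in Python with a baseline detection similar in magnitude to the reported 0.07; the trap grid, σ, and the activity-center location are invented:

```python
# Hedged sketch: the half-normal encounter model typical of spatial
# capture-recapture. g0 echoes the reported baseline (~0.07); the grid,
# sigma, and activity center are illustrative assumptions.
import math

def encounter_prob(trap_xy, activity_center, g0=0.07, sigma=1.5):
    """Per-occasion detection probability declining with distance (km)."""
    d2 = ((trap_xy[0] - activity_center[0]) ** 2
          + (trap_xy[1] - activity_center[1]) ** 2)
    return g0 * math.exp(-d2 / (2.0 * sigma ** 2))

traps = [(float(x), float(y)) for x in range(5) for y in range(5)]
cat_home = (2.3, 3.1)                  # hypothetical activity center
p_miss_everywhere = 1.0
for t in traps:
    p_miss_everywhere *= 1.0 - encounter_prob(t, cat_home)
print("P(detected at least once per occasion):", 1.0 - p_miss_everywhere)
```

In a real analysis g0, σ, and density are fitted jointly (in the paper, within a Bayesian framework) rather than fixed as here.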
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2007-02-02
glass. Pgha: probability of a person being in the glass hazard area. Phit: probability of hit. Phit(f): probability of hit for fatality. Phit(maj): probability of hit for major injury. Phit(min): probability of hit for minor injury. Pi: debris probability densities at the ES. PMaj(pair): individual … combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit(f), major injury …
Electrofishing capture probability of smallmouth bass in streams
Dauwalter, D.C.; Fisher, W.L.
2007-01-01
Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
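The three estimator families the tool implements can be mimicked in a few lines. A sketch in Python (rather than the tool's own R), using scipy's inverse-gamma for the parametric MLE; the synthetic areas are illustrative, and the Double Pareto model is omitted because scipy has no built-in for it:

```python
# Hedged sketch: histogram, kernel, and maximum-likelihood estimates of a
# landslide-area density. Synthetic data; Inverse Gamma only (no Double
# Pareto in scipy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
areas = stats.invgamma.rvs(a=1.4, scale=800.0, size=500, random_state=rng)  # m^2

# (i) Histogram Density Estimation on log-spaced bins
bins = np.logspace(np.log10(areas.min()), np.log10(areas.max()), 30)
hde, edges = np.histogram(areas, bins=bins, density=True)

# (ii) Kernel Density Estimation on log-areas, back-transformed by 1/x
kde = stats.gaussian_kde(np.log(areas))

# (iii) Maximum Likelihood fit of the Inverse Gamma model
a_hat, loc_hat, scale_hat = stats.invgamma.fit(areas, floc=0.0)

x = np.logspace(2, 5, 5)
print("MLE invgamma shape/scale:", a_hat, scale_hat)
print("KDE density at x:", kde(np.log(x)) / x)   # change of variables
```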
Ciffroy, Philippe; Charlatchka, Rayna; Ferreira, Daniel; Marang, Laura
2013-07-01
The biotic ligand model (BLM) theoretically enables the derivation of environmental quality standards that are based on true bioavailable fractions of metals. Several physicochemical variables (especially pH, major cations, dissolved organic carbon, and dissolved metal concentrations) must, however, be assigned to run the BLM, but they are highly variable in time and space in natural systems. This article describes probabilistic approaches for integrating such variability during the derivation of risk indexes. To describe each variable using a probability density function (PDF), several methods were combined to 1) treat censored data (i.e., data below the limit of detection), 2) incorporate the uncertainty of the solid-to-liquid partitioning of metals, and 3) detect outliers. From a probabilistic perspective, 2 alternative approaches that are based on log-normal and Γ distributions were tested to estimate the probability of the predicted environmental concentration (PEC) exceeding the predicted non-effect concentration (PNEC), i.e., p(PEC/PNEC>1). The probabilistic approach was tested on 4 real-case studies based on Cu-related data collected from stations on the Loire and Moselle rivers. The approach described in this article is based on BLM tools that are freely available for end-users (i.e., the Bio-Met software) and on accessible statistical data treatments. This approach could be used by stakeholders who are involved in risk assessments of metals for improving site-specific studies. Copyright © 2013 SETAC.
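The final risk index is a simple Monte Carlo exceedance probability once the PDFs are chosen. A minimal sketch under assumed lognormal PEC and PNEC parameters (invented values, not the Loire and Moselle data):

```python
# Hedged sketch: propagating lognormal variability in exposure (PEC) and
# effect (PNEC) concentrations to p(PEC/PNEC > 1). Parameters are invented;
# the paper also tests Gamma distributions and treats censored data first.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

pec = rng.lognormal(mean=np.log(1.2), sigma=0.6, size=n)   # dissolved Cu, ug/L
pnec = rng.lognormal(mean=np.log(4.0), sigma=0.4, size=n)  # BLM-based PNEC, ug/L

risk = np.mean(pec / pnec > 1.0)
print(f"p(PEC/PNEC > 1) ~ {risk:.4f}")
```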
Factors controlling volatile organic compounds in dwellings in Melbourne, Australia.
Cheng, M; Galbally, I E; Molloy, S B; Selleck, P W; Keywood, M D; Lawson, S J; Powell, J C; Gillett, R W; Dunne, E
2016-04-01
This study characterized indoor volatile organic compounds (VOCs) and investigated the effects of the dwelling characteristics, building materials, occupant activities, and environmental conditions on indoor VOC concentrations in 40 dwellings located in Melbourne, Australia, in 2008 and 2009. A total of 97 VOCs were identified. Nine VOCs, n-butane, 2-methylbutane, toluene, formaldehyde, acetaldehyde, d-limonene, ethanol, 2-propanol, and acetic acid, accounted for 68% of the sum of all VOCs. The median indoor concentrations of all VOCs were greater than those measured outdoors. The occupant density was positively associated with indoor VOC concentrations via occupant activities, including respiration and combustion. Terpenes were associated with the use of household cleaning and laundry products. A petroleum-like indoor VOC signature of alkanes and aromatics was associated with the proximity of major roads. The indoor VOC concentrations were negatively correlated (P < 0.05) with ventilation. Levels of VOCs in these Australian dwellings were lower than those from previous studies in North America and Europe, probably due to a combination of an ongoing temporal decrease in indoor VOC concentrations and the leakier nature of Australian dwellings. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Integrating resource selection information with spatial capture--recapture
Royle, J. Andrew; Chandler, Richard B.; Sun, Catherine C.; Fuller, Angela K.
2013-01-01
4. Finally, we find that SCR models using standard symmetric and stationary encounter probability models may not fully explain variation in encounter probability due to space usage, and therefore produce biased estimates of density when animal space usage is related to resource selection. Consequently, it is important that space usage be taken into consideration, if possible, in studies focused on estimating density using capture–recapture methods.
ERIC Educational Resources Information Center
Gray, Shelley; Pittman, Andrea; Weinhold, Juliet
2014-01-01
Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…
ERIC Educational Resources Information Center
van der Kleij, Sanne W.; Rispens, Judith E.; Scheper, Annette R.
2016-01-01
The aim of this study was to examine the influence of phonotactic probability (PP) and neighbourhood density (ND) on pseudoword learning in 17 Dutch-speaking typically developing children (mean age 7;2). They were familiarized with 16 one-syllable pseudowords varying in PP (high vs low) and ND (high vs low) via a storytelling procedure. The…
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function and give an elegant application of the monotone form of l'Hospital's rule in probability theory.
Assessing hypotheses about nesting site occupancy dynamics
Bled, Florent; Royle, J. Andrew; Cam, Emmanuelle
2011-01-01
Hypotheses about habitat selection developed in the evolutionary ecology framework assume that individuals, under some conditions, select breeding habitat based on expected fitness in different habitat. The relationship between habitat quality and fitness may be reflected by breeding success of individuals, which may in turn be used to assess habitat quality. Habitat quality may also be assessed via local density: if high-quality sites are preferentially used, high density may reflect high-quality habitat. Here we assessed whether site occupancy dynamics vary with site surrogates for habitat quality. We modeled nest site use probability in a seabird subcolony (the Black-legged Kittiwake, Rissa tridactyla) over a 20-year period. We estimated site persistence (an occupied site remains occupied from time t to t + 1) and colonization through two subprocesses: first colonization (site creation at the timescale of the study) and recolonization (a site is colonized again after being deserted). Our model explicitly incorporated site-specific and neighboring breeding success and conspecific density in the neighborhood. Our results provided evidence that reproductively "successful" sites have a higher persistence probability than "unsuccessful" ones. Analyses of site fidelity in marked birds and of survival probability showed that high site persistence predominantly reflects site fidelity, not immediate colonization by new owners after emigration or death of previous owners. There is a negative quadratic relationship between local density and persistence probability. First colonization probability decreases with density, whereas recolonization probability is constant. This highlights the importance of distinguishing initial colonization and recolonization to understand site occupancy. All dynamics varied positively with neighboring breeding success. We found evidence of a positive interaction between site-specific and neighboring breeding success. We addressed local population dynamics using a site occupancy approach integrating hypotheses developed in behavioral ecology to account for individual decisions. This allows development of models of population and metapopulation dynamics that explicitly incorporate ecological and evolutionary processes.
Data-driven probability concentration and sampling on manifold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2016-09-15
A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.
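Two of the four ingredients, diffusion-maps coordinates and kernel-density resampling, can be sketched compactly. The Python below builds diffusion coordinates for a noisy circle and resamples new points in those coordinates; the bandwidths and dataset are illustrative, and the paper's MCMC generator and the lifting back to the ambient space are omitted:

```python
# Hedged sketch: diffusion-maps coordinates plus Gaussian KDE resampling
# concentrated on the learned manifold. Dataset and bandwidths are invented.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 200)
data = (np.column_stack([np.cos(theta), np.sin(theta)])
        + 0.05 * rng.standard_normal((200, 2)))

# Diffusion maps: Gaussian kernel, Markov normalization, spectral step
eps = 0.5
d2 = ((data[:, None, :] - data[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / eps)
P = K / K.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
psi = (vecs.real[:, order] * vals.real[order])[:, 1:3]  # first nontrivial coords

# KDE resampling in the reduced diffusion coordinates
new_psi = gaussian_kde(psi.T).resample(500, seed=3)
print("resampled shape:", new_psi.shape)      # (2, 500)
```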
Mercader, R J; Siegert, N W; McCullough, D G
2012-02-01
Emerald ash borer, Agrilus planipennis Fairmaire (Coleoptera: Buprestidae), a phloem-feeding pest of ash (Fraxinus spp.) trees native to Asia, was first discovered in North America in 2002. Since then, A. planipennis has been found in 15 states and two Canadian provinces and has killed tens of millions of ash trees. Understanding the probability of detecting and accurately delineating low density populations of A. planipennis is a key component of effective management strategies. Here we approach this issue by 1) quantifying the efficiency of sampling nongirdled ash trees to detect new infestations of A. planipennis under varying population densities and 2) evaluating the likelihood of accurately determining the localized spread of discrete A. planipennis infestations. To estimate the probability a sampled tree would be detected as infested across a gradient of A. planipennis densities, we used A. planipennis larval density estimates collected during intensive surveys conducted in three recently infested sites with known origins. Results indicated the probability of detecting low density populations by sampling nongirdled trees was very low, even when detection tools were assumed to have three-fold higher detection probabilities than nongirdled trees. Using these results and an A. planipennis spread model, we explored the expected accuracy with which the spatial extent of an A. planipennis population could be determined. Model simulations indicated a poor ability to delineate the extent of the distribution of localized A. planipennis populations, particularly when a small proportion of the population was assumed to have a higher propensity for dispersal.
On Schrödinger's bridge problem
NASA Astrophysics Data System (ADS)
Friedland, S.
2017-11-01
In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove, using Brouwer's fixed point theorem, that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix. This result proves the conjecture for two positive definite density matrices made by Georgiou and Pavon in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
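The matrix half of the result lends itself to a Sinkhorn-style fixed-point iteration. The sketch below is my own reading of the two scaling conditions (column sums equal to one, and B p = q), not code from the paper; for strictly positive matrices the alternation typically converges:

```python
# Hedged sketch: scale a positive matrix A so that B = diag(u) A diag(v)
# is column stochastic and satisfies B p = q. Sinkhorn-style alternation.
import numpy as np

def scale_to_bridge(A, p, q, iters=500):
    n = A.shape[0]
    u, v = np.ones(n), np.ones(n)
    for _ in range(iters):
        v = 1.0 / (A.T @ u)          # enforce column sums of B equal to 1
        u = q / (A @ (v * p))        # enforce B p = q
    return np.diag(u) @ A @ np.diag(v)

rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(4, 4))   # strictly positive matrix
p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.4, 0.3, 0.2, 0.1])

B = scale_to_bridge(A, p, q)
print("column sums:", B.sum(axis=0))     # ~1 (column stochastic)
print("B @ p:", B @ p)                   # ~q
```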
Stochastic analysis of concentration field in a wake region.
Yassin, Mohamed F; Elmi, Abdirashid A
2011-02-01
Identifying the geographic locations in urban areas from which air pollutants enter the atmosphere is among the most important information needed to develop effective mitigation strategies for pollution control. Stochastic analysis is a powerful tool for estimating concentration fluctuations in plume dispersion in the wake region around buildings, yet only a few studies have applied it to pollutant dispersion in urban areas. This study was designed to investigate the concentration fields in the wake region using an isolated building as the obstacle model. We measured concentration fluctuations at the centerline at various downwind distances from the source and at different heights, with a sampling frequency of 1 kHz. Concentration fields were analyzed stochastically using probability density functions (pdf). Stochastic analysis was performed on the concentration fluctuations and on the pdf of mean concentration, fluctuation intensity, and crosswind mean-plume dispersion. The pdf of the concentration fluctuation data showed significant non-Gaussian behavior. The lognormal distribution appeared to be the best fit to the shape of the concentration measured in the boundary layer. We observed that the plume dispersion pdf near the source was shorter than that far from the source. Our findings suggest that the use of stochastic techniques in complex building environments can be a powerful tool for understanding the distribution and location of air pollutants.
Density probability distribution functions of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2008-10-01
In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of the China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of different precision and their sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and a likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated by the Bayesian reverse interpolation framework. The results were compared between the Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences between single CPT samplings under a normal distribution and the simulated probability density curves based on maximum entropy theory were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.
NASA Astrophysics Data System (ADS)
Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.
2017-06-01
In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map for vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly of pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera, and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
Modeling molecular mixing in a spatially inhomogeneous turbulent flow
NASA Astrophysics Data System (ADS)
Meyer, Daniel W.; Deb, Rajdeep
2012-02-01
Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
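The IEM model mentioned here is a one-line relaxation of each particle's scalar toward the ensemble mean. A minimal particle-ensemble sketch in Python, with a conventional C_φ = 2 and invented scenario values; the IECM variant would replace the global mean with a velocity-conditioned mean:

```python
# Hedged sketch: the IEM (interaction by exchange with the mean) mixing
# model for a particle ensemble: d(phi_i)/dt = -(C_phi/2) omega (phi_i - <phi>).
# C_phi = 2 is a conventional choice; the three-stream setup is illustrative.
import numpy as np

def iem_step(phi, omega, dt, c_phi=2.0):
    """Relax every particle toward the ensemble mean (exact linear decay)."""
    mean = phi.mean()
    decay = np.exp(-0.5 * c_phi * omega * dt)
    return mean + (phi - mean) * decay

rng = np.random.default_rng(1)
phi = rng.choice([0.0, 0.5, 1.0], size=10_000)   # three-stream initial state
for _ in range(100):
    phi = iem_step(phi, omega=5.0, dt=0.01)
print("mean (conserved):", phi.mean(), " variance (decaying):", phi.var())
```

The sketch also shows why velocity conditioning matters: IEM drives every particle toward one global mean regardless of where it is in velocity space, which is the deficiency the IECM model and the model of Meyer address.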
Characteristics of white LED transmission through a smoke screen
NASA Astrophysics Data System (ADS)
Zheng, Yunfei; Yang, Aiying; Feng, Lihui; Guo, Peng
2018-01-01
The characteristics of white LED transmission through a smoke screen are critical for visible light communication through a smoke screen. Based on the Mie scattering theory, a Monte Carlo transmission model is established. A sampling model of the white LED is established from probability density functions of its measured spectrum and of the angular distribution of a Lambertian emitter. A sampling model of the smoke particle diameter is likewise built from its size distribution. We numerically simulate the influence of the smoke thickness, the smoke concentration, and the angle of irradiance of the white LED on its transmittance. We constructed a white LED smoke-transmission experiment system. The measured relationship between light transmittance and smoke concentration agreed with the simulated result, demonstrating the validity of the simulation model for the visible light transmission channel through a smoke screen.
Fractional Brownian motion with a reflecting wall.
Wada, Alexander H O; Vojta, Thomas
2018-02-01
Fractional Brownian motion, a stochastic process with long-time correlations between its increments, is a prototypical model for anomalous diffusion. We analyze fractional Brownian motion in the presence of a reflecting wall by means of Monte Carlo simulations. Whereas the mean-square displacement of the particle shows the expected anomalous diffusion behavior ⟨x²⟩ ∼ t^α, the interplay between the geometric confinement and the long-time memory leads to a highly non-Gaussian probability density function with a power-law singularity at the barrier. In the superdiffusive case α > 1, the particles accumulate at the barrier leading to a divergence of the probability density. For subdiffusion α < 1, in contrast, the probability density is depleted close to the barrier. We discuss implications of these findings, in particular, for applications that are dominated by rare events.
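A reflected-fBm Monte Carlo in the spirit of this record can be written in a few lines: draw exact fractional Gaussian noise by Cholesky factorization of its covariance, accumulate, and reflect at the origin. The sketch below is illustrative (small grids, since the Cholesky route scales as O(N³)); the measured exponent should sit near α = 2H:

```python
# Hedged sketch: reflected fractional Brownian motion via exact (Cholesky)
# sampling of fractional Gaussian noise, reflection imposed stepwise at x=0.
import numpy as np

def fgn_cov(n, hurst):
    """Covariance matrix of unit-step fractional Gaussian noise."""
    k = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
    return 0.5 * ((k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst)
                  + np.abs(k - 1) ** (2 * hurst))

def reflected_fbm_paths(n_paths, n_steps, hurst, rng):
    L = np.linalg.cholesky(fgn_cov(n_steps, hurst) + 1e-10 * np.eye(n_steps))
    inc = L @ rng.standard_normal((n_steps, n_paths))
    x = np.zeros(n_paths)
    out = np.empty((n_steps, n_paths))
    for i in range(n_steps):
        x = np.abs(x + inc[i])       # reflecting wall at the origin
        out[i] = x
    return out

rng = np.random.default_rng(3)
paths = reflected_fbm_paths(200, 512, hurst=0.75, rng=rng)
t = np.arange(1, 513)
msd = (paths ** 2).mean(axis=1)
alpha = np.polyfit(np.log(t[32:]), np.log(msd[32:]), 1)[0]
print("measured alpha ~", alpha, "(expected 2H = 1.5)")
```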
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
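The off-axis model named here, the modified Rician, p(I) = (1/Ic) exp(-(I+Is)/Ic) I0(2√(I·Is)/Ic), is easy to evaluate stably with the scaled Bessel function. A short sketch with invented Is and Ic:

```python
# Hedged sketch: the modified Rician intensity density for off-axis AO
# speckle, with Is the deterministic (PSF) intensity and Ic the speckle
# halo intensity. Parameter values are illustrative.
import numpy as np
from scipy.special import ive

def modified_rician(I, Is, Ic):
    """Uses the exponentially scaled Bessel ive(0, z) = I0(z) exp(-z)
    to avoid overflow: p = (1/Ic) exp(-(I+Is)/Ic + z) ive(0, z)."""
    z = 2.0 * np.sqrt(I * Is) / Ic
    return (1.0 / Ic) * np.exp(-(I + Is) / Ic + z) * ive(0, z)

I = np.linspace(0.0, 5.0, 6)
print(modified_rician(I, Is=1.0, Ic=0.5))   # long positive tail off-axis
```

The long positive tail of this density, versus the gamma-like statistics on axis, is the morphological difference the authors exploit to discriminate companions from quasi-static speckles.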
Yang, Q.; Jung, H.B.; Culbertson, C.W.; Marvinney, R.G.; Loiselle, M.C.; Locke, D.B.; Cheek, H.; Thibodeau, H.; Zheng, Yen
2009-01-01
In New England, groundwater arsenic occurrence has been linked to bedrock geology on regional scales. To ascertain and quantify this linkage at intermediate (10⁰-10¹ km) scales, 790 groundwater samples from fractured bedrock aquifers in the greater Augusta, Maine area are analyzed, and 31% of the sampled wells have arsenic concentrations >10 μg/L. The probability of [As] exceeding 10 μg/L mapped by indicator kriging is highest in Silurian pelite-sandstone and pelite-limestone units (≈40%). This probability differs significantly (p < 0.001) from those in the Silurian-Ordovician sandstone (24%), the Devonian granite (15%), and the Ordovician-Cambrian volcanic rocks (9%). The spatial pattern of groundwater arsenic distribution resembles the bedrock map. Thus, bedrock geology is associated with arsenic occurrence in fractured bedrock aquifers of the study area at intermediate scales relevant to water resources planning. The arsenic exceedance rate for each rock unit is considered robust because low, medium, and high arsenic occurrences in four cluster areas (3-20 km²) with a low sampling density of 1-6 wells per km² are comparable to those with a greater density of 5-42 wells per km². About 12,000 people (21% of the population) in the greater Augusta area (≈1135 km²) are at risk of exposure to >10 μg/L arsenic in groundwater. © 2009 American Chemical Society.
Yang, Qiang; Jung, Hun Bok; Culbertson, Charles W; Marvinney, Robert G; Loiselle, Marc C; Locke, Daniel B; Cheek, Heidi; Thibodeau, Hilary; Zheng, Yan
2009-04-15
In New England, groundwater arsenic occurrence has been linked to bedrock geology on regional scales. To ascertain and quantify this linkage at intermediate (10⁰-10¹ km) scales, 790 groundwater samples from fractured bedrock aquifers in the greater Augusta, Maine area are analyzed, and 31% of the sampled wells have arsenic concentrations >10 μg/L. The probability of [As] exceeding 10 μg/L mapped by indicator kriging is highest in Silurian pelite-sandstone and pelite-limestone units (approximately 40%). This probability differs significantly (p < 0.001) from those in the Silurian-Ordovician sandstone (24%), the Devonian granite (15%), and the Ordovician-Cambrian volcanic rocks (9%). The spatial pattern of groundwater arsenic distribution resembles the bedrock map. Thus, bedrock geology is associated with arsenic occurrence in fractured bedrock aquifers of the study area at intermediate scales relevant to water resources planning. The arsenic exceedance rate for each rock unit is considered robust because low, medium, and high arsenic occurrences in four cluster areas (3-20 km²) with a low sampling density of 1-6 wells per km² are comparable to those with a greater density of 5-42 wells per km². About 12,000 people (21% of the population) in the greater Augusta area (approximately 1135 km²) are at risk of exposure to >10 μg/L arsenic in groundwater.
Biochemical and hematologic changes after short-term space flight
NASA Technical Reports Server (NTRS)
Leach, C. S.
1992-01-01
Clinical laboratory data from blood samples obtained from astronauts before and after 28 flights (average duration = 6 days) of the Space Shuttle were analyzed by the paired t-test and the Wilcoxon signed-rank test and compared with data from the Skylab flights (duration approximately 28, 59, and 84 days). Angiotensin I and aldosterone were elevated immediately after short-term space flights, but the response of angiotensin I was delayed after Skylab flights. Serum calcium was not elevated after Shuttle flights, but magnesium and uric acid decreased after both Shuttle and Skylab. Creatine phosphokinase in serum was reduced after Shuttle but not Skylab flights, probably because exercises to prevent deconditioning were not performed on the Shuttle. Total cholesterol was unchanged after Shuttle flights, but low density lipoprotein cholesterol increased and high density lipoprotein cholesterol decreased. The concentration of red blood cells was elevated after Shuttle flights and reduced after Skylab flights. Reticulocyte count was decreased after both short- and long-term flights, indicating that a reduction in red blood cell mass is probably more closely related to suppression of red cell production than to an increase in destruction of erythrocytes. Serum ferritin and number of platelets were also elevated after Shuttle flights. In determining the reasons for postflight differences between the shorter and longer flights, it is important to consider not only duration but also countermeasures, differences between spacecraft, and procedures for landing and egress.
Encircling the dark: constraining dark energy via cosmic density in spheres
NASA Astrophysics Data System (ADS)
Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.
2016-08-01
The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, M.B.; Lafferty, K.D.; van Oosterhout, C.; Cable, J.
2011-01-01
Background: Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings: Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance: These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density. © 2011 Johnson et al.
Parasite transmission in social interacting hosts: Monogenean epidemics in guppies
Johnson, Mirelle B.; Lafferty, Kevin D.; van Oosterhout, Cock; Cable, Joanne
2011-01-01
Background Infection incidence increases with the average number of contacts between susceptible and infected individuals. Contact rates are normally assumed to increase linearly with host density. However, social species seek out each other at low density and saturate their contact rates at high densities. Although predicting epidemic behaviour requires knowing how contact rates scale with host density, few empirical studies have investigated the effect of host density. Also, most theory assumes each host has an equal probability of transmitting parasites, even though individual parasite load and infection duration can vary. To our knowledge, the relative importance of characteristics of the primary infected host vs. the susceptible population has never been tested experimentally. Methodology/Principal Findings Here, we examine epidemics using a common ectoparasite, Gyrodactylus turnbulli infecting its guppy host (Poecilia reticulata). Hosts were maintained at different densities (3, 6, 12 and 24 fish in 40 L aquaria), and we monitored gyrodactylids both at a population and individual host level. Although parasite population size increased with host density, the probability of an epidemic did not. Epidemics were more likely when the primary infected fish had a high mean intensity and duration of infection. Epidemics only occurred if the primary infected host experienced more than 23 worm days. Female guppies contracted infections sooner than males, probably because females have a higher propensity for shoaling. Conclusions/Significance These findings suggest that in social hosts like guppies, the frequency of social contact largely governs disease epidemics independent of host density.
Randomized path optimization for the mitigated counter detection of UAVs
2017-06-01
… using Bayesian filtering. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal … algorithm's success. A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. We …
Korman, Josh; Yard, Mike
2017-01-01
Quantifying temporal and spatial trends in abundance or relative abundance is required to evaluate effects of harvest and changes in habitat for exploited and endangered fish populations. In many cases, the proportion of the population or stock that is captured (catchability or capture probability) is unknown but is often assumed to be constant over space and time. We used data from a large-scale mark-recapture study to evaluate the extent of spatial and temporal variation, and the effects of fish density, fish size, and environmental covariates, on the capture probability of rainbow trout (Oncorhynchus mykiss) in the Colorado River, AZ. Estimates of capture probability for boat electrofishing varied 5-fold across five reaches, 2.8-fold across the range of fish densities that were encountered, 2.1-fold over 19 trips, and 1.6-fold over five fish size classes. Shoreline angle and turbidity were the best covariates explaining variation in capture probability across reaches and trips. Patterns in capture probability were driven by changes in gear efficiency and spatial aggregation, but the latter was more important. Failure to account for effects of fish density on capture probability when translating a historical catch per unit effort time series into a time series of abundance, led to 2.5-fold underestimation of the maximum extent of variation in abundance over the period of record, and resulted in unreliable estimates of relative change in critical years. Catch per unit effort surveys have utility for monitoring long-term trends in relative abundance, but are too imprecise and potentially biased to evaluate population response to habitat changes or to modest changes in fishing effort.
Wavefronts, actions and caustics determined by the probability density of an Airy beam
NASA Astrophysics Data System (ADS)
Espíndola-Ramos, Ernesto; Silva-Ortigoza, Gilberto; Sosa-Sánchez, Citlalli Teresa; Julián-Macías, Israel; de Jesús Cabrera-Rosas, Omar; Ortega-Vidals, Paula; Alejandro Juárez-Reyes, Salvador; González-Juárez, Adriana; Silva-Ortigoza, Ramón
2018-07-01
The main contribution of the present work is to use the probability density of an Airy beam to identify its maxima with the family of caustics associated with the wavefronts determined by the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a given potential. To this end, we give a classical mechanics characterization of a solution of the one-dimensional Schrödinger equation in free space determined by a complete integral of the Hamilton–Jacobi and Laplace equations in free space. That is, with this type of solution, we associate a two-parameter family of wavefronts in the spacetime, which are the level curves of a one-parameter family of solutions to the Hamilton–Jacobi equation with a determined potential, and a one-parameter family of caustics. The general results are applied to an Airy beam to show that the maxima of its probability density provide a discrete set of caustics, wavefronts and potentials. The results presented here are a natural generalization of those obtained by Berry and Balazs in 1979 for an Airy beam. Finally, we remark that, in a natural manner, each maximum of the probability density of an Airy beam determines a Hamiltonian system.
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of elevated volatile organic compound (VOC) concentrations in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of elevated nitrate concentrations in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
Carlson, Paul R; Yarbro, Laura A; Madley, Kevin; Arnold, Herman; Merello, Manuel; Vanderbloemen, Lisa; McRae, Gil; Durako, Michael J
2003-01-01
We examined the response of demographic, morphological, and chemical parameters of turtle grass (Thalassia testudinum) to much-higher-than-normal rainfall associated with an El Niño event in the winter of 1997-1998. Up to 20 inches of added rain fell between December 1997 and March 1998, triggering widespread and persistent phytoplankton blooms along the west coast of Florida. Water-column chlorophyll concentrations estimated from serial SeaWiFS imagery were much higher during the El Niño event than in the previous or following years, although the timing and magnitude of phytoplankton blooms varied among sites. Seagrass samples collected in 1997, 1998, and 1999 provided an excellent opportunity to test the responsiveness of Thalassia to decline and subsequent improvement of water quality and clarity in four estuaries. Using a scoring technique based on temporal responsiveness, spatial consistency, and statistical strength of indicators, we found that several morphological parameters (Thalassia shoot density, blade width, blade number, and shoot-specific leaf area) were responsive and consistent measures of light stress. Some morphological parameters, such as rhizome apex density, responded to declines and subsequent improvement in water clarity, but lacked the statistical discriminating power necessary to be useful indicators. However, rhizome sugar, starch, and total carbohydrate concentrations also exhibited spatially and temporally consistent variation as well as statistical strength. Because changes in shoot density, as well as water clarity, affect rhizome carbohydrate levels, a composite metric based on Thalassia shoot density and rhizome carbohydrate levels together is probably more useful than either parameter alone as an indicator of seagrass health.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
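The chain described in this abstract (sonar equation in, detector characterization out) reduces to a few vectorized lines. A Python sketch assuming spherical spreading and a logistic detector curve, with all distributions invented rather than taken from the Tongue of the Ocean analysis:

```python
# Hedged sketch: Monte Carlo estimate of the probability of detecting a
# click at a single sensor. Source level, range, and noise distributions,
# the spreading law, and the detector curve are all illustrative.
import numpy as np

rng = np.random.default_rng(11)
n = 200_000

SL = rng.normal(200.0, 5.0, n)               # click source level, dB re 1 uPa
r = 3000.0 * np.sqrt(rng.uniform(0, 1, n))   # range (m), uniform over a disc
NL = rng.normal(70.0, 3.0, n)                # noise level, dB

TL = 20.0 * np.log10(np.maximum(r, 1.0))     # spherical spreading loss
snr = SL - TL - NL                           # passive sonar equation (simplified)

def p_detect(snr_db, snr50=10.0, slope=0.8):
    """Logistic detector characterization: P(detect | SNR)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - snr50)))

print("mean P(detect a click within 3 km):", p_detect(snr).mean())
```

Combined with a call rate and a false positive rate, this average detection probability is what converts a click count into an animal density estimate.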
Koseki, Shige; Nonaka, Junko
2012-09-01
The objective of this study was to develop a probabilistic model to predict the end of lag time (λ) during the growth of Bacillus cereus vegetative cells as a function of temperature, pH, and salt concentration using logistic regression. The developed λ model was subsequently combined with a logistic differential equation to simulate bacterial numbers over time. To develop a novel model for λ, we determined whether bacterial growth had begun, i.e., whether λ had ended, at each time point during the growth kinetics. The growth of B. cereus was evaluated by optical density (OD) measurements in culture media for various pHs (5.5–7.0) and salt concentrations (0.5–2.0%) at static temperatures (10–20°C). The probability of the end of λ was modeled using dichotomous judgments obtained at each OD measurement point concerning whether a significant increase had been observed. The probability of the end of λ was described as a function of time, temperature, pH, and salt concentration and showed a high goodness of fit. The λ model was validated with independent data sets of B. cereus growth in culture media and foods, indicating acceptable performance. Furthermore, the λ model, in combination with a logistic differential equation, enabled a simulation of the population of B. cereus in various foods over time at static and/or fluctuating temperatures with high accuracy. Thus, this newly developed modeling procedure enables the description of λ using observable environmental parameters without any conceptual assumptions and the simulation of bacterial numbers over time with the use of a logistic differential equation.
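A rough sketch of the two-stage idea under stated assumptions: a logistic function for the probability that λ has ended, coupled multiplicatively to a logistic growth equation. The coefficients and the coupling are invented for illustration, not the fitted model from the study:

```python
# Sketch: probability-of-lag-end model gating a logistic growth equation.
import numpy as np

def p_lag_ended(t, temp, pH, salt):
    # Hypothetical logistic-regression coefficients (time in h, temp in deg C).
    z = -8.0 + 0.15 * t + 0.3 * (temp - 10) + 1.2 * (pH - 5.5) - 0.8 * salt
    return 1.0 / (1.0 + np.exp(-z))

def simulate(tmax, dt, temp, pH, salt, n0=1e2, nmax=1e8, mu=0.4):
    # Euler integration of dn/dt = mu * n * (1 - n/nmax), gated by P(lag ended).
    t, n, out = 0.0, n0, []
    while t < tmax:
        n += mu * n * (1 - n / nmax) * p_lag_ended(t, temp, pH, salt) * dt
        t += dt
        out.append((t, n))
    return out

traj = simulate(tmax=72, dt=0.1, temp=15, pH=6.5, salt=1.0)
print("final time (h), population:", traj[-1])
```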
Naranjo, Ramon C.; Welborn, Toby L.; Rosen, Michael R.
2013-01-01
The distribution of nitrate as nitrogen (referred to herein as nitrate-N) concentrations in groundwater was determined by collecting more than 200 samples from 8 land-use categories: single family residential, multifamily residential, rural (including land use for agriculture), vacant land, commercial, industrial, utilities, and unclassified. Nitrate-N concentrations ranged from below detection (less than 0.05 milligrams per liter) to 18 milligrams per liter. Nitrate-N concentrations in samples from three wells equaled or exceeded the maximum contaminant level of 10 milligrams per liter set by the U.S. Environmental Protection Agency. Nitrate-N concentrations in sampled wells were positively correlated with the percentage of single-family land use and with septic-system density. Wells sampled in other land-use categories showed no correlation with nitrate-N concentrations. In areas with greater than 50-percent single-family land use, nitrate-N concentrations were two times greater than in areas with less than 50 percent single-family land use. Nitrate-N concentrations in groundwater near septic systems that had been used more than 20 years were more than two times greater than in areas where septic systems had been used less than 20 years. Lower nitrate-N concentrations in the areas where septic systems were less than 20 years old probably result from temporary storage of nitrogen leaching from septic systems into the unsaturated zone. In areas where septic systems are abundant, nitrate-N concentrations were predicted through 2059 by using numerical models within the Ruhenstroth and Johnson Lane subdivisions in the Carson Valley. Model results indicated that nitrate-N concentrations will continue to increase and could exceed the maximum contaminant level over extended areas inside and outside the subdivisions. Two modeling scenarios were used to simulate future transport as a result of removal of septic systems (the source of nitrate-N contamination) and the termination of domestic pumping of groundwater. The models showed the largest decrease in nitrate-N concentrations when septic systems were removed and wells continued to pump. Nitrate-N concentrations probably will continue to increase in areas that are dependent on septic systems for waste disposal, either under current land-use conditions in the valley or with continued growth and change in land use in the valley.
Oak regeneration and overstory density in the Missouri Ozarks
David R. Larsen; Monte A. Metzger
1997-01-01
Reducing overstory density is a commonly recommended method of increasing the regeneration potential of oak (Quercus) forests. However, recommendations seldom specify the probable increase in density or the size of reproduction associated with a given residual overstory density. This paper presents logistic regression models that describe this...
Characterizing the distribution of an endangered salmonid using environmental DNA analysis
Laramie, Matthew B.; Pilliod, David S.; Goldberg, Caren S.
2015-01-01
Determining species distributions accurately is crucial to developing conservation and management strategies for imperiled species, but a challenging task for small populations. We evaluated the efficacy of environmental DNA (eDNA) analysis for improving detection and thus potentially refining the known distribution of Chinook salmon (Oncorhynchus tshawytscha) in the Methow and Okanogan Subbasins of the Upper Columbia River, which span the border between Washington, USA and British Columbia, Canada. We developed an assay to target a 90 base pair sequence of Chinook DNA and used quantitative polymerase chain reaction (qPCR) to quantify the amount of Chinook eDNA in triplicate 1-L water samples collected at 48 stream locations in June and again in August 2012. The overall probability of detecting Chinook with our eDNA method in areas within the known distribution was 0.77 (±0.05 SE). Detection probability was lower in June (0.62, ±0.08 SE) during high flows and at the beginning of spring Chinook migration than during base flows in August (0.93, ±0.04 SE). In the Methow subbasin, mean eDNA concentration was higher in August compared to June, especially in smaller tributaries, probably resulting from the arrival of spring Chinook adults, reduced discharge, or both. Chinook eDNA concentrations did not appear to change in the Okanogan subbasin from June to August. Contrary to our expectations about downstream eDNA accumulation, Chinook eDNA did not decrease in concentration in upstream reaches (0–120 km). Further examination of factors influencing spatial distribution of eDNA in lotic systems may allow for greater inference of local population densities along stream networks or watersheds. These results demonstrate the potential effectiveness of eDNA detection methods for determining landscape-level distribution of anadromous salmonids in large river systems.
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed for numerous physical systems. In such systems statistical properties [like probability distribution, mean square displacement (MSD), first-passage time] depend on a time span t_a between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay t_a. Aging weakly affects the shape of the probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover, for the wait-first jump model we observe a different behavior of MSD when t_a ≪ t and t_a ≫ t.
On Orbital Elements of Extrasolar Planetary Candidates and Spectroscopic Binaries
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
We estimate probability densities of orbital elements, periods, and eccentricities, for the population of extrasolar planetary candidates (EPC) and, separately, for the population of spectroscopic binaries (SB) with solar-type primaries. We construct empirical cumulative distribution functions (CDFs) in order to infer probability distribution functions (PDFs) for orbital periods and eccentricities. We also derive a joint probability density for period-eccentricity pairs in each population. Comparison of respective distributions reveals that in all cases EPC and SB populations are, in the context of orbital elements, indistinguishable from each other to a high degree of statistical significance. Probability densities of orbital periods in both populations have a P^(-1) functional form, whereas the PDFs of eccentricities can best be characterized as a Gaussian with a mean of about 0.35 and standard deviation of about 0.2 turning into a flat distribution at small values of eccentricity. These remarkable similarities between EPC and SB must be taken into account by theories aimed at explaining the origin of extrasolar planetary candidates, and constitute an important clue as to their ultimate nature.
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed, we applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured.
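A fixed-bandwidth stand-in for the weighted adaptive kernel density estimator used in the study (the adaptive version would vary the bandwidth per point); the effect data and weights below are synthetic:

```python
# Sketch: weighted Gaussian kernel density estimate of treatment effects.
import numpy as np

rng = np.random.default_rng(2)
effects = rng.normal(0.0, 0.25, 300)     # e.g., log hazard ratios, synthetic
weights = rng.uniform(0.5, 2.0, 300)     # e.g., inverse-variance weights
weights = weights / weights.sum()

def wkde(x, data, w, h=0.08):
    # Fixed bandwidth h; an adaptive estimator would let h vary per data point.
    u = (x[:, None] - data[None, :]) / h
    return (w[None, :] * np.exp(-0.5 * u**2)).sum(axis=1) / (h * np.sqrt(2 * np.pi))

grid = np.linspace(-1, 1, 201)
density = wkde(grid, effects, weights)
print("density peak near:", grid[density.argmax()].round(3))
```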
On the joint spectral density of bivariate random sequences. Thesis Technical Report No. 21
NASA Technical Reports Server (NTRS)
Aalfs, David D.
1995-01-01
For univariate random sequences, the power spectral density acts like a probability density function of the frequencies present in the sequence. This dissertation extends that concept to bivariate random sequences. For this purpose, a function called the joint spectral density is defined that represents a joint probability weighting of the frequency content of pairs of random sequences. Given a pair of random sequences, the joint spectral density is not uniquely determined in the absence of any constraints. Two approaches to constraining the sequences are suggested: (1) assume the sequences are the margins of some stationary random field, (2) assume the sequences conform to a particular model that is linked to the joint spectral density. For both approaches, the properties of the resulting sequences are investigated in some detail, and simulation is used to corroborate theoretical results. It is concluded that under either of these two constraints, the joint spectral density can be computed from the non-stationary cross-correlation.
How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?
NASA Astrophysics Data System (ADS)
Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.
2013-12-01
From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate impacts of data on uncertainty quantification and reduction using an example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include fractions of the total surface site density of the two sites and surface complex formation constants of the three reactions. A total of seven experiments were conducted with different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because the experiments cannot distinguish the relative adsorption affinity of the strong and weak sites for uranium. Therefore, the experiments with high initial concentration of U(VI) are needed, because in those experiments the strong site is nearly saturated and the weak site can be determined. The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The data impacts on uncertainty quantification and reduction are quantified using probability density functions of model parameters obtained from Markov chain Monte Carlo simulation with the DREAM algorithm. This study provides insights into model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
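The study used the DREAM algorithm; as a conceptual stand-in, a generic random-walk Metropolis sampler sketches how posterior parameter densities are obtained from data (synthetic here, with a deliberately simple one-parameter likelihood):

```python
# Sketch: random-walk Metropolis sampling of a parameter posterior.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(2.0, 0.5, 50)          # synthetic observations

def log_post(theta):
    # Flat prior on (0, 10); Gaussian likelihood with known sigma = 0.5.
    if not (0.0 < theta < 10.0):
        return -np.inf
    return -0.5 * np.sum((data - theta) ** 2) / 0.5**2

theta, lp = 1.0, log_post(1.0)
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.2)    # symmetric random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])          # discard burn-in
print("posterior mean, sd:", post.mean().round(3), post.std().round(3))
```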
Propensity, Probability, and Quantum Theory
NASA Astrophysics Data System (ADS)
Ballentine, Leslie E.
2016-08-01
Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes' theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.; Bliven, L. F.; Tung, C.-C.
1984-01-01
On the basis of the mapping method developed by Huang et al. (1983), an analytic expression for the non-Gaussian joint probability density function of slope and elevation for nonlinear gravity waves is derived. Various conditional and marginal density functions are also obtained through the joint density function. The analytic results are compared with a series of carefully controlled laboratory observations, and good agreement is noted. Furthermore, the laboratory wind wave field observations indicate that the capillary or capillary-gravity waves may not be the dominant components in determining the total roughness of the wave field. Thus, the analytic results, though derived specifically for the gravity waves, may have more general applications.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
NASA Astrophysics Data System (ADS)
Mori, Shohei; Hirata, Shinnosuke; Yamaguchi, Tadashi; Hachiya, Hiroyuki
To develop a quantitative diagnostic method for liver fibrosis using an ultrasound B-mode image, a probability imaging method of tissue characteristics based on a multi-Rayleigh model, which expresses the probability density function of echo signals from liver fibrosis, has been proposed. In this paper, the effect of non-speckle echo signals on tissue characteristics estimated from the multi-Rayleigh model was evaluated. Non-speckle signals were determined and removed using the modeling error of the multi-Rayleigh model. The correct tissue characteristics of fibrotic tissue could be estimated with the removal of non-speckle signals.
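A minimal sketch of a multi-Rayleigh amplitude model as a weighted mixture of Rayleigh densities; the weights and scale parameters below are illustrative, not values from the paper:

```python
# Sketch: evaluate a mixture-of-Rayleigh probability density for echo amplitudes.
import numpy as np

def rayleigh_pdf(x, sigma):
    return (x / sigma**2) * np.exp(-x**2 / (2 * sigma**2))

def multi_rayleigh_pdf(x, weights, sigmas):
    weights = np.asarray(weights) / np.sum(weights)   # normalize mixture weights
    return sum(w * rayleigh_pdf(x, s) for w, s in zip(weights, sigmas))

x = np.linspace(0, 5, 501)
pdf = multi_rayleigh_pdf(x, weights=[0.6, 0.3, 0.1], sigmas=[0.5, 1.0, 2.0])
print("integral ~ 1:", (pdf.sum() * (x[1] - x[0])).round(3))   # normalization check
```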
Laboratory-Tutorial Activities for Teaching Probability
ERIC Educational Resources Information Center
Wittmann, Michael C.; Morgan, Jeffrey T.; Feeley, Roger E.
2006-01-01
We report on the development of students' ideas of probability and probability density in a University of Maine laboratory-based general education physics course called "Intuitive Quantum Physics". Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We…
Wangsness, David J.; Eikenberry, S.E.; Wilber, W.G.; Crawford, Charles G.
1981-01-01
The White River Park Commission is planning the development of park facilities along the White River through Indianapolis, Ind. A key element in the planning is the determination of whether water quality of the river is suitable for recreation. A preliminary water-quality assessment conducted August 4-5, 1980, indicated that, during low-flow steady-state conditions, the river is suitable for partial body contact recreation (any contact with water up to, but not including, complete submergence). Dissolved-oxygen concentrations varied but were higher than the Indiana water-quality standards established to ensure conditions for the maintenance of a well-balanced, warm-water fish community. High fecal-coliform densities that have been observed in the White River during high streamflow are probably caused by stormwater runoff carried by combined storm and sanitary sewers. However, during the low-flow, steady-state conditions on August 4-5, 1980, fecal-coliform densities were within the Indiana standards for partial body contact recreation. Quantities of organic matter and concentrations of nutrients and heavy metals in the White River were generally within the limits recommended by the U.S. Environmental Protection Agency and were generally similar to values for other Indiana rivers. Chromium, copper, lead, zinc, and mercury are accumulating in bottom materials downstream from 30th Street. The phytoplankton concentrations in the White River were high. The dominant phytoplankton species were indicative of rivers moderately affected by organic wastes. (USGS)
Stream permanence influences crayfish occupancy and abundance in the Ozark Highlands, USA
Yarra, Allyson N.; Magoulick, Daniel D.
2018-01-01
Crayfish use of intermittent streams is especially important to understand in the face of global climate change. We examined the influence of stream permanence and local habitat on crayfish occupancy and species densities in the Ozark Highlands, USA. We sampled in June and July 2014 and 2015. We used a quantitative kick–seine method to sample crayfish presence and abundance at 20 stream sites with 32 surveys/site in the Upper White River drainage, and we measured associated local environmental variables each year. We modeled site occupancy and detection probabilities with the software PRESENCE, and we used multiple linear regressions to identify relationships between crayfish species densities and environmental variables. Occupancy of all crayfish species was related to stream permanence. Faxonius meeki was found exclusively in intermittent streams, whereas Faxonius neglectus and Faxonius luteus had higher occupancy and detection probability in permanent than in intermittent streams, and Faxonius williamsi was associated with intermittent streams. Estimates of detection probability ranged from 0.56 to 1, which is high relative to values found by other investigators. With the exception of F. williamsi, species densities were largely related to stream permanence rather than local habitat. Species densities did not differ by year, but total crayfish densities were significantly lower in 2015 than 2014. Increased precipitation and discharge in 2015 probably led to the lower crayfish densities observed during this year. Our study demonstrates that crayfish distribution and abundance are strongly influenced by stream permanence. Some species, including those of conservation concern (i.e., F. williamsi, F. meeki), appear dependent on intermittent streams, and conservation efforts should include consideration of intermittent streams as an important component of freshwater biodiversity.
Derivation of an eigenvalue probability density function relating to the Poincaré disk
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Krishnapur, Manjunath
2009-09-01
A result of Zyczkowski and Sommers (2000 J. Phys. A: Math. Gen. 33 2045-57) gives the eigenvalue probability density function for the top N × N sub-block of a Haar distributed matrix from U(N + n). In the case n >= N, we rederive this result, starting from knowledge of the distribution of the sub-blocks, introducing the Schur decomposition and integrating over all variables except the eigenvalues. The integration is done by identifying a recursive structure which reduces the dimension. This approach is inspired by an analogous approach which has been recently applied to determine the eigenvalue probability density function for random matrices A^(-1)B, where A and B are random matrices with entries standard complex normals. We relate the eigenvalue distribution of the sub-blocks to a many-body quantum state, and to the one-component plasma, on the pseudosphere.
NASA Astrophysics Data System (ADS)
Ballestra, Luca Vincenzo; Pacelli, Graziella; Radi, Davide
2016-12-01
We propose a numerical method to compute the first-passage probability density function in a time-changed Brownian model. In particular, we derive an integral representation of such a density function in which the integrand functions must be obtained solving a system of Volterra equations of the first kind. In addition, we develop an ad-hoc numerical procedure to regularize and solve this system of integral equations. The proposed method is tested on three application problems of interest in mathematical finance, namely the calculation of the survival probability of an indebted firm, the pricing of a single-knock-out put option and the pricing of a double-knock-out put option. The results obtained reveal that the novel approach is extremely accurate and fast, and performs significantly better than the finite difference method.
Committor of elementary reactions on multistate systems
NASA Astrophysics Data System (ADS)
Király, Péter; Kiss, Dóra Judit; Tóth, Gergely
2018-04-01
In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation requires projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize several elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well on the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
Convection due to an unstable density difference across a permeable membrane
NASA Astrophysics Data System (ADS)
Puthenveettil, Baburaj A.; Arakeri, Jaywant H.
We study natural convection driven by unstable concentration differences of sodium chloride (NaCl) across a horizontal permeable membrane at Rayleigh numbers (Ra) of 10^10 to 10^11 and Schmidt number (Sc) = 600. A layer of brine lies over a layer of distilled water, separated by the membrane, in square-cross-section tanks. The membrane is permeable enough to allow a small flow across it at higher driving potentials. Based on the predominant mode of transport across the membrane, three regimes of convection, namely an advection regime, a diffusion regime and a combined regime, are identified. The near-membrane flow in all the regimes consists of sheet plumes formed from the unstable layers of fluid near the membrane. In the advection regime, observed at higher concentration differences, the plume spacings (λ_b) show a common log-normal probability density function at all Ra. We propose a phenomenology which predicts λ̄_b ~ √(Z_w Z_{V_i}), where Z_w and Z_{V_i} are, respectively, the near-wall length scales in Rayleigh–Bénard convection (RBC) and due to the advection velocity. In the combined regime, which occurs at intermediate concentration differences, the flux scales as (ΔC/2)^{4/3}. At lower driving potentials, in the diffusion regime, the flux scaling is similar to that in turbulent RBC.
Murn, Campbell; Holloway, Graham J
2016-10-01
Species occurring at low density can be difficult to detect and if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the highest influence on the detection probability. Averaged detection probability was 0.207 (s.e. 0.033) and based on this the mean number of visits required to determine within 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that low detection probabilities of some species means that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
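The 13-visit figure follows from treating each visit as an independent Bernoulli detection with p = 0.207 and requiring the probability of missing the species on every visit to fall below 5%:

```python
# Visits n must satisfy (1 - p)^n < alpha, i.e. n > log(alpha) / log(1 - p).
import math

def visits_for_absence(p_detect, alpha=0.05):
    return math.ceil(math.log(alpha) / math.log(1 - p_detect))

print(visits_for_absence(0.207))   # -> 13, matching the reported value
```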
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
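A conceptual sketch of the Bayesian fusion step, assuming both conditionals are Gaussian (the paper's actual conditional densities are estimated from atlas data); all numbers below are illustrative:

```python
# Sketch: fuse an intensity-conditioned and a location-conditioned estimate.
import numpy as np

def fuse_gaussians(mu_a, var_a, mu_b, var_b):
    # The product of two Gaussian densities is Gaussian up to normalization.
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mu = var * (mu_a / var_a + mu_b / var_b)
    return mu, var

# p(density | T1/T2 intensity) and p(density | atlas location), in HU (assumed).
mu_int, var_int = 250.0, 80.0**2
mu_loc, var_loc = 400.0, 150.0**2
mu_post, var_post = fuse_gaussians(mu_int, var_int, mu_loc, var_loc)
print("posterior mean (estimated density):", round(mu_post, 1))
```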
Can we estimate molluscan abundance and biomass on the continental shelf?
NASA Astrophysics Data System (ADS)
Powell, Eric N.; Mann, Roger; Ashton-Alcox, Kathryn A.; Kuykendall, Kelsey M.; Chase Long, M.
2017-11-01
Few empirical studies have focused on the effect of sample density on the estimate of abundance of the dominant carbonate-producing fauna of the continental shelf. Here, we present such a study and consider the implications of suboptimal sampling design on estimates of abundance and size-frequency distribution. We focus on a principal carbonate producer of the U.S. Atlantic continental shelf, the Atlantic surfclam, Spisula solidissima. To evaluate the degree to which the results are typical, we analyze a dataset for the principal carbonate producer of Mid-Atlantic estuaries, the Eastern oyster Crassostrea virginica, obtained from Delaware Bay. These two species occupy different habitats and display different lifestyles, yet demonstrate similar challenges to survey design and similar trends with sampling density. The median of a series of simulated survey mean abundances, the central tendency obtained over a large number of surveys of the same area, always underestimated true abundance at low sample densities. More dramatic were the trends in the probability of a biased outcome. As sample density declined, the probability of a survey availability event, defined as a survey yielding indices >125% or <75% of the true population abundance, increased and that increase was disproportionately biased towards underestimates. For these cases where a single sample accessed about 0.001-0.004% of the domain, 8-15 random samples were required to reduce the probability of a survey availability event below 40%. The problem of differential bias, in which the probabilities of a biased-high and a biased-low survey index were distinctly unequal, was resolved with fewer samples than the problem of overall bias. These trends suggest that the influence of sampling density on survey design comes with a series of incremental challenges. At woefully inadequate sampling density, the probability of a biased-low survey index will substantially exceed the probability of a biased-high index. The survey time series on the average will return an estimate of the stock that underestimates true stock abundance. If sampling intensity is increased, the frequency of biased indices balances between high and low values. Incrementing sample number from this point steadily reduces the likelihood of a biased survey; however, the number of samples necessary to drive the probability of survey availability events to a preferred level of infrequency may be daunting. Moreover, certain size classes will be disproportionately susceptible to such events and the impact on size frequency will be species specific, depending on the relative dispersion of the size classes.
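A Monte Carlo sketch of "survey availability events" under an assumed patchy (negative-binomial) abundance field: simulate repeated random surveys and count how often the survey mean falls outside 75-125% of the true mean. All parameters here are illustrative, not the study's data:

```python
# Sketch: probability of a biased survey index versus number of samples.
import numpy as np

rng = np.random.default_rng(4)
domain = rng.negative_binomial(1, 0.05, 100_000)   # patchy abundance per cell
true_mean = domain.mean()

def availability_rate(n_samples, n_surveys=5000):
    means = rng.choice(domain, size=(n_surveys, n_samples)).mean(axis=1)
    return np.mean((means > 1.25 * true_mean) | (means < 0.75 * true_mean))

for n in (2, 8, 15, 40):
    print(n, "samples ->", round(availability_rate(n), 3))
```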
Stochastic modeling of turbulent reacting flows
NASA Technical Reports Server (NTRS)
Fox, R. O.; Hill, J. C.; Gao, F.; Moser, R. D.; Rogers, M. M.
1992-01-01
Direct numerical simulations of a single-step irreversible chemical reaction with non-premixed reactants in forced isotropic turbulence at R_λ = 63, Da = 4.0, and Sc = 0.7 were made using 128 Fourier modes to obtain joint probability density functions (pdfs) and other statistical information to parameterize and test a Fokker-Planck turbulent mixing model. Preliminary results indicate that the modeled gradient stretching term for an inert scalar is independent of the initial conditions of the scalar field. The conditional pdf of scalar gradient magnitudes is found to be a function of the scalar until the reaction is largely completed. Alignment of concentration gradients with local strain rate and other features of the flow were also investigated.
Automated side-chain model building and sequence assignment by template matching.
Terwilliger, Thomas C
2003-01-01
An algorithm is described for automated building of side chains in an electron-density map once a main-chain model is built and for alignment of the protein sequence to the map. The procedure is based on a comparison of electron density at the expected side-chain positions with electron-density templates. The templates are constructed from average amino-acid side-chain densities in 574 refined protein structures. For each contiguous segment of main chain, a matrix with entries corresponding to an estimate of the probability that each of the 20 amino acids is located at each position of the main-chain model is obtained. The probability that this segment corresponds to each possible alignment with the sequence of the protein is estimated using a Bayesian approach and high-confidence matches are kept. Once side-chain identities are determined, the most probable rotamer for each side chain is built into the model. The automated procedure has been implemented in the RESOLVE software. Combined with automated main-chain model building, the procedure produces a preliminary model suitable for refinement and extension by an experienced crystallographer.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.
NASA Astrophysics Data System (ADS)
Górska, K.; Horzela, A.; Bratek, Ł.; Dattoli, G.; Penson, K. A.
2018-04-01
We study functions related to the experimentally observed Havriliak-Negami dielectric relaxation pattern, proportional in the frequency domain to [1 + (iωτ_0)^α]^(-β) with τ_0 > 0 being some characteristic time. For α = l/k < 1 (l and k being positive and relatively prime integers) and β > 0 we furnish exact and explicit expressions for response and relaxation functions in the time domain and suitable probability densities in their domain dual in the sense of the inverse Laplace transform. All these functions are expressed as finite sums of generalized hypergeometric functions, convenient to handle analytically and numerically. Introducing a reparameterization β = (2-q)/(q-1) and τ_0 = (q-1)^(1/α) (1 < q < 2) we show that for 0 < α < 1 the response functions f_{α,β}(t/τ_0) go to the one-sided Lévy stable distributions when q tends to one. Moreover, applying the self-similarity property of the probability densities g_{α,β}(u), we introduce two-variable densities and show that they satisfy the integral form of the evolution equation.
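Direct numerical evaluation of the frequency-domain pattern quoted above; the parameter values are arbitrary examples:

```python
# Evaluate the Havriliak-Negami response 1 / (1 + (i*omega*tau0)**alpha)**beta.
import numpy as np

def hn_response(omega, tau0=1.0, alpha=0.5, beta=1.5):
    return 1.0 / (1.0 + (1j * omega * tau0) ** alpha) ** beta

omega = np.logspace(-3, 3, 7)
print(np.abs(hn_response(omega)).round(4))   # magnitude across six decades
```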
Rupert, Michael G.; Plummer, Niel
2009-01-01
This raster data set delineates the predicted probability of unmixed young groundwater (defined using chlorofluorocarbon-11 concentrations and tritium activities) in groundwater in the Eagle River watershed valley-fill aquifer, Eagle County, North-Central Colorado, 2006-2007. This data set was developed by a cooperative project between the U.S. Geological Survey, Eagle County, the Eagle River Water and Sanitation District, the Town of Eagle, the Town of Gypsum, and the Upper Eagle Regional Water Authority. This project was designed to evaluate potential land-development effects on groundwater and surface-water resources so that informed land-use and water management decisions can be made. This groundwater probability map and its associated probability maps were developed as follows: (1) A point data set of wells with groundwater quality and groundwater age data was overlaid with thematic layers of anthropogenic (related to human activities) and hydrogeologic data by using a geographic information system to assign each well values for depth to groundwater, distance to major streams and canals, distance to gypsum beds, precipitation, soils, and well depth. These data then were downloaded to a statistical software package for analysis by logistic regression. (2) Statistical models predicting the probability of elevated nitrate concentrations, the probability of unmixed young water (using chlorofluorocarbon-11 concentrations and tritium activities), and the probability of elevated volatile organic compound concentrations were developed using logistic regression techniques. (3) The statistical models were entered into a GIS and the probability map was constructed.
Spatial averaging of a dissipative particle dynamics model for active suspensions
NASA Astrophysics Data System (ADS)
Panchenko, Alexander; Hinz, Denis F.; Fried, Eliot
2018-03-01
Starting from a fine-scale dissipative particle dynamics (DPD) model of self-motile point particles, we derive meso-scale continuum equations by applying a spatial averaging version of the Irving-Kirkwood-Noll procedure. Since the method does not rely on kinetic theory, the derivation is valid for highly concentrated particle systems. Spatial averaging yields stochastic continuum equations similar to those of Toner and Tu. However, our theory also involves a constitutive equation for the average fluctuation force. According to this equation, both the strength and the probability distribution vary with time and position through the effective mass density. The statistics of the fluctuation force also depend on the fine scale dissipative force equation, the physical temperature, and two additional parameters which characterize fluctuation strengths. Although the self-propulsion force entering our DPD model contains no explicit mechanism for aligning the velocities of neighboring particles, our averaged coarse-scale equations include the commonly encountered cubically nonlinear (internal) body force density.
Lizana, L; Ambjörnsson, T
2009-11-01
We solve a nonequilibrium statistical-mechanics problem exactly, namely, the single-file dynamics of N hard-core interacting particles (the particles cannot pass each other) of size Δ diffusing in a one-dimensional system of finite length L with reflecting boundaries at the ends. We obtain an exact expression for the conditional probability density function ρ_T(y_T, t | y_{T,0}) that a tagged particle T (T = 1,...,N) is at position y_T at time t given that it at time t = 0 was at position y_{T,0}. Using a Bethe ansatz we obtain the N-particle probability density function and, by integrating out the coordinates (and averaging over initial positions) of all particles but particle T, we arrive at an exact expression for ρ_T(y_T, t | y_{T,0}) in terms of Jacobi polynomials or hypergeometric functions. Going beyond previous studies, we consider the asymptotic limit of large N, maintaining L finite, using a nonstandard asymptotic technique. We derive an exact expression for ρ_T(y_T, t | y_{T,0}) for a tagged particle located roughly in the middle of the system, from which we find that there are three time regimes of interest for finite-sized systems: (A) for times much smaller than the collision time t
NASA Astrophysics Data System (ADS)
Wellons, Sarah; Torrey, Paul
2017-06-01
Galaxy populations at different cosmic epochs are often linked by cumulative comoving number density in observational studies. Many theoretical works, however, have shown that the cumulative number densities of tracked galaxy populations not only evolve in bulk, but also spread out over time. We present a method for linking progenitor and descendant galaxy populations which takes both of these effects into account. We define probability distribution functions that capture the evolution and dispersion of galaxy populations in number density space, and use these functions to assign galaxies at redshift z_f probabilities of being progenitors/descendants of a galaxy population at another redshift z_0. These probabilities are used as weights for calculating distributions of physical progenitor/descendant properties such as stellar mass, star formation rate or velocity dispersion. We demonstrate that this probabilistic method provides more accurate predictions for the evolution of physical properties than the assumption of either a constant number density or an evolving number density in a bin of fixed width by comparing predictions against galaxy populations directly tracked through a cosmological simulation. We find that the constant number density method performs least well at recovering galaxy properties, the evolving number density method slightly better, and the probabilistic method best of all. The improvement is present for predictions of stellar mass as well as inferred quantities such as star formation rate and velocity dispersion. We demonstrate that this method can also be applied robustly and easily to observational data, and provide a code package for doing so.
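A toy version of the probabilistic linking idea: weight candidate galaxies by a Gaussian in log cumulative number density whose drift and spread are assumed placeholders (the paper calibrates these functions against simulations, and its released code package differs from this sketch):

```python
# Sketch: probability-weighted progenitor properties in number-density space.
import numpy as np

rng = np.random.default_rng(5)
log_n0 = -3.0                          # descendant population at z_0 (dex)
drift, spread = 0.2, 0.3               # assumed evolution to z_f (dex)

log_n_f = rng.uniform(-4.5, -1.5, 10_000)    # candidate galaxies at z_f
log_mass = 11.0 - 1.5 * (log_n_f + 3.0)      # toy mass vs number-density relation

# Gaussian weights in log cumulative number density.
w = np.exp(-0.5 * ((log_n_f - (log_n0 + drift)) / spread) ** 2)
print("weighted mean log stellar mass:", np.average(log_mass, weights=w).round(3))
```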
Radiative transition of hydrogen-like ions in quantum plasma
NASA Astrophysics Data System (ADS)
Hu, Hongwei; Chen, Zhanbin; Chen, Wencong
2016-12-01
At fusion plasma electron temperatures and number densities of 1 × 10^3 to 1 × 10^7 K and 1 × 10^28 to 1 × 10^31 m^-3, respectively, the excited states and radiative transitions of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable for describing fusion plasma than the Debye screening model. The relativistic correction to bound-state energies of low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude within the same number density regime.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations shows the accuracy of the PDF method.
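A Monte Carlo cross-check sketch of the setting: a damped generator-like state driven by time-correlated (Ornstein-Uhlenbeck) power input, with the state PDF estimated empirically. The paper instead derives and solves a closed-form PDE for this density; all dynamics and parameters here are assumed for illustration:

```python
# Sketch: Euler-Maruyama simulation of a state driven by OU-correlated input.
import numpy as np

rng = np.random.default_rng(7)
n, steps, dt = 5000, 2000, 1e-3
tau, sigma = 0.5, 0.3                  # OU correlation time and strength, assumed
omega = np.zeros(n)                    # generator state (e.g., frequency deviation)
xi = np.zeros(n)                       # time-correlated power fluctuation
for _ in range(steps):
    xi += (-xi / tau) * dt + sigma * np.sqrt(2 * dt / tau) * rng.normal(size=n)
    omega += (-0.8 * omega + xi) * dt  # damped linear response, assumed
hist, edges = np.histogram(omega, bins=50, density=True)
print("empirical PDF mode near:", edges[hist.argmax()].round(4))
```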
Hata, Masahiro; Tanaka, Toshihisa; Kazui, Hiroaki; Ishii, Ryouhei; Canuet, Leonides; Pascual-Marqui, Roberto D; Aoki, Yasunori; Ikeda, Shunichiro; Sato, Shunsuke; Suzuki, Yukiko; Kanemoto, Hideki; Yoshiyama, Kenji; Iwase, Masao
2017-09-01
Recently, cerebrospinal fluid (CSF) biomarkers related to Alzheimer's disease (AD) have garnered a lot of clinical attention. To explore neurophysiological traits of AD and parameters for its clinical diagnosis, we examined the association between CSF biomarkers and electroencephalography (EEG) parameters in 14 probable AD patients. Using exact low-resolution electromagnetic tomography (eLORETA), artifact-free 40-second EEG data were used to estimate current source density (CSD) and lagged phase synchronization (LPS) as the EEG parameters. Correlations between CSF biomarkers and the EEG parameters were assessed. Patients with AD showed a significant negative correlation between CSF beta-amyloid (Aβ)-42 concentration and the logarithms of CSD over the right temporal area in the theta band. Total tau concentration was negatively correlated with the LPS between the left frontal eye field and the right auditory area in the alpha-2 band in patients with AD. Our study results suggest that AD biomarkers, in particular CSF Aβ42 and total tau concentrations, are associated with the EEG parameters CSD and LPS, respectively. Our results could yield more insights into the complicated pathology of AD.
Epidemics in interconnected small-world networks.
Liu, Meng; Li, Daqing; Qin, Pengju; Liu, Chaoran; Wang, Huijuan; Wang, Feilong
2015-01-01
Networks can be used to describe the interconnections among individuals, which play an important role in the spread of disease. Although the small-world effect has been found to have a significant impact on epidemics in single networks, the small-world effect on epidemics in interconnected networks has rarely been considered. Here, we study the susceptible-infected-susceptible (SIS) model of epidemic spreading in a system comprising two interconnected small-world networks. We find that the epidemic threshold in such networks decreases when the rewiring probability of the component small-world networks increases. When the infection rate is low, the rewiring probability affects the global steady-state infection density, whereas when the infection rate is high, the infection density is insensitive to the rewiring probability. Moreover, epidemics in interconnected small-world networks are found to spread at different velocities that depend on the rewiring probability.
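A minimal SIS simulation on two sparsely interconnected Watts-Strogatz small-world networks, exposing the rewiring probability studied above; the coupling density and the infection/recovery rates are illustrative choices, not the paper's parameters:

```python
# Sketch: SIS dynamics on two interconnected small-world networks.
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
p_rewire = 0.1                                    # the knob studied in the paper
g1 = nx.watts_strogatz_graph(500, 4, p_rewire, seed=1)
g2 = nx.watts_strogatz_graph(500, 4, p_rewire, seed=2)
g = nx.disjoint_union(g1, g2)                     # nodes 0-499 and 500-999
for i in range(0, 500, 10):                       # sparse one-to-one interlinks
    g.add_edge(i, 500 + i)

beta, mu = 0.2, 0.1                               # infection and recovery rates
infected = set(rng.choice(1000, 10, replace=False))
for _ in range(200):
    new = set(infected)
    for node in infected:
        for nb in g.neighbors(node):              # transmit along each edge
            if rng.random() < beta:
                new.add(nb)
        if rng.random() < mu:                     # recover back to susceptible
            new.discard(node)
    infected = new
print("steady-state infection density:", len(infected) / 1000)
```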
Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates
Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.
2008-01-01
Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
Hofe, Carolyn R.; Feng, Limin; Zephyr, Dominique; Stromberg, Arnold J.; Hennig, Bernhard; Gaetke, Lisa M.
2014-01-01
Type 2 diabetes has been shown to occur in response to environmental and genetic influences, among them nutrition, food intake patterns, sedentary lifestyle, body mass index (BMI), and exposure to persistent organic pollutants (POPs), such as polychlorinated biphenyls (PCBs). Nutrition is essential in the prevention and management of type 2 diabetes and has been shown to modulate the toxicity of PCBs. Serum carotenoid concentrations, considered a reliable biomarker of fruit and vegetable intake, are associated with the reduced probability of chronic diseases, such as type 2 diabetes and cardiovascular disease. Our hypothesis is that fruit and vegetable intake, reflected by serum carotenoid concentrations, is associated with the reduced probability of developing type 2 diabetes in US adults with elevated serum concentrations of PCBs 118, 126, and 153. This cross-sectional study utilized the CDC database, National Health and Nutrition Examination Survey (NHANES) 2003–2004 in logistic regression analyses. Overall prevalence of type 2 diabetes was approximately 11.6% depending on the specific PCB. All three PCBs were positively associated with the probability of type 2 diabetes. For participants at higher PCB percentiles (e.g., 75th and 90th) for PCB 118 and 126, increasing serum carotenoid concentrations were associated with a smaller probability of type 2 diabetes. Fruit and vegetable intake, as reflected by serum carotenoid concentrations, predicted notably reduced probability of dioxin-like PCB-associated risk for type 2 diabetes. PMID:24774064
Use of a priori statistics to minimize acquisition time for RFI immune spread spectrum systems
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Woo, K. T.
1978-01-01
The optimum acquisition sweep strategy was determined for a PN code despreader when the a priori probability density function was not uniform. A pseudo-noise spread spectrum system was considered which could be utilized in the DSN to combat radio frequency interference. In a sample case, when the a priori probability density function was Gaussian, the acquisition time was reduced by about 41% compared to a uniform sweep approach.
RADC Multi-Dimensional Signal-Processing Research Program.
1980-09-30
Contents include: problem formulation; methods of accelerating convergence; application to image deblurring; extensions; convergence of iterative signal restoration. Noise-driven linear filters permit development of the joint probability density function, or likelihood function, for the image: the image is modeled as the output of a spatial linear filter driven by white noise (see figure: model for image formation), and if the probability density function for the white noise is known, the likelihood function for the image follows.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
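A toy version of the inversion loop: tune the (log-)rates of a two-state open/closed channel so that the model's stationary state density matches a target density, using a least-squares cost. The paper's forward model is a PDE system rather than this closed-form stationary distribution, so this is only a conceptual stand-in; note that the two rates are identified only up to their ratio, echoing the identifiability question discussed above:

```python
# Sketch: fit Markov-model rates by matching state probability densities.
import numpy as np
from scipy.optimize import minimize

true_open_frac = 0.3
target_pdf = np.array([1 - true_open_frac, true_open_frac])  # "data" density

def forward(log_rates):
    k_open, k_close = np.exp(log_rates)        # positivity via log-rates
    p_open = k_open / (k_open + k_close)       # stationary open probability
    return np.array([1 - p_open, p_open])

def cost(log_rates):
    return np.sum((forward(log_rates) - target_pdf) ** 2)

res = minimize(cost, x0=[0.0, 0.0])
print(forward(res.x).round(3))                 # ~[0.7, 0.3]
```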
Energetics and Birth Rates of Supernova Remnants in the Large Magellanic Cloud
NASA Astrophysics Data System (ADS)
Leahy, D. A.
2017-03-01
Published X-ray emission properties for a sample of 50 supernova remnants (SNRs) in the Large Magellanic Cloud (LMC) are used as input for SNR evolution modeling calculations. The forward shock emission is modeled to obtain the initial explosion energy, age, and circumstellar medium density for each SNR in the sample. The resulting age distribution yields an SNR birth rate of 1/(500 yr) for the LMC. The explosion energy distribution is well fit by a log-normal distribution, with a most-probable explosion energy of 0.5 × 10^51 erg and a 1σ dispersion of a factor of 3 in energy. The circumstellar medium density distribution is broader than the explosion energy distribution, with a most-probable density of ~0.1 cm^-3. The shape of the density distribution can be fit with a log-normal distribution, with incompleteness at high density caused by the shorter evolution times of SNRs.
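Fitting a log-normal of the kind reported amounts to taking the mean and standard deviation of the logarithms of the energies. The snippet below does this for synthetic values (not the paper's 50-SNR sample); exp(mu) gives the central (median) energy and exp(sigma) the multiplicative 1σ dispersion factor.

```python
import numpy as np

# Synthetic stand-in for an SNR explosion-energy sample, in units of
# 10^51 erg; parameters are chosen to resemble, not reproduce, the paper.
rng = np.random.default_rng(1)
E = rng.lognormal(mean=np.log(0.5), sigma=np.log(3.0), size=50)

log_E = np.log(E)
mu, sigma = log_E.mean(), log_E.std(ddof=1)
print(f"median energy      ~ {np.exp(mu):.2f} x 10^51 erg")
print(f"1-sigma dispersion ~ factor of {np.exp(sigma):.1f}")
```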
Growth and wood/bark properties of Abies faxoniana seedlings as affected by elevated CO2.
Qiao, Yun-Zhou; Zhang, Yuan-Bin; Wang, Kai-Yun; Wang, Qian; Tian, Qi-Zhuo
2008-03-01
Growth and wood and bark properties of Abies faxoniana seedlings after one year's exposure to elevated CO2 concentration (ambient + 350 ± 25 micromol/mol) under two planting densities (28 or 84 plants/m^2) were investigated in closed-top chambers. Tree height, stem diameter and cross-sectional area, and total biomass were enhanced under elevated CO2 concentration, and reduced under high planting density. Most traits of stem bark were improved under elevated CO2 concentration and reduced under high planting density. Stem wood production was significantly increased in volume under elevated CO2 concentration under both densities, whereas stem wood density decreased under elevated CO2 concentration and increased under high planting density. These results suggest that the response of stem wood and bark to elevated CO2 concentration is density dependent. This may be of great importance in a future CO2-enriched world in natural forests where plant density varies considerably. The results also show that the bark/wood ratios in diameter, stem cross-sectional area and dry weight are not proportionally affected by elevated CO2 concentration under the two contrasting planting densities. This indicates that the response magnitudes of stem bark and stem wood to elevated CO2 concentration differ, but their response directions are the same.
Predictions of malaria vector distribution in Belize based on multispectral satellite data.
Roberts, D R; Paris, J F; Manguin, S; Harbach, R E; Woodruff, R; Rejmankova, E; Polanco, J; Wullschleger, B; Legters, L J
1996-03-01
Use of multispectral satellite data to predict arthropod-borne disease trouble spots depends on a clear understanding of the environmental factors that determine the presence of disease vectors. A blind test of remote sensing-based predictions for the spatial distribution of a malaria vector, Anopheles pseudopunctipennis, was conducted as a follow-up to two years of studies on vector-environmental relationships in Belize. Four of eight sites that were predicted to be high probability locations for the presence of An. pseudopunctipennis were positive, and all low probability sites (0 of 12) were negative. The absence of An. pseudopunctipennis at four high probability locations probably reflects the low densities that seem to characterize field populations of this species, i.e., the population densities were below the threshold of our sampling effort. Another important malaria vector, An. darlingi, was also present at all high probability sites and absent at all low probability sites. Anopheles darlingi, like An. pseudopunctipennis, is a riverine species. Prior to these collections at ecologically defined locations, this species was last detected in Belize in 1946.
Natal and breeding philopatry in a black brant, Branta bernicla nigricans, metapopulation
Lindberg, Mark S.; Sedinger, James S.; Derksen, Dirk V.; Rockwell, Robert F.
1998-01-01
We estimated natal and breeding philopatry and dispersal probabilities for a metapopulation of Black Brant (Branta bernicla nigricans) based on observations of marked birds at six breeding colonies in Alaska, 1986–1994. Both adult females and males exhibited a high (>0.90) probability of philopatry to breeding colonies. The probability of natal philopatry was significantly higher for females than for males. Natal dispersal of males was recorded between every pair of colonies, whereas natal dispersal of females was observed between only half of the colony pairs. We suggest that female-biased philopatry was the result of the timing of pair formation and characteristics of the mating system of brant, rather than factors related to inbreeding avoidance or optimal discrepancy. The probability of natal philopatry of females increased with age but declined with year of banding. The age-related increase in natal philopatry was positively related to the higher breeding probability of older females. The decline in natal philopatry with year of banding coincided with a period of increasing population density; therefore, local population density may influence the probability of nonbreeding and gene flow among colonies.
NASA Astrophysics Data System (ADS)
Reese, Daniel; Ames, Alex; Noble, Chris; Oakley, Jason; Rothamer, Dave; Bonazza, Riccardo
2016-11-01
The present work investigates the evolution of the Richtmyer-Meshkov instability through simultaneous measurements of concentration and velocity. In the Wisconsin Shock Tube Laboratory at the University of Wisconsin, a broadband, shear-layer initial condition is created at the interface between helium and argon (Atwood number A = 0.7). The helium is seeded with acetone vapor for use in planar laser-induced fluorescence (PLIF), while each gas in the shear layer cross flow is seeded with particulate TiO2, which is used to track the flow and allow for the Mie scattering of light. Once impulsively accelerated by a M = 1.57 shock wave, the interface is imaged twice in close succession using a planar laser sheet containing both the second and fourth harmonic output (532 nm and 266 nm, respectively) of a dual-cavity Nd:YAG laser. Particle image pairs are captured on a dual-frame CCD camera, for use in particle image velocimetry (PIV), while PLIF images are corrected to show concentration. Velocity fields are obtained from particle images using the Insight 4G software package by TSI, and velocity field structure is investigated and compared against concentration images. Probability density functions (PDFs) and planar energy spectra (of both velocity fluctuations and concentration) are then calculated and results are discussed.
Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo
2018-01-01
Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity, and it is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proved to be a powerful tool for prognostic problems affected by uncertainties. However, most studies adopt the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided wave based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on a fatigue test of attachment lugs, which are important joint components in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
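The mixture-proposal idea can be sketched in a toy filter: each particle is drawn either from the transition prior or from a measurement-centered proposal, and the importance weight divides by the mixture density accordingly. Everything below (crack-growth model, noise levels, measurements) is an invented stand-in for illustration, not the authors' Gaussian weight-mixture algorithm or their lug-test data.

```python
import numpy as np

rng = np.random.default_rng(7)
n, alpha = 500, 0.5            # particle count; mixture weight on the prior
sig_q, sig_r = 0.05, 0.10      # process and measurement noise std devs
x = rng.normal(1.0, 0.1, n)    # initial crack-length particles
w = np.full(n, 1.0 / n)

def gauss(v, m, s):            # Gaussian density, used for the weights
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for y in [1.12, 1.21, 1.35]:   # synthetic crack-length measurements
    drift = x + 0.1            # deterministic growth step (toy model)
    from_prior = rng.random(n) < alpha
    x_new = np.where(from_prior,
                     rng.normal(drift, sig_q),   # transition proposal
                     rng.normal(y, sig_r))       # measurement proposal
    q = alpha * gauss(x_new, drift, sig_q) + (1 - alpha) * gauss(x_new, y, sig_r)
    w *= gauss(y, x_new, sig_r) * gauss(x_new, drift, sig_q) / q
    w /= w.sum()
    idx = rng.choice(n, n, p=w)                  # multinomial resampling
    x, w = x_new[idx], np.full(n, 1.0 / n)
    print(f"y = {y:.2f}  estimate = {x.mean():.3f}")
```

Because part of each proposal is centered on the measurement, particles are steered toward the observation before weighting, which is what mitigates the degeneracy of the plain transition-prior filter.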
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm is proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in mean square error. A numerical implementation technique for the discrete solution is discussed and examples are displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
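For reference, the kernel estimator with an automatically chosen scaling factor fits in a few lines. The sketch below uses a Gaussian kernel with Silverman's rule-of-thumb bandwidth, one standard automatic choice; it is not the interactive algorithm proposed in the report.

```python
import numpy as np

def kde(x_grid, sample, h=None):
    """Gaussian-kernel density estimate evaluated on x_grid."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    if h is None:  # Silverman's rule-of-thumb scaling factor
        h = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)
    u = (x_grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=200)
grid = np.linspace(-4.0, 4.0, 9)
print(np.round(kde(grid, data), 3))   # rough bell shape around zero
```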
Stochastic transport models for mixing in variable-density turbulence
NASA Astrophysics Data System (ADS)
Bakosi, J.; Ristorcelli, J. R.
2011-11-01
In variable-density (VD) turbulent mixing, where very-different-density materials coexist, the density fluctuations can be an order of magnitude larger than their mean. Density fluctuations are non-negligible in the inertia terms of the Navier-Stokes equation, which has both quadratic and cubic nonlinearities. Very different mixing rates of different materials give rise to large differential accelerations and some fundamentally new physics that is not seen in constant-density turbulence. In VD flows material mixing is active in a sense far stronger than that applied in the Boussinesq approximation of buoyantly-driven flows: the mass fraction fluctuations are coupled to each other and to the fluid momentum. Statistical modeling of VD mixing requires accounting for basic constraints that are not important in the small-density-fluctuation passive-scalar-mixing approximation: the unit-sum of mass fractions, bounded sample space, and the highly skewed nature of the probability densities become essential. We derive a transport equation for the joint probability of mass fractions, equivalent to a system of stochastic differential equations, that is consistent with VD mixing in multi-component turbulence and consistently reduces to passive scalar mixing in constant-density flows.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and to update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second is the construction of the likelihood function, which in general is predefined using a known pdf but here is constructed in a new and different manner, using the system model itself.
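The Bayes update described can be miniaturized as importance weighting of prior samples: draw tension values from the uniform prior, map each to a fundamental frequency through the forward model, and weight by the likelihood of the measured frequency. In the sketch below the voice-production model is replaced by an invented one-line surrogate (f0 = 100*sqrt(tension)) and all numbers are assumptions; only the weighting pattern reflects the paper's strategy.

```python
import numpy as np

rng = np.random.default_rng(3)
tension = rng.uniform(0.5, 2.0, size=20_000)   # samples from the uniform prior
f0_model = 100.0 * np.sqrt(tension)            # hypothetical surrogate model
f0_measured, sigma = 130.0, 5.0                # "experimental" frequency data

# Likelihood weights from the mismatch between model and measurement.
w = np.exp(-0.5 * ((f0_model - f0_measured) / sigma) ** 2)
w /= w.sum()

post_mean = np.sum(w * tension)
post_std = np.sqrt(np.sum(w * (tension - post_mean) ** 2))
print(f"posterior tension ~ {post_mean:.3f} +/- {post_std:.3f}")
```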
Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun
2013-01-01
Obesity is related to hyperlipidemia and risk of cardiovascular disease. The health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivorous men. Interaction between BMI and vegetarian status was tested in multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol to high-density lipoprotein ratio, triglycerides, and apolipoproteins B and A-I, as well as a lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and a higher predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases in atherogenic lipids/lipoproteins and in the probability of coronary heart disease.
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
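The underlying observation is easy to reproduce: histogramming samples of a sinusoid taken over a half cycle yields the characteristic arcsine-shaped PDF, with samples piling up near the waveform extreme. The sketch below builds such a histogram; bin count and sample count are arbitrary choices, and no modulation is performed.

```python
import numpy as np

t = np.linspace(0.0, np.pi, 10_000, endpoint=False)  # one half cycle
samples = np.sin(t)                                  # values in [0, 1]

# Sort samples by frequency of occurrence into a normalized histogram,
# i.e., an empirical PDF of the waveform over this interval.
hist, edges = np.histogram(samples, bins=10, range=(0.0, 1.0), density=True)
for lo, hi, p in zip(edges[:-1], edges[1:], hist):
    print(f"[{lo:.1f}, {hi:.1f}): {p:.2f}")   # density rises toward 1.0
```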
A partial differential equation for pseudocontact shift.
Charnock, G T P; Kuprov, Ilya
2014-10-07
It is demonstrated that pseudocontact shift (PCS), viewed as a scalar or a tensor field in three dimensions, obeys an elliptic partial differential equation with a source term that depends on the Hessian of the unpaired electron probability density. The equation enables straightforward PCS prediction and analysis in systems with delocalized unpaired electrons, particularly for the nuclei located in their immediate vicinity. It is also shown that the probability density of the unpaired electron may be extracted, using a regularization procedure, from PCS data.
Probability density cloud as a geometrical tool to describe statistics of scattered light.
Yaitskova, Natalia
2017-04-01
First-order statistics of scattered light are described using the representation of the probability density cloud, which visualizes the two-dimensional distribution of the complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of the phase. The moment-generating function for intensity is obtained in closed form through these parameters. An example of the exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.
NASA Astrophysics Data System (ADS)
Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping
2015-05-01
It is important to study the effects of pedestrian crossing behaviors on traffic flow in order to address urban traffic congestion. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that considers the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as vehicle density increases. When the vehicle braking probability is low, emergency braking is easily triggered, resulting in large fluctuations of the saturated flow; the saturated flow decreases slightly as the pedestrian acceleration crossing probability increases; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitation of pedestrians deciding whether to back up; and the maximum flow is sensitive to the pedestrian generation probability, decreasing rapidly as that probability increases and falling to approximately zero when it exceeds 0.5. The simulations show that frequent crossing behavior has an immense influence on vehicle flow: the vehicle flow decreases and rapidly enters a seriously congested state as the pedestrian generation probability increases.
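The baseline NaSch vehicle rules that the proposed model extends are compact: per time step, every vehicle accelerates by one, brakes to its headway, randomly slows with probability p_brake, and moves, all updated in parallel. The sketch below implements only this baseline on a ring road, without the paper's pedestrian-conflict rules; all parameters are illustrative.

```python
import numpy as np

def nasch_step(pos, vel, road_len, v_max=5, p_brake=0.3, rng=None):
    """One parallel NaSch update for vehicles on a ring of road_len cells."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len   # empty cells ahead
    vel = np.minimum(vel + 1, v_max)                 # 1) accelerate
    vel = np.minimum(vel, gaps)                      # 2) brake to headway
    vel[(rng.random(vel.size) < p_brake) & (vel > 0)] -= 1  # 3) random slowdown
    pos = (pos + vel) % road_len                     # 4) move
    return pos, vel

rng = np.random.default_rng(42)
road_len, n_cars = 100, 20
pos = np.sort(rng.choice(road_len, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for _ in range(50):
    pos, vel = nasch_step(pos, vel, road_len, rng=rng)
print("mean speed after 50 steps:", vel.mean())
```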
The role of demographic compensation theory in incidental take assessments for endangered species
McGowan, Conor P.; Ryan, Mark R.; Runge, Michael C.; Millspaugh, Joshua J.; Cochrane, Jean Fitts
2011-01-01
Many endangered species laws provide exceptions to legislated prohibitions through incidental take provisions as long as take is the result of unintended consequences of an otherwise legal activity. These allowances presumably invoke the theory of demographic compensation, commonly applied to harvested species, by allowing limited harm as long as the probability of the species' survival or recovery is not reduced appreciably. Demographic compensation requires some density-dependent limits on survival or reproduction in a species' annual cycle that can be alleviated through incidental take. Using a population model for piping plovers in the Great Plains, we found that when the population is in rapid decline or when there is no density dependence, the probability of quasi-extinction increased linearly with increasing take. However, when the population is near stability and subject to density-dependent survival, there was no relationship between quasi-extinction probability and take rates. We note, however, that a brief examination of piping plover demography and annual cycles suggests little room for compensatory capacity. We argue that a population's capacity for demographic compensation of incidental take should be evaluated when considering incidental take allowances because compensation is the only mechanism whereby a population can absorb the negative effects of take without incurring a reduction in the probability of survival in the wild. For many endangered species, little is probably known about density dependence and compensatory capacity. Under these circumstances, using multiple system models (with and without compensation) to predict the population's response to incidental take and implementing follow-up monitoring to assess species response may be valuable in increasing knowledge and improving future decision making.
Li, Fei; Huang, Jinhui; Zeng, Guangming; Huang, Xiaolong; Liu, Wenchu; Wu, Haipeng; Yuan, Yujie; He, Xiaoxiao; Lai, Mingyong
2015-05-01
Spatial characteristics of the properties (dust organic material and pH), concentrations, and enrichment levels of toxic metals (Ni, Hg, Mn and As) in street dust from Xiandao District (Middle China) were investigated. A method incorporating receptor population density into noncarcinogenic health risk assessment, based on the local land use map and geostatistics, was developed to identify priority pollutants and regions of concern. Mean enrichment factors of the studied metals decreased in the order Hg ≈ As > Mn > Ni. For noncarcinogenic effects, the exposure pathway resulting in the highest exposure for children and adults was ingestion, except for Hg (inhalation of vapors), followed by dermal contact and inhalation. Hazard indexes (HIs) for As, Hg, Mn, and Ni for children and adults followed the order As > Hg > Mn > Ni. The mean HI for As exceeded the safe level (1) for children, and the maximum HI for Hg (0.99) came closest to the safe level. Priority regions of concern were identified within region A at every residential population density, in the parts of region B with high and moderate residential population density for As, and in the high-residential-density area within region A for Hg. The developed method proved useful because it improves on previous work by making the priority areas of environmental management spatially hierarchical, thus reducing the probability of excessive environmental management.
Oregon Cascades Play Fairway Analysis: Faults and Heat Flow maps
Adam Brandt
2015-11-15
This submission includes a fault map of the Oregon Cascades and backarc, a probability map of heat flow, and a fault density probability layer. More extensive metadata can be found within each zip file.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form c x^k (1-x)^m; the MMSE estimates are shown to be linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
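The linearity follows from beta-binomial conjugacy; a standard derivation consistent with the abstract (not quoted from the paper) is:

```latex
p(x) \propto x^{\alpha-1}(1-x)^{\beta-1}, \qquad
p(k \mid x) \propto x^{k}(1-x)^{n-k}
\;\Longrightarrow\;
x \mid k \sim \mathrm{Beta}(\alpha+k,\; \beta+n-k),
\qquad
\hat{x}_{\mathrm{MMSE}} = \mathbb{E}[x \mid k] = \frac{\alpha+k}{\alpha+\beta+n}.
```

The posterior mean, which is the MMSE estimate, is affine in the observed count k, hence "linear" in the observations.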
Information Density and Syntactic Repetition.
Temperley, David; Gildea, Daniel
2015-11-01
In noun phrase (NP) coordinate constructions (e.g., NP and NP), there is a strong tendency for the syntactic structure of the second conjunct to match that of the first; the second conjunct in such constructions is therefore low in syntactic information. The theory of uniform information density predicts that low-information syntactic constructions will be counterbalanced by high information in other aspects of that part of the sentence, and high-information constructions will be counterbalanced by other low-information components. Three predictions follow: (a) lexical probabilities (measured by N-gram probabilities and head-dependent probabilities) will be lower in second conjuncts than first conjuncts; (b) lexical probabilities will be lower in matching second conjuncts (those whose syntactic expansions match the first conjunct) than nonmatching ones; and (c) syntactic repetition should be especially common for low-frequency NP expansions. Corpus analysis provides support for all three of these predictions. Copyright © 2015 Cognitive Science Society, Inc.
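The lexical probabilities in prediction (a) are ordinary N-gram conditional probabilities estimated from counts. A toy example follows (the corpus and words are made up; the study used large parsed corpora):

```python
from collections import Counter

corpus = "the cat and the dog and the cat sat".split()
bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus[:-1])              # counts of left contexts

def p_bigram(w_prev, w):
    """Maximum-likelihood estimate of P(w | w_prev)."""
    return bigrams[(w_prev, w)] / unigrams[w_prev]

print(p_bigram("the", "cat"))   # 2/3: "the" is followed by "cat" in 2 of 3 uses
print(p_bigram("and", "the"))   # 1.0: "and" is always followed by "the" here
```

The information content of a word in context is then -log2 of such a probability, the quantity that uniform information density trades off against syntactic predictability.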
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastner, S.O.; Bhatia, A.K.
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284–500 Å, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to "taboo" probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
Effect of particle surface area on ice active site densities retrieved from droplet freezing spectra
NASA Astrophysics Data System (ADS)
Beydoun, Hassan; Polen, Michael; Sullivan, Ryan C.
2016-10-01
Heterogeneous ice nucleation remains one of the outstanding problems in cloud physics and atmospheric science. Experimental challenges in properly simulating particle-induced freezing processes under atmospherically relevant conditions have largely contributed to the absence of a well-established parameterization of immersion freezing properties. Here, we formulate an ice active, surface-site-based stochastic model of heterogeneous freezing with the unique feature of invoking a continuum assumption on the ice nucleating activity (contact angle) of an aerosol particle's surface that requires no assumptions about the size or number of active sites. The result is a particle-specific property g that defines a distribution of local ice nucleation rates. Upon integration, this yields a full freezing probability function for an ice nucleating particle. Current cold plate droplet freezing measurements provide a valuable and inexpensive resource for studying the freezing properties of many atmospheric aerosol systems. We apply our g framework to explain the observed dependence of the freezing temperature of droplets in a cold plate on the concentration of the particle species investigated. Normalizing to the total particle mass or surface area present to derive the commonly used ice nuclei active surface (INAS) density (ns) often cannot account for the effects of particle concentration, yet concentration is typically varied to span a wider measurable freezing temperature range. A method based on determining what is denoted an ice nucleating species' specific critical surface area is presented and explains the concentration dependence as a result of increasing the variability in ice nucleating active sites between droplets. By applying this method to experimental droplet freezing data from four different systems, we demonstrate its ability to interpret immersion freezing temperature spectra of droplets containing variable particle concentrations. It is shown that general active site density functions, such as the popular ns parameterization, cannot be reliably extrapolated below this critical surface area threshold to describe freezing curves for lower particle surface area concentrations. Freezing curves obtained below this threshold translate to higher ns values, while the ns values are essentially the same from curves obtained above the critical area threshold; ns should remain the same for a system as concentration is varied. However, we can successfully predict the lower concentration freezing curves, which are more atmospherically relevant, through a process of random sampling from g distributions obtained from high particle concentration data. Our analysis is applied to cold plate freezing measurements of droplets containing variable concentrations of particles from NX illite minerals, MCC cellulose, and commercial Snomax bacterial particles. Parameterizations that can predict the temporal evolution of the frozen fraction of cloud droplets in larger atmospheric models are also derived from this new framework.
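For context, the ns normalization discussed here is computed directly from a droplet-freezing spectrum as ns(T) = -ln(1 - f(T))/A, where f is the frozen fraction at temperature T and A the particle surface area per droplet. The snippet below applies that standard formula to synthetic values (the assumed area is arbitrary); the paper's point is precisely that such ns curves cannot be extrapolated below the critical surface area.

```python
import numpy as np

T = np.array([-12.0, -15.0, -18.0, -21.0])        # temperature, deg C
frozen_frac = np.array([0.05, 0.25, 0.60, 0.90])  # synthetic frozen fractions
area_per_droplet = 1e-5                           # assumed cm^2 per droplet

ns = -np.log(1.0 - frozen_frac) / area_per_droplet
for t, n in zip(T, ns):
    print(f"{t:6.1f} C   ns ~ {n:10.1f} per cm^2")
```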
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
The risks and returns of stock investment in a financial market
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2013-03-01
The risks and returns of stock investment are discussed by numerically simulating the mean escape time and the probability density function of stock price returns in the modified Heston model with time delay. Through analyzing the effects of delay time and initial position on the risks and returns of stock investment, the results indicate that: (i) there is an optimal delay time that minimizes the risk of stock investment, maximizes the average stock price returns and gives the strongest stability of stock price returns for strong elasticity of demand of stocks (EDS), with the opposite results for weak EDS; (ii) increasing the initial position reduces the risk of stock investment, strengthens the average stock price returns and enhances the stability of stock price returns. Finally, the probability density function of stock price returns, the probability density function of volatility and the correlation function of stock price returns are compared with results in the literature, and good agreement is found.
NASA Astrophysics Data System (ADS)
Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.
2018-07-01
The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction Jij and random magnetic field hi) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed-pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
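The two-class simplification rests on a classical fact: two symmetric positive-definite covariance matrices can be diagonalized by a single transformation, obtained from the generalized eigenproblem C1 w = lambda C2 w. A minimal numerical check with arbitrary example matrices:

```python
import numpy as np
from scipy.linalg import eigh

C1 = np.array([[4.0, 1.0], [1.0, 3.0]])  # class-1 covariance (example)
C2 = np.array([[2.0, 0.5], [0.5, 1.0]])  # class-2 covariance (example)

eigvals, W = eigh(C1, C2)                # solves C1 w = lambda C2 w
print(np.round(W.T @ C1 @ W, 6))         # diagonal matrix of eigenvalues
print(np.round(W.T @ C2 @ W, 6))         # identity matrix
```

In the transformed coordinates both class densities have diagonal covariances, which is what reduces the computation.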
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationary assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationary assumption were developed.
NASA Astrophysics Data System (ADS)
Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew
2009-03-01
In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral function of the square root PDF along the space direction, which leads to a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose the fault in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.
A comparative study of nonparametric methods for pattern recognition
NASA Technical Reports Server (NTRS)
Hahn, S. F.; Nelson, G. D.
1972-01-01
The applied research discussed in this report determines and compares the correct classification percentages of the nonparametric sign test, Wilcoxon's signed rank test, and the K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier, which assumes the data to be Gaussian even when they are not. The K-class classifier has the advantage over the Bayes classifier in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment were always unimodal.
Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier- Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
Wagner, Tyler; Deweber, Jefferson T.; Detar, Jason; Kristine, David; Sweka, John A.
2014-01-01
Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
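For intuition about removal sampling: with constant capture probability p and initial abundance N, the expected catch on pass i is N p (1-p)^(i-1). The sketch below recovers N and p from synthetic three-pass catches by a brute-force grid search over a Poisson approximation to the removal likelihood; it is a toy stand-in for the hierarchical Bayesian model used in the study.

```python
import numpy as np

catches = np.array([120, 48, 19])   # synthetic pass-by-pass removals
best = None
for N in range(int(catches.sum()), 400):
    for p in np.linspace(0.05, 0.95, 181):
        lam = N * p * (1 - p) ** np.arange(3)   # expected catch per pass
        loglik = (catches * np.log(lam)).sum() - lam.sum()  # Poisson log-lik.
        if best is None or loglik > best[0]:
            best = (loglik, N, p)
print(f"N_hat = {best[1]}, p_hat = {best[2]:.2f}")
```

With these numbers the search lands near N = 200 and p = 0.6, and the first-pass catch alone (120 ≈ N p) already carries most of that information, which is the logic behind the single-pass CPUE index.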
NASA Technical Reports Server (NTRS)
Garber, Donald P.
1993-01-01
A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
NASA Astrophysics Data System (ADS)
Stephanik, Brian Michael
This dissertation describes the results of two related investigations into introductory student understanding of ideas from classical physics that are key elements of quantum mechanics. One investigation probes the extent to which students are able to interpret and apply potential energy diagrams (i.e., graphs of potential energy versus position). The other probes the extent to which students are able to reason classically about probability and spatial probability density. The results of these investigations revealed significant conceptual and reasoning difficulties that students encounter with these topics. The findings guided the design of instructional materials to address the major problems. Results from post-instructional assessments are presented that illustrate the impact of the curricula on student learning.
Analytical approach to an integrate-and-fire model with spike-triggered adaptation
NASA Astrophysics Data System (ADS)
Schwalger, Tilo; Lindner, Benjamin
2015-12-01
The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
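These predictions can be checked by direct simulation of a generic adaptive leaky integrate-and-fire neuron (Euler-Maruyama discretization). All parameter values below are arbitrary illustrations, not the authors' weak-noise theory:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 200_000        # time step (ms) and step count
mu, D = 1.5, 0.01                  # constant drive and noise intensity
tau_a, delta_a = 100.0, 0.3        # adaptation time scale and spike kick

v, a = 0.0, 0.0
vs = []
for _ in range(n_steps):
    v += dt * (mu - v - a) + np.sqrt(2 * D * dt) * rng.standard_normal()
    a += dt * (-a / tau_a)         # adaptation decays between spikes
    if v >= 1.0:                   # threshold crossing: spike and reset
        v = 0.0
        a += delta_a               # spike-triggered adaptation increment
    vs.append(v)

# Empirical membrane-potential density between reset (0) and threshold (1).
hist, edges = np.histogram(vs, bins=20, range=(-0.5, 1.0), density=True)
print(np.round(hist, 2))
```

Increasing delta_a and shortening tau_a in this toy model shifts probability mass toward hyperpolarized values, in line with effect (ii).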
Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target
Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji
2009-01-01
In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN) in which the capability of each sensor is relatively limited, so that large-scale WSNs can be constructed at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. Based on these results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
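A common closed-form starting point for such guidelines (a standard Poisson coverage argument, not necessarily the approximate expression derived in the paper) treats deployment as a 2-D Poisson process with density lam: the probability that at least k sensors cover the target is then a Poisson tail probability in the mean coverage count lam*pi*r^2.

```python
import math

def p_tracked(lam, r, k):
    """P(at least k sensors within sensing radius r of the target)."""
    m = lam * math.pi * r ** 2   # mean number of sensors covering a point
    return 1.0 - sum(math.exp(-m) * m ** i / math.factorial(i) for i in range(k))

for lam in (0.01, 0.02, 0.05):   # sensors per unit area (illustrative)
    print(lam, round(p_tracked(lam, r=10.0, k=3), 3))
```

Inverting this relation for a target tracking probability gives a minimum sensor density, mirroring the kind of guideline the paper tabulates.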
NASA Astrophysics Data System (ADS)
Atlabachew, Abunu; Shu, Longcang; Wu, Peipeng; Zhang, Yongjie; Xu, Yang
2018-03-01
This laboratory study improves the understanding of the impacts of horizontal hydraulic gradient, artificial recharge, and groundwater pumping on solute transport through aquifers. Nine experiments and numerical simulations were carried out using a sand tank. The variable-density groundwater flow and sodium chloride transport were simulated using the three-dimensional numerical model SEAWAT. Numerical modelling results successfully reproduced heads and concentrations observed in the sand tank. A higher horizontal hydraulic gradient enhanced the migration of sodium chloride, particularly in the groundwater flow direction. The application of constant artificial recharge increased the spread of the sodium chloride plume in both the longitudinal and lateral directions. In addition, groundwater pumping accelerated spreading of the sodium chloride plume towards the pumping well. Both higher hydraulic gradient and pumping rate generated oval-shaped plumes in the horizontal plane. However, the artificial recharge process produced stretched plumes. These effects of artificial recharge and groundwater pumping were greater under higher hydraulic gradient. The concentration breakthrough curves indicated that emerging solutions never attained the concentration of the originally injected solution. This is probably because of sorption of sodium chloride onto the silica sand and/or the exchange of sodium chloride between the mobile and immobile liquid domains. The fingering and protruding plume shapes in the numerical models constitute instability zones produced by buoyancy-driven flow. Overall, the results have substantiated the influences of hydraulic gradient, boundary condition, artificial recharge, pumping rate and density differences on solute transport through a homogeneous unconfined aquifer. The implications of these findings are important for managing liquid wastes.
NASA Astrophysics Data System (ADS)
Cunha, Davi Gasparini Fernandes; Benassi, Simone Frederigi; de Falco, Patrícia Bortoletto; do Carmo Calijuri, Maria
2016-03-01
Artificial reservoirs have been used for drinking water supply, other human activities, flood control and pollution abatement worldwide, providing overall benefits to downstream water quality. Most reservoirs in Brazil were built during the 1970s, but their long-term patterns of trophic status, water chemistry, and nutrient removal are still not very well characterized. We aimed to evaluate water quality time series (1985-2010) data from the riverine and lacustrine zones of the transboundary Itaipu Reservoir (Brazil/Paraguay). We examined total phosphorus and nitrogen, chlorophyll a concentrations, water transparency, and phytoplankton density to look for spatial and temporal trends and correlations with trophic state evolution and nutrient retention. There was significant temporal and spatial water quality variation (P < 0.01, ANCOVA). The results indicated that the water quality and structure of the reservoir were mainly affected by one internal force (hydrodynamics) and one external force (upstream cascading reservoirs). Nutrient and chlorophyll a concentrations tended to be lower in the lacustrine zone and decreased over the 25-year timeframe. Reservoir operational features seemed to be limiting primary production and phytoplankton development, which exhibited a maximum density of 6050 org/mL. The relatively small nutrient concentrations in the riverine zone were probably related to the effect of the cascade reservoirs upstream of Itaipu and led to relatively low removal percentages. Our study suggested that water quality problems may be more pronounced immediately after the filling phase of the artificial reservoirs, associated with the initial decomposition of drowned vegetation at the very beginning of reservoir operation.
Liu, Xingmei; Wu, Jianjun; Xu, Jianming
2006-05-01
For many practical problems in environmental management, information about soil heavy metals relative to threshold values of practical importance is needed at unsampled sites. The Hangzhou-Jiaxing-Huzhou (HJH) Plain has long been one of the most important rice production areas in Zhejiang province, China, and soil heavy metal concentrations are directly related to crop quality and ultimately to human health. Four hundred and fifty topsoil samples were collected in the HJH Plain to characterize the spatial variability of Cu, Zn, Pb, Cr and Cd. Ordinary kriging and lognormal kriging were carried out to map the spatial patterns of heavy metals, and disjunctive kriging was used to quantify the probability of heavy metal concentrations exceeding their guide values. Cokriging was used to minimize the sampling density for Cu, Zn and Cr. The results of this study could give insight into risk assessment of environmental pollution and decision-making for agriculture.
Dense colloidal mixtures in an external sinusoidal potential
NASA Astrophysics Data System (ADS)
Capellmann, R. F.; Khisameeva, A.; Platten, F.; Egelhaaf, S. U.
2018-03-01
Concentrated binary colloidal mixtures containing particles with a size ratio 1:2.4 were exposed to a periodic potential that was realized using a light field, namely, two crossed laser beams creating a fringe pattern. The arrangement of the particles was recorded using optical microscopy and characterized in terms of the pair distribution function along the minima, the occupation probability perpendicular to the minima, the angular bond distribution, and the average potential energy per particle. The particle arrangement was investigated in dependence of the importance of particle-potential and particle-particle interactions by changing the potential amplitude and particle concentration, respectively. An increase in the potential amplitude leads to a stronger localization, especially of the large particles, but also results in an increasing fraction of small particles being located closer to the potential maxima, which also occurs upon increasing the particle density. Furthermore, increasing the potential amplitude induces a local demixing of the two particle species, whereas an increase in the total packing fraction favors a more homogeneous arrangement.
NASA Astrophysics Data System (ADS)
Carbone, F.; Bruno, A. G.; Naccarato, A.; De Simone, F.; Gencarelli, C. N.; Sprovieri, F.; Hedgecock, I. M.; Landis, M. S.; Skov, H.; Pfaffhuber, K. A.; Read, K. A.; Martin, L.; Angot, H.; Dommergue, A.; Magand, O.; Pirrone, N.
2018-01-01
The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg0 concentration data series from different latitudes has been investigated. The Hg0 dynamics possess an autocorrelation function with long-term memory. Above a fixed threshold Q in the data, the PDFs of the interoccurrence times of the Hg0 data are well described by a Tsallis q-exponential function. This PDF behavior has been explained in the framework of superstatistics, where the competition between multiple mesoscopic processes affects the macroscopic dynamics. An extensive parameter μ, encompassing all possible fluctuations related to mesoscopic phenomena, has been identified. It follows a χ2 distribution, indicative of the superstatistical nature of the overall process. Shuffling the data series destroys the long-term memory, the distributions become independent of Q, and the PDFs collapse onto the same exponential distribution. The possible central role of atmospheric turbulence in extreme events in the Hg0 data is highlighted.
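The Tsallis q-exponential referred to is e_q(x) = [1 + (1-q)x]^(1/(1-q)), which reduces to the ordinary exponential as q -> 1 and gives power-law tails for q > 1. A brief numerical comparison (the q value is arbitrary, not a fit to the Hg0 data):

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential; equals exp(x) in the limit q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)  # cutoff where the base < 0
    return base ** (1.0 / (1.0 - q))

x = np.linspace(0.0, 5.0, 6)
print(np.round(q_exp(-x, 1.3), 4))  # q > 1: heavy-tailed decay
print(np.round(np.exp(-x), 4))      # ordinary exponential for comparison
```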
Modeling of Yb3+/Er3+-codoped microring resonators
NASA Astrophysics Data System (ADS)
Vallés, Juan A.; Gălătuş, Ramona
2015-03-01
The performance of a highly Yb3+/Er3+-codoped phosphate glass add-drop microring resonator is numerically analyzed. The model assumes resonant behaviour of both pump and signal powers, and the build-up of pump intensity inside the microring resonator and the signal transfer functions to the through and drop ports of the device are evaluated. Detailed equations for the evolution of the population densities of the rare-earth ion levels and for the propagation of the optical powers inside the microring resonator are included in the model. Moreover, because of the high dopant concentrations considered, the microscopic statistical formalism based on the statistical average of the excitation probability of the Er3+ ion at a microscopic level has been used to describe inter-atomic energy-transfer mechanisms. Realistic parameters and working conditions are used for the calculations. Requirements to achieve amplification and laser oscillation within these devices are obtained as a function of rare-earth ion concentration and coupling losses.
Interaction of cw CO2 laser radiation with plasma near-metallic substrate surface
NASA Astrophysics Data System (ADS)
Azharonok, V. V.; Astapchik, S. A.; Zabelin, Alexandre M.; Golubev, Vladimir S.; Golubev, V. S.; Grezev, A. N.; Filatov, Igor V.; Chubrik, N. I.; Shimanovich, V. D.
2000-07-01
Optical and spectroscopic methods were used to study the near-surface plasma that is formed under the effect of CW CO2 laser radiation with a power density of (2-5)x10^6 W/cm2 upon stainless steel in He and Ar shielding gases. The variation of the plume spatial structure with time has been studied, and the outflow of gas-vapor jets from the interaction area has been characterized. The spectra of plasma plume pulsations have been obtained for the frequency range Δf = 0-1 MHz. The temperature and electron concentration of the plasma plume have been determined for radiation acting upon the stainless steel target. Consideration has been given to the most probable mechanisms of the non-stationary interaction between CW laser radiation and metal.
The formation conditions of chondrules and chondrites
Alexander, C.M. O'D.; Grossman, Jeffrey N.; Ebel, D.S.; Ciesla, F.J.
2008-01-01
Chondrules, which are roughly millimeter-sized silicate-rich spherules, dominate the most primitive meteorites, the chondrites. They formed as molten droplets and, judging from their abundances in chondrites, are the products of one of the most energetic processes that operated in the early inner solar system. The conditions and mechanism of chondrule formation remain poorly understood. Here we show that the abundance of the volatile element sodium remained relatively constant during chondrule formation. Prevention of the evaporation of sodium requires that chondrules formed in regions with much higher solid densities than predicted by known nebular concentration mechanisms. These regions would probably have been self-gravitating. Our model explains many other chemical characteristics of chondrules and also implies that chondrule and planetesimal formation were linked.
Lin, Bing-Chen; Chen, Kuo-Ju; Wang, Chao-Hsun; Chiu, Ching-Hsueh; Lan, Yu-Pin; Lin, Chien-Chung; Lee, Po-Tsung; Shih, Min-Hsiung; Kuo, Yen-Kuang; Kuo, Hao-Chung
2014-01-13
A tapered AlGaN electron blocking layer (EBL) with step-graded aluminum composition in a nitride-based blue light-emitting diode (LED) is analyzed numerically and experimentally. The energy band diagrams, electrostatic fields, carrier concentrations, electron current density profiles, and hole transmission probability are investigated. The simulation results demonstrate that such a tapered structure can effectively enhance the hole injection efficiency as well as the electron confinement. Consequently, the LED with a tapered EBL grown by metal-organic chemical vapor deposition exhibits a reduced efficiency droop of 29%, compared with 44% for the original LED, reflecting the improved hole injection and suppressed electron overflow in our design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goretta, K.C.; Brandel, B.P.; Lanagan, M.T.
Nanophase TiO2 and Al2O3 powders were synthesized by a vapor-phase process and mechanically mixed with stoichiometric YBa2Cu3Ox and TlBa2Ca2Cu3Ox powders in 20 mole % concentrations. Pellets produced from powders with and without nanophase oxides were heated in air or O2 above the peritectic melt temperature and slow-cooled. At 4.2 K, the intragranular critical current density (Jc) increased dramatically with the oxide additions. At 35-50 K, effects of the oxide additions were positive, but less pronounced. At 77 K, the additions decreased Jc, probably because they induced a depression of the transition temperature.
NASA Technical Reports Server (NTRS)
Foy, E.; Ronan, G.; Chinitz, W.
1982-01-01
A principal element to be derived from modeling turbulent reacting flows is an expression for the reaction rates of the various species involved in any particular combustion process under consideration. A temperature-derived most-likely probability density function (pdf) was used to describe the effects of temperature fluctuations on the Arrhenius reaction rate constant. A most-likely bivariate pdf described the effects of temperature and species-concentration fluctuations on the reaction rate. A criterion is developed for the use of an "appropriate" temperature pdf. The formulation of models to calculate the mean turbulent Arrhenius reaction rate constant and the mean turbulent reaction rate is considered, and the results of calculations using these models are presented.
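To make the averaging concrete, the sketch below (an illustration, not the report's model) computes a mean turbulent Arrhenius rate constant by integrating k(T) against a temperature PDF; the report's "most-likely" pdf is replaced here by a clipped Gaussian, and all parameter values are invented:

```python
import numpy as np
from scipy.integrate import simpson

A, b, T_act = 1.0e10, 0.0, 15000.0     # Arrhenius parameters (hypothetical)
T_mean, T_rms = 1500.0, 300.0          # mean temperature and fluctuation level

T = np.linspace(500.0, 3000.0, 2000)
pdf = np.exp(-0.5 * ((T - T_mean) / T_rms) ** 2)
pdf /= simpson(pdf, x=T)               # normalize on the truncated support

k = A * T**b * np.exp(-T_act / T)      # instantaneous rate constant k(T)
k_mean = simpson(k * pdf, x=T)         # mean turbulent rate constant
print(k_mean, "vs value at mean temperature",
      A * T_mean**b * np.exp(-T_act / T_mean))
```

For this strongly curved k(T), the fluctuation-averaged value exceeds the rate constant evaluated at the mean temperature, which is the effect the pdf treatment is meant to capture.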
Estimating The Probability Of Achieving Shortleaf Pine Regeneration At Variable Specified Levels
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2002-01-01
A model was developed that can be used to estimate the probability of achieving regeneration at a variety of specified stem density levels. The model was fitted to shortleaf pine (Pinus echinata Mill.) regeneration data, and can be used to estimate the probability of achieving desired levels of regeneration between 300 and 700 stems per acre 9-10...
Properties of Traffic Risk Coefficient
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Huang, Hai-Jun; Shang, Hua-Yan; Xue, Yu
2009-10-01
We use the model with consideration of the traffic interruption probability (Physica A 387 (2008) 6845) to study the relationship between the traffic risk coefficient and the traffic interruption probability. The analytical and numerical results show that the traffic interruption probability reduces the traffic risk coefficient and that the reduction is related to the density, which shows that this model can improve traffic safety.
Probability mass first flush evaluation for combined sewer discharges.
Park, Inhyeok; Kim, Hongmyeong; Chae, Soo-Kwon; Ha, Sungryong
2010-01-01
The Korean government has invested considerable effort in constructing sanitation facilities for controlling non-point source pollution, of which the first flush phenomenon is a prime example. However, to date, several serious problems have arisen in the operation and treatment effectiveness of these facilities due to unsuitable design flow volumes and pollution loads. It is difficult to assess the optimal flow volume and pollution mass when considering both monetary and temporal limitations. The objective of this article was to characterize the discharge of storm runoff pollution from urban catchments in Korea and to estimate the probability of mass first flush (MFFn) using the storm water management model and probability density functions. A review of the gauged storms for representativeness, using probability density functions of rainfall volumes during the last two years, showed that all the gauged storms were valid representative precipitation events. Both the observed MFFn and the probability MFFn in BE-1 denoted similarly large magnitudes of first flush, with roughly 40% of the total pollution mass contained in the first 20% of the runoff. In the case of BE-2, however, there was a significant difference between the observed MFFn and the probability MFFn.
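The mass-first-flush metric can be illustrated with a short computation (synthetic hydrograph and pollutograph, not the BE-1/BE-2 data): MFFn is the fraction of total pollutant mass delivered in the first n% of runoff volume, divided by n/100:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)               # normalized storm duration
flow = np.exp(-((t - 0.3) / 0.15) ** 2)      # hydrograph (synthetic)
conc = 50.0 * np.exp(-4.0 * t) + 5.0         # pollutograph (synthetic)

vol = np.cumsum(flow); vol /= vol[-1]        # cumulative volume fraction
mass = np.cumsum(flow * conc); mass /= mass[-1]  # cumulative mass fraction

n = 20
mass_at_n = np.interp(n / 100.0, vol, mass)
print(f"MFF{n} = {mass_at_n / (n / 100.0):.2f} "
      f"({100 * mass_at_n:.0f}% of mass in first {n}% of runoff)")
```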
NASA Astrophysics Data System (ADS)
Sasaki, K.; Kikuchi, S.
2014-10-01
In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2l0(2 - α)/(vα), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
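A sketch of the inversion implied by Chantry's relation quoted above: given the zero-pressure lifetime tau0 = 2*l0*(2 - alpha)/(v*alpha), the sticking probability follows as alpha = 2/(1 + tau0*v/(2*l0)). The chamber geometry and lifetimes below are placeholders, not the measured values:

```python
import numpy as np

def mean_speed(T_K, mass_amu):
    """Mean thermal speed v = sqrt(8 k T / (pi m)) in m/s."""
    kB, amu = 1.380649e-23, 1.66053907e-27
    return np.sqrt(8.0 * kB * T_K / (np.pi * mass_amu * amu))

def sticking_probability(tau0, l0, v):
    # inverted Chantry relation: tau0 = 2*l0*(2 - a)/(v*a)  =>  a = 2/(1 + tau0*v/(2*l0))
    return 2.0 / (1.0 + tau0 * v / (2.0 * l0))

l0 = 0.01   # volume-to-surface ratio of the chamber (m), assumed
for name, mass, tau0 in [("Cu", 63.5, 1e-3), ("Sn", 118.7, 1.3e-3), ("Zn", 65.4, 1e-3)]:
    v = mean_speed(300.0, mass)
    print(name, f"v = {v:.0f} m/s, alpha = {sticking_probability(tau0, l0, v):.3f}")
```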
Statistical analysis of dislocations and dislocation boundaries from EBSD data.
Moussa, C; Bernacki, M; Besnard, R; Bozzolo, N
2017-08-01
Electron BackScatter Diffraction (EBSD) is often used for semi-quantitative analysis of dislocations in metals. In general, disorientation is used to assess Geometrically Necessary Dislocation (GND) densities. In the present paper, we demonstrate that the use of disorientation can lead to inaccurate results. For example, using the disorientation leads to different GND densities in recrystallized grains, which cannot be physically justified. The use of disorientation gradients allows accounting for measurement noise and leads to more accurate results. The disorientation gradient is then used to analyze dislocation boundaries, following the same principle previously applied to TEM data. In previous papers, dislocation boundaries were classified as Geometrically Necessary Boundaries (GNBs) and Incidental Dislocation Boundaries (IDBs). It has been demonstrated in the past, through transmission electron microscopy data, that the probability density distribution of the disorientation of IDBs and GNBs can be described by a linear combination of two Rayleigh functions. Such a function can also describe the probability density of the disorientation gradient obtained from EBSD data, as reported in this paper. This opens the route for determining the IDB and GNB probability density distribution functions separately from EBSD data, with an increased statistical relevance compared to TEM data. The method is applied to deformed tantalum, where grains exhibit dislocation boundaries, as observed using electron channeling contrast imaging.
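The two-Rayleigh description can be sketched as follows (my construction on synthetic data, not the paper's EBSD analysis): fit a weighted sum of two Rayleigh densities, one for an IDB-like low-angle population and one for a GNB-like population:

```python
import numpy as np
from scipy.optimize import curve_fit

def rayleigh(x, sigma):
    return (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))

def two_rayleigh(x, w, s1, s2):
    # linear combination of two Rayleigh densities with mixing weight w
    return w * rayleigh(x, s1) + (1.0 - w) * rayleigh(x, s2)

rng = np.random.default_rng(1)
# synthetic disorientations: a low-angle (IDB-like) and a higher-angle
# (GNB-like) population
data = np.concatenate([rng.rayleigh(0.3, 7000), rng.rayleigh(1.2, 3000)])

hist, edges = np.histogram(data, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
(w, s1, s2), _ = curve_fit(two_rayleigh, centers, hist, p0=(0.5, 0.2, 1.0),
                           bounds=([0, 1e-3, 1e-3], [1, 10, 10]))
print(f"weight = {w:.2f}, sigma_IDB = {s1:.2f}, sigma_GNB = {s2:.2f}")
```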
NASA Astrophysics Data System (ADS)
Dioguardi, Fabio; Dellino, Pierfrancesco
2017-04-01
Dilute pyroclastic density currents (DPDC) are ground-hugging turbulent gas-particle flows that move down volcano slopes under the combined action of density contrast and gravity. DPDCs are dangerous to human lives and infrastructure both because they exert a dynamic pressure in their direction of motion and because they transport volcanic ash particles, which remain in the atmosphere during the waning stage and after the passage of a DPDC. Deposits formed by the passage of a DPDC show peculiar characteristics that can be linked to flow field variables with sedimentological models. Here we present PYFLOW_2.0, a significantly improved version of the code of Dioguardi and Dellino (2014), which has already been used extensively for the hazard assessment of DPDCs at Campi Flegrei and Vesuvius (Italy). In the new version, the code structure, the computation times, and the data input method have been updated and improved. A set of shape-dependent drag laws has been implemented to better estimate the aerodynamic drag of particles transported and deposited by the flow. A depositional model for calculating the deposition time and rate of the ash and lapilli layer formed by the pyroclastic flow has also been included. This model links deposit characteristics (e.g., componentry, grainsize) to flow characteristics (e.g., flow average density and shear velocity), the latter either calculated by the code itself or given as input by the user. The deposition rate is calculated by summing the contributions of each grainsize class of all components constituting the deposit (e.g., juvenile particles, crystals, etc.), which are in turn computed as a function of particle density, terminal velocity, concentration, and deposition probability. Here we apply the concept of deposition probability, previously introduced for estimating the deposition rates of turbidity currents (Stow and Bowen, 1980), to DPDCs, although with a different approach, i.e., starting from what is observed in the deposit (e.g., the weight-fraction ratios between the different grainsize classes). In this way, more realistic estimates of the deposition rate can be obtained, as the deposition probabilities of the different grainsize classes constituting the DPDC deposit can differ and need not equal unity. As experimental validation, deposition rates of large-scale experiments, previously computed with different methods, have been calculated. Results of model application to DPDCs and turbidity currents will also be presented. Dioguardi, F., and P. Dellino (2014), PYFLOW: A computer code for the calculation of the impact parameters of Dilute Pyroclastic Density Currents (DPDC) based on field data, Comput. Geosci., 66, 200-210, doi:10.1016/j.cageo.2014.01.013. Stow, D. A. V., and A. J. Bowen (1980), A physical model for the transport and sorting of fine-grained sediment by turbidity currents, Sedimentology, 27, 31-46.
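The deposition-rate summation described above can be sketched as follows; this is an assumed reading of the abstract, not PYFLOW_2.0 code, and it substitutes a Stokes drag law for the code's shape-dependent laws. All values are invented:

```python
import numpy as np

# per-class properties (hypothetical): diameter (m), particle density
# (kg/m3), volumetric concentration (-), deposition probability (-)
diam  = np.array([2.5e-4, 1.25e-4, 6.3e-5, 3.1e-5])
rho_p = np.array([2500.0, 2500.0, 2600.0, 2600.0])
conc  = np.array([1e-5, 2e-5, 4e-5, 3e-5])
p_dep = np.array([1.0, 0.9, 0.6, 0.3])

rho_f, g, mu = 5.0, 9.81, 1.8e-5   # flow density (kg/m3), gravity, gas viscosity

# Stokes terminal velocity as a placeholder drag law
v_t = (rho_p - rho_f) * g * diam**2 / (18.0 * mu)

# bulk rate: sum over classes of concentration x density x settling x probability
rate = np.sum(conc * rho_p * v_t * p_dep)
print(f"bulk deposition rate ~ {rate:.3e} kg/m2/s")
```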
Hejtmanek, Michael R; Harvey, Tracy D; Bernards, Christopher M
2011-01-01
To minimize the frequency that intrathecal pumps require refilling, drugs are custom compounded at very high concentrations. Unfortunately, the baricity of these custom solutions is unknown, which is problematic, given baricity's importance in determining the spread of intrathecally administered drugs. Consequently, we measured the density and calculated the baricity of clinically relevant concentrations of multiple drugs used for intrathecal infusion. Morphine, clonidine, bupivacaine, and baclofen were weighed to within 0.0001 g and diluted in volumetric flasks to produce solutions of known concentrations (morphine 1, 10, 25, and 50 mg/mL; clonidine 0.05, 0.5, 1, and 3 mg/mL; bupivacaine 2.5, 5, 10, and 20 mg/mL; baclofen 1, 1.5, 2, and 4 mg/mL). The densities of the solutions were measured at 37°C using the mechanical oscillation method. A "best-fit" curve was calculated for plots of concentration versus density for each drug. All prepared solutions of clonidine and baclofen were hypobaric. Higher concentrations of morphine and bupivacaine were hyperbaric, whereas lower concentrations were hypobaric. The relationship between concentration and density is linear for morphine (r > 0.99) and bupivacaine (r > 0.99) and logarithmic for baclofen (r = 0.96) and clonidine (r = 0.98). This is the first study to examine the relationship between concentration and density for custom drug concentrations commonly used in implanted intrathecal pumps. We calculated an equation that defines the relationship between concentration and density for each drug. Using these equations, clinicians can calculate the density of any solution made from the drugs studied here.
NASA Astrophysics Data System (ADS)
Tremblin, P.; Schneider, N.; Minier, V.; Didelon, P.; Hill, T.; Anderson, L. D.; Motte, F.; Zavagno, A.; André, Ph.; Arzoumanian, D.; Audit, E.; Benedettini, M.; Bontemps, S.; Csengeri, T.; Di Francesco, J.; Giannini, T.; Hennemann, M.; Nguyen Luong, Q.; Marston, A. P.; Peretto, N.; Rivera-Ingraham, A.; Russeil, D.; Rygl, K. L. J.; Spinoglio, L.; White, G. J.
2014-04-01
Aims: Ionization feedback should impact the probability distribution function (PDF) of the column density of cold dust around the ionized gas. We aim to quantify this effect and discuss its potential link to the core and initial mass function (CMF/IMF). Methods: We used Herschel column density maps of several regions observed within the HOBYS key program in a systematic way: M 16, the Rosette and Vela C molecular clouds, and the RCW 120 H ii region. We computed the PDFs in concentric disks around the main ionizing sources, determined their properties, and discussed the effect of ionization pressure on the distribution of the column density. Results: We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a "double-peak" or an enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas, while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. Such a double peak is not visible for all clouds associated with ionization fronts, but it depends on the relative importance of ionization pressure and turbulent ram pressure. A power-law tail is present for higher column densities, which is generally ascribed to the effect of gravity. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion that is able to disentangle triggered star formation from pre-existing star formation. Conclusions: In the context of the gravo-turbulent scenario for the origin of the CMF/IMF, the double-peaked or enlarged shape of the PDF may affect the formation of objects at both the low-mass and the high-mass ends of the CMF/IMF. In particular, a broader PDF is required by the gravo-turbulent scenario to fit the IMF properly with a reasonable initial Mach number for the molecular cloud. Since other physical processes (e.g., the equation of state and the variations among the core properties) have already been said to broaden the PDF, the relative importance of the different effects remains an open question. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
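The double-lognormal fit can be sketched on synthetic data (this is not the HOBYS pipeline; units and parameters are arbitrary):

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal(x, a, mu, sig):
    return a / (x * sig * np.sqrt(2 * np.pi)) * np.exp(-(np.log(x) - mu)**2 / (2 * sig**2))

def double_lognormal(x, a1, mu1, s1, a2, mu2, s2):
    # turbulent component plus ionization-compressed component
    return lognormal(x, a1, mu1, s1) + lognormal(x, a2, mu2, s2)

rng = np.random.default_rng(2)
# synthetic column densities in units of 1e21 cm^-2
N = np.concatenate([rng.lognormal(0.0, 0.4, 60000), rng.lognormal(1.2, 0.3, 30000)])

hist, edges = np.histogram(N, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
p0 = (0.6, 0.0, 0.4, 0.4, 1.0, 0.3)
params, _ = curve_fit(double_lognormal, centers, hist, p0=p0)
print("peak positions (same units):", np.exp(params[1]), np.exp(params[4]))
```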
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Ding, R.; Stangeby, P. C.
The three-dimensional Monte Carlo code ERO has been used to simulate dedicated DIII-D experiments in which Mo and W samples with different sizes were exposed to controlled and well-diagnosed divertor plasma conditions to measure the gross and net erosion rates. Experimentally, the net erosion rate is significantly reduced due to the high local redeposition probability of eroded high-Z materials, which according to the modelling is mainly controlled by the electric field and plasma density within the Chodura sheath. Similar redeposition ratios were obtained from ERO modelling with three different sheath models for small angles between the magnetic field and the material surface, mainly because of their similar mean ionization lengths. The modelled redeposition ratios are close to the measured value. Decreasing the potential drop across the sheath can suppress both gross and net erosion, because the sputtering yield is decreased due to the lower incident energy while the redeposition ratio is not reduced, owing to the higher electron density in the Chodura sheath. Taking into account material mixing in the ERO surface model, the net erosion rate of high-Z materials is shown to be strongly dependent on the carbon impurity concentration in the background plasma; a higher carbon concentration can suppress net erosion. As a result, the principal experimental results, such as the net erosion rate and profile and the redeposition ratio, are well reproduced by the ERO simulations.
Grey Tienshan Urumqi Glacier No.1 and light-absorbing impurities.
Ming, Jing; Xiao, Cunde; Wang, Feiteng; Li, Zhongqin; Li, Yamin
2016-05-01
The Tienshan Urumqi Glacier No.1 (TUG1) usually shows "grey" surfaces in summer. Besides the known regional warming, what is responsible for largely reducing its surface albedo and making it look "grey"? A field campaign was conducted on the TUG1 on a selected cloud-free day of 2013, after an overnight snowfall. Fresh and aged snow samples were collected in the field, and snow densities, grain sizes, and spectral reflectances were measured. Light-absorbing impurities (LAIs), including black carbon (BC) and dust, and the number concentrations and sizes of the insoluble particles (IPs) in the samples were measured in the laboratory. High temperatures in summer probably enhanced the snow ageing. During the snow ageing process, the snow density varied from 243 to 458 kg m-3, associated with the snow grain size varying from 290 to 2500 μm. The concentrations of LAIs in aged snow were significantly higher than those in fresh snow. Dust and BC varied from 16 ppm and 25 ppb in fresh snow to 1507 ppm and 1738 ppb in aged snow, respectively. The large albedo difference between the fresh and aged snow suggests a consequent forcing of 180 W m-2. Scenario simulations show that snow ageing, BC, and dust were responsible for 44, 25, and 7 % of the albedo reduction in the accumulation zone, respectively.
Lei, Youming; Zheng, Fan
2016-12-01
Stochastic chaos induced by diffusion processes with identical spectral densities but different probability density functions (PDFs) is investigated in selected lightly damped Hamiltonian systems. The threshold amplitude of the diffusion processes for the onset of chaos is derived by using the stochastic Melnikov method together with a mean-square criterion. Two quasi-Hamiltonian systems, namely, a damped single pendulum and a damped Duffing oscillator perturbed by stochastic excitations, are used as illustrative examples. Four different cases of stochastic processes are taken as the driving excitations. It is shown that in these two systems the spectral density of the diffusion processes completely determines the threshold amplitude for chaos, regardless of the shape of their PDFs, Gaussian or otherwise. Furthermore, the mean top Lyapunov exponent is employed to verify the analytical results. The results obtained by numerical simulations are in accordance with the analytical results. This demonstrates that the stochastic Melnikov method is effective in predicting the onset of chaos in quasi-Hamiltonian systems.
Nakamura, Yoshihiro; Hasegawa, Osamu
2017-01-01
With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. The distribution underlying such data is difficult to predict beforehand; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as a network of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
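In the spirit of that approach, here is a highly simplified online density estimator. It is a stand-in, not KDESOINN: the real method adapts its network topology and kernels, whereas this sketch keeps a fixed prototype budget via reservoir sampling and a fixed bandwidth:

```python
import numpy as np

class OnlineKDE:
    """Streaming Gaussian KDE over a fixed budget of prototypes."""
    def __init__(self, budget=200, bandwidth=0.3, seed=0):
        self.budget, self.h, self.n = budget, bandwidth, 0
        self.protos = []
        self.rng = np.random.default_rng(seed)

    def learn(self, x):
        self.n += 1
        if len(self.protos) < self.budget:
            self.protos.append(float(x))
        elif self.rng.random() < self.budget / self.n:
            # reservoir sampling keeps the prototypes a uniform subsample
            self.protos[self.rng.integers(0, self.budget)] = float(x)

    def pdf(self, x):
        p = np.asarray(self.protos)
        z = (np.atleast_1d(x)[:, None] - p[None, :]) / self.h
        return np.exp(-0.5 * z**2).sum(axis=1) / (p.size * self.h * np.sqrt(2 * np.pi))

est = OnlineKDE()
for value in np.random.default_rng(1).normal(0.0, 1.0, 10000):
    est.learn(value)
print(est.pdf([0.0, 1.0, 2.0]))   # ~N(0,1) density: 0.40, 0.24, 0.05
```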
Self-Supervised Dynamical Systems
NASA Technical Reports Server (NTRS)
Zak, Michail
2003-01-01
Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of the probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. The interaction between the physical and mental aspects of a monad is then implemented by feedback from mental to motor dynamics, as represented by the aforementioned fictitious forces. This feedback is what makes the evolution of probability densities nonlinear. The deviation from linear evolution can be characterized, in a sense, as an expression of free will. It has been demonstrated that probability densities can approach prescribed attractors while exhibiting such patterns as shock waves, solitons, and chaos in probability space. The concept of self-supervised dynamical systems has been considered for application to diverse phenomena, including information-based neural networks, cooperation, competition, deception, games, and control of chaos. In addition, a formal similarity between the mathematical structures of self-supervised dynamical systems and of quantum-mechanical systems has been investigated.
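A loose numerical illustration of the feedback idea, constructed here rather than taken from the report: an ensemble follows Langevin dynamics whose drift includes a term computed from the ensemble's own kernel-estimated density, so the evolution of that density becomes nonlinear. All parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 500)            # ensemble of "motor" states
dt, sigma, eps, h = 0.01, 0.5, 0.1, 0.2  # eps scales the density feedback

def density_and_gradient(pts, sample, h):
    """Gaussian-kernel estimates of rho(pts) and d rho / d pts."""
    z = (pts[:, None] - sample[None, :]) / h
    k = np.exp(-0.5 * z**2)
    norm = h * np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / norm, (-z / h * k).mean(axis=1) / norm

for _ in range(400):
    rho, drho = density_and_gradient(x, x, h)
    # motor dynamics (-x) plus "mental" feedback from the self-estimated
    # density: a term proportional to the gradient of log rho
    drift = -x + eps * drho / (rho + 1e-12)
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=x.size)

# with eps = 0 the stationary std would be sigma/sqrt(2) ~ 0.35; the
# feedback contracts the ensemble, a nonlinear effect on the density
print("ensemble std with feedback:", x.std())
```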
Logistic model of nitrate in streams of the upper-midwestern United States
Mueller, D.K.; Ruddy, B.C.; Battaglin, W.A.
1997-01-01
Nitrate in surface water can have adverse effects on aquatic life and, in drinking-water supplies, can be a risk to human health. As part of a regional study, nitrate as N (NO3-N) was analyzed in water samples collected from streams throughout 10 Midwestern states during synoptic surveys in 1989, 1990, and 1994. Data from the period immediately following crop planting at 124 sites were analyzed using logistic regression to relate discrete categories of NO3-N concentrations to characteristics of the basins upstream from the sites. The NO3-N data were divided into three categories, ranging from probable background concentrations to concentrations exceeding the maximum contaminant level (MCL) of 10 mg L-1. Nitrate-N concentrations were positively correlated with streamflow, upstream area planted in corn (Zea mays L.), and upstream N-fertilizer application rates. Elevated NO3-N concentrations were associated with poorly drained soils and were weakly correlated with population density. Nitrate-N and streamflow data collected during 1989 and 1990 were used to calibrate the model, and data collected during 1994 were used for verification. The model correctly estimated NO3-N concentration categories for 79% of the samples in the calibration data set and 60% of the samples in the verification data set. The model was used to indicate where NO3-N concentrations might be elevated or exceed the NO3-N MCL in streams throughout the study area. The potential for elevated NO3-N concentrations was predicted to be greatest for streams in Illinois, Indiana, Iowa, and western Ohio.
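A sketch of the kind of model described (hypothetical predictors and synthetic data; the study's actual variables and coefficients differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 400
corn_pct = rng.uniform(0, 90, n)          # upstream area in corn (%)
fert_rate = rng.uniform(0, 200, n)        # N-fertilizer application (kg/ha)
flow = rng.lognormal(0.0, 1.0, n)         # streamflow

# synthetic ground truth: probability of an elevated-NO3-N sample
logit = -4.0 + 0.03 * corn_pct + 0.012 * fert_rate + 0.3 * np.log(flow)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # 1 = elevated NO3-N

X = np.column_stack([corn_pct, fert_rate, np.log(flow)])
model = LogisticRegression().fit(X, y)
print("P(elevated) for 60% corn, 150 kg/ha, median flow:",
      model.predict_proba([[60.0, 150.0, 0.0]])[0, 1])
```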
Tzoulaki, Ioanna; Zgaga, Lina; Ioannidis, John P A
2014-01-01
Objective To evaluate the breadth, validity, and presence of biases of the associations of vitamin D with diverse outcomes. Design Umbrella review of the evidence across systematic reviews and meta-analyses of observational studies of plasma 25-hydroxyvitamin D or 1,25-dihydroxyvitamin D concentrations and randomised controlled trials of vitamin D supplementation. Data sources Medline, Embase, and screening of citations and references. Eligibility criteria Three types of studies were eligible for the umbrella review: systematic reviews and meta-analyses that examined observational associations between circulating vitamin D concentrations and any clinical outcome; and meta-analyses of randomised controlled trials assessing supplementation with vitamin D or active compounds (both established and newer compounds of vitamin D). Results 107 systematic literature reviews and 74 meta-analyses of observational studies of plasma vitamin D concentrations and 87 meta-analyses of randomised controlled trials of vitamin D supplementation were identified. The relation between vitamin D and 137 outcomes has been explored, covering a wide range of skeletal, malignant, cardiovascular, autoimmune, infectious, metabolic, and other diseases. Ten outcomes were examined by both meta-analyses of observational studies and meta-analyses of randomised controlled trials, but the direction of the effect and level of statistical significance were concordant only for birth weight (maternal vitamin D status or supplementation). On the basis of the available evidence, an association between vitamin D concentrations and birth weight, dental caries in children, maternal vitamin D concentrations at term, and parathyroid hormone concentrations in patients with chronic kidney disease requiring dialysis is probable, but further studies and better designed trials are needed to draw firmer conclusions. In contrast to previous reports, evidence does not support the argument that vitamin D-only supplementation increases bone mineral density or reduces the risk of fractures or falls in older people. Conclusions Despite a few hundred systematic reviews and meta-analyses, highly convincing evidence of a clear role of vitamin D does not exist for any outcome, but associations with a selection of outcomes are probable. PMID:24690624
A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves
NASA Astrophysics Data System (ADS)
Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang
2018-03-01
The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximated by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, the wave length, and the heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.
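The Monte Carlo verification step can be sketched as follows (coefficients are invented, not the C11 ship's): a single-degree-of-freedom rolling equation with random parametric and forced excitation, integrated by Euler-Maruyama, with an assumed absorbing capsize angle:

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, dt, n_steps = 2000, 0.005, 20000
phi = np.zeros(n_paths)                  # roll angle
dphi = np.zeros(n_paths)                 # roll rate
capsized = np.zeros(n_paths, dtype=bool)

delta, w0, c3 = 0.05, 1.0, 0.2           # damping, natural frequency, cubic GZ term
sp, sf = 0.3, 0.2                        # parametric / forced noise intensities
phi_c = 1.4                              # capsize (absorbing) angle, assumed

for _ in range(n_steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    acc = -2.0 * delta * dphi - w0**2 * (phi - c3 * phi**3)
    dphi = np.where(capsized, 0.0, dphi + acc * dt + sp * phi * dW1 + sf * dW2)
    phi = phi + dphi * dt
    capsized |= np.abs(phi) > phi_c

print(f"capsize probability over the run: {capsized.mean():.4f}")
print("roll-angle std of surviving paths:", phi[~capsized].std())
```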
Helgesen, J.O.
1995-01-01
Surface-water-quality conditions and trends were assessed in the lower Kansas River Basin, which drains about 15,300 square miles of mainly agricultural land in southeast Nebraska and northeast Kansas. On the basis of established water-quality criteria, most streams in the basin were suitable for uses such as public-water supply, irrigation, and maintenance of aquatic life. However, most concerns identified from a previous analysis of available data through 1986 are substantiated by analysis of data for May 1987 through April 1990. Less-than-normal precipitation and runoff during 1987-90 affected surface-water quality and are important factors in the interpretation of results. Dissolved-solids concentrations in the main stem Kansas River during May 1987 through April 1990 commonly exceeded 500 milligrams per liter, which may be of concern for public-water supplies and for the irrigation of sensitive crops. Large concentrations of chloride in the Kansas River are derived from ground water discharging in the Smoky Hill River Basin west of the study unit. Trends of increasing concentrations of some dissolved major ions were statistically significant in the northwestern part of the study unit, which could reflect substantial increases in irrigated acreage. The largest concentrations of suspended sediment in streams during May 1987 through April 1990 were associated with high-density cropland in areas of little local relief and medium-density irrigated cropland in more dissected areas. The smallest concentrations were measured downstream from large reservoirs and in streams draining areas having little or no row-crop cultivation. Mean annual suspended-sediment transport rates in the main stem Kansas River increased substantially in the downstream direction. No conclusions could be reached concerning the relations of suspended-sediment transport, yields, or trends to natural and human factors. The largest sources of nitrogen and phosphorus in the study unit were fertilizer and livestock. Nitrate-nitrogen concentrations in stream-water samples did not exceed 10 milligrams per liter; relatively large concentrations in the northwestern part of the study unit were associated with fertilizer application. Concentrations of total phosphorus generally were largest in the northwestern part of the study unit, which probably relates to the prevalence of cultivated land, fertilizer application, and livestock wastes. Deficiencies in dissolved-oxygen concentrations in streams occurred locally, as a result of discharges from wastewater-treatment plants, algal respiration, and inadequate reaeration associated with small streamflow. Large densities of a fecal-indicator bacterium, Escherichia coli, were associated with discharges from municipal wastewater-treatment plants and, especially in the northwestern part of the study unit, with transport of fecal material. The largest concentrations of the herbicide atrazine generally were measured where the largest quantities of atrazine were applied to the land. Large atrazine concentrations, 10 to 20 micrograms per liter, were measured most frequently in unregulated principal streams during May and June. Downstream of reservoirs, the seasonal variability of atrazine concentrations was decreased compared to that of inflowing streams.
Fitzpatrick, Faith A.; Garrison, Paul J.; Fitzgerald, Sharon A.; Elder, John F.
2003-01-01
Sediment cores were collected from Musky Bay, Lac Courte Oreilles, and from surrounding areas in 1999 and 2001 to determine whether the water quality of Musky Bay has declined during the last 100 years or more as a result of human activity, specifically cottage development and cranberry farming. Selected cores were analyzed for sedimentation rates, nutrients, minor and trace elements, biogenic silica, diatom assemblages, and pollen over the past several decades. Two cranberry bogs constructed along Musky Bay in 1939 and the early 1950s were substantially expanded between 1950-62 and between 1980-98. Cottage development on Musky Bay has occurred at a steady rate since about 1930, although currently housing density on Musky Bay is one-third to one-half the housing density surrounding three other Lac Courte Oreilles bays. Sedimentation rates were reconstructed for a core from Musky Bay by use of three lead radioisotope models and the cesium-137 profile. The historical average mass and linear sedimentation rates for Musky Bay are 0.023 grams per square centimeter per year and 0.84 centimeters per year, respectively, for the period of about 1936-90. There is also limited evidence that sedimentation rates may have increased after the mid-1990s. Historical changes in input of organic carbon, nitrogen, phosphorus, and sulfur to Musky Bay could not be directly identified from concentration profiles of these elements because of the potential for postdepositional migration and recycling. Minor- and trace-element profiles from the Musky Bay core possibly reflect historical changes in the input of clastic material over time, as well as potential changes in atmospheric deposition inputs. The input of clastic material to the bay increased slightly after European settlement and possibly in the 1930s through 1950s. Concentrations of copper in the Musky Bay core increased steadily through the early to mid-1900s until about 1980 and appear to reflect inputs from atmospheric deposition. Aluminum-normalized concentrations of calcium, copper, nickel, and zinc increased in the Musky Bay core in the mid-1990s. However, concentrations of these elements in surficial sediment from Musky Bay were similar to concentrations in other Lac Courte Oreilles bays, nearby lakes, and soils and were below probable effects concentrations for aquatic life. Biogenic-silica, diatom-community, and pollen profiles indicate that Musky Bay has become more eutrophic since about 1940 with the onset of cottage development and cranberry farming. The water quality of the bay has especially degraded during the last 25 years with increased growth of aquatic plants and the onset of a floating algal mat during the last decade. Biogenic silica data indicate that diatom production has consistently increased since the 1930s. Diatom assemblage profiles indicate a shift from low-nutrient species to higher-nutrient species during the 1940s and that aquatic plants reached their present density and/or composition during the 1970s. The diatom Fragilaria capucina (indicative of algal mat) greatly increased during the mid-1990s. Pollen data indicate that milfoil, which often becomes more common with elevated nutrients, became more widespread after 1920. The pollen data also indicate that wild rice was present in the eastern end of Musky Bay during the late 1800s and the early 1900s but disappeared after about 1920, probably because of water-level changes more so than eutrophication.
Quantum mechanical probability current as electromagnetic 4-current from topological EM fields
NASA Astrophysics Data System (ADS)
van der Mark, Martin B.
2015-09-01
Starting from a complex 4-potential A = αdβ, we show that the 4-current density in electromagnetism and the probability current density in relativistic quantum mechanics are of identical form. With the Dirac-Clifford algebra Cl1,3 as the mathematical basis, the given 4-potential allows topological solutions of the fields, quite similar to Bateman's construction, but with a double-field solution that was overlooked previously. A more general null-vector condition is found, and wave-functions of charged and neutral particles appear as topological configurations of the electromagnetic fields.
First-passage problems: A probabilistic dynamic analysis for degraded structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1990-01-01
Structures subjected to random excitations, with uncertain system parameters that are degraded by the surrounding environment (a random time history), are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
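A Monte Carlo sketch of the first-passage problem described (illustrative, not the paper's analytical method): a randomly excited response crossing a barrier that degrades in time. All parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, dt, t_max = 5000, 0.01, 50.0
n_steps = int(t_max / dt)
x = np.zeros(n_paths)                    # response process
fpt = np.full(n_paths, np.inf)           # first-passage times

k, sigma = 0.5, 0.6                      # restoring rate, excitation intensity
b0, r = 2.5, 0.02                        # initial barrier, degradation rate

for i in range(n_steps):
    t = i * dt
    x += -k * x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    barrier = b0 * np.exp(-r * t)        # environmentally degraded barrier
    crossed = (x > barrier) & np.isinf(fpt)
    fpt[crossed] = t

valid = np.isfinite(fpt)
hist, edges = np.histogram(fpt[valid], bins=50, range=(0, t_max), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(f"P(failure by t={t_max}) = {valid.mean():.3f}")
print("most likely failure time ~", centers[np.argmax(hist)])
```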
Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field
Ashtiani, Payam; Denison, Adelaide
2015-01-01
Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097
Horowitz, Arthur J.
2013-01-01
Hurricane Irene and Tropical Storm Lee, both of which made landfall in the U.S. between late August and early September 2011, generated record or near record water discharges in 41 coastal rivers between the North Carolina/South Carolina border and the U.S./Canadian border. Despite the discharge of substantial amounts of suspended sediment from many of these rivers, as well as the probable influx of substantial amounts of eroded material from the surrounding basins, the geochemical effects on the <63-µm fractions of the bed sediments appear relatively limited [<20% of the constituents determined (256 out of 1394)]. Based on surface area measurements, this lack of change occurred despite substantial alterations in both the grain size distribution and the composition of the bed sediments. The sediment-associated constituents which display both concentration increases and decreases include: total sulfur (TS), Hg, Ag, total organic carbon (TOC), total nitrogen (TN), Zn, Se, Co, Cu, Pb, As, Cr, and total carbon (TC). As a group, these constituents tend to be associated either with urbanization/elevated population densities and/or wastewater/solid sludge. The limited number of significant sediment-associated chemical changes that were detected probably resulted from two potential processes: (1) the flushing of in-stream land-use affected sediments that were replaced by baseline material more representative of local geology and/or soils (declining concentrations), and/or (2) the inclusion of more heavily affected material as a result of urban nonpoint-source runoff and/or releases from flooded treatment facilities (increasing concentrations).
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples, resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance x sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
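The detection model typically used in spatial capture-recapture studies can be sketched as follows (my illustration with invented numbers, not the fitted Blackfoot model): detection probability declines with distance from an individual's activity center following a half-normal curve, p(d) = p0 * exp(-d^2 / (2 sigma^2)):

```python
import numpy as np

p0, sigma = 0.3, 3.0                     # baseline detection, spatial scale (km)

def detection_prob(center, traps):
    d2 = np.sum((traps - center) ** 2, axis=1)
    return p0 * np.exp(-d2 / (2.0 * sigma**2))

gx, gy = np.meshgrid(np.arange(0, 40, 5.0), np.arange(0, 40, 5.0))
traps = np.column_stack([gx.ravel(), gy.ravel()])   # 5-km search grid

center = np.array([17.0, 22.0])          # one animal's activity center
p = detection_prob(center, traps)
# probability the individual is detected at least once in one occasion
print("P(detected anywhere) =", 1.0 - np.prod(1.0 - p))
```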
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of the pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such a structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis on the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
Probabilistic Modeling of the Renal Stone Formation Module
NASA Technical Reports Server (NTRS)
Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.
2013-01-01
The Integrated Medical Model (IMM) is a probabilistic tool, used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the inflight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Due to a high salt diet and high concentrations of calcium in the blood (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk for developing renal calculi (nephrolithiasis) while in space. Lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions to the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the physical and chemical outputs and inputs to the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, continuously randomly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs into the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information used from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model, where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured by just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), it becomes possible to implement dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-excited structure have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
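The original spectral representation that the paper starts from can be sketched as below; the paper's actual contribution, constraining the high-dimensional phase variables through random functions of two elementary variables, is not reproduced here, and the spectrum is a toy placeholder:

```python
import numpy as np

rng = np.random.default_rng(7)
n_freq = 512
w_max = 4.0 * np.pi
dw = w_max / n_freq
w = (np.arange(n_freq) + 0.5) * dw        # frequency grid (rad/s)

def S_wind(w, sigma2=4.0, L=1.0):
    """Toy one-sided turbulence spectrum (placeholder, integrates to sigma2)."""
    return (2.0 / np.pi) * sigma2 * L / (1.0 + (L * w) ** 2)

t = np.linspace(0.0, 600.0, 6000)
phases = rng.uniform(0.0, 2.0 * np.pi, n_freq)   # the high-dimensional variables
amps = np.sqrt(2.0 * S_wind(w) * dw)
X = (amps[None, :] * np.cos(w[None, :] * t[:, None] + phases[None, :])).sum(axis=1)

print("sample variance:", X.var(), "| spectrum integral:", (S_wind(w) * dw).sum())
```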
Causal illusions in children when the outcome is frequent
2017-01-01
Causal illusions occur when people perceive a causal relation between two events that are actually unrelated. One factor that has been shown to promote these mistaken beliefs is the outcome probability. Thus, people tend to overestimate the strength of a causal relation when the potential consequence (i.e. the outcome) occurs with a high probability (outcome-density bias). Given that children and adults differ in several important features involved in causal judgment, including prior knowledge and basic cognitive skills, developmental studies can be considered an outstanding approach to detect and further explore the psychological processes and mechanisms underlying this bias. However, the outcome density bias has been mainly explored in adulthood, and no previous evidence for this bias has been reported in children. Thus, the purpose of this study was to extend outcome-density bias research to childhood. In two experiments, children between 6 and 8 years old were exposed to two similar setups, both showing a non-contingent relation between the potential cause and the outcome. These two scenarios differed only in the probability of the outcome, which could either be high or low. Children judged the relation between the two events to be stronger in the high probability of the outcome setting, revealing that, like adults, they develop causal illusions when the outcome is frequent. PMID:28898294
NASA Astrophysics Data System (ADS)
Kogure, Toshihiro; Suzuki, Michio; Kim, Hyejin; Mukai, Hiroki; Checa, Antonio G.; Sasaki, Takenori; Nagasawa, Hiromichi
2014-07-01
{110} twin density in aragonites constituting various microstructures of molluscan shells has been characterized using X-ray diffraction (XRD) and transmission electron microscopy (TEM), to find the factors that determine the twin density in the shells. Several aragonite crystals of geological origin were also investigated for comparison. The twin density is strongly dependent on the microstructures and species of the shells. The nacreous structure has a very low twin density regardless of the shell classes. On the other hand, the twin density in the crossed-lamellar (CL) structure varies widely among classes or subclasses, which is mainly related to the crystallographic direction of the constituent aragonite fibers. TEM observation suggests two types of twin structures in aragonite crystals with dense {110} twins: rather regulated polysynthetic twins with parallel twin planes, and unregulated polycyclic ones with two or three directions of the twin planes. The former is probably characteristic of the CL structures of specific subclasses of Gastropoda. The latter type is probably related to the crystal boundaries dominated by (hk0) interfaces in microstructures with a preferred orientation of the c-axis, and there the twin density is mainly correlated with the crystal size in the microstructures.
Phi, Thai Ha; Chinh, Pham Minh; Cuong, Doan Danh; Ly, Luong Thi Mai; Van Thinh, Nguyen; Thai, Phong K
2018-01-01
There is a need to assess the risk of exposure to metals via roadside dust in Vietnam, where many people live along the roads/highways and are constantly exposed to roadside dust. In this study, we collected dust samples at 55 locations along two major highways in north-east Vietnam, which pass through different land use areas. Samples were sieved into three different particle sizes and analyzed for concentrations of eight metals using an X-ray fluorescence instrument. The concentrations and environmental indices (EF, Igeo) of the metals were used to evaluate the degree of pollution in the samples. Among the different land uses, industrial areas could be highly polluted with heavy metals in roadside dust, followed by commercial areas and power plants. Additionally, traffic density probably played an important role; higher concentrations were observed in samples from Highway No. 5, where traffic is several times higher than on Highway No. 18. According to the risk assessment, Cr poses the highest noncarcinogenic risk, even though the health hazard index values of the assessed heavy metals in this study were within the acceptable range. Our assessment also found that the risk of exposure to heavy metals through roadside dust is much higher for children than for adults.
Kim, N; Fergusson, J
1993-09-30
The amounts (microgram m-2) and concentrations (microgram g-1) of cadmium, copper, lead and zinc have been measured in house dust in Christchurch, New Zealand. For the 120 houses surveyed, the geometric mean concentrations of the four metals are 4.24 micrograms g-1, 165 micrograms g-1, 573 micrograms g-1 and 10,400 micrograms g-1, respectively. In addition, eleven variables, such as house age, carpet wear and traffic density, were recorded for each property and the results analysed with respect to their effects on the amounts and concentrations of the four elements. The amounts of all the metals were highly correlated with the overall dustiness of the houses, which was found to be predominantly determined by the degree of carpet wear. No one dominant source of cadmium was identified, although several minor sources including carpet wear, galvanized iron roofs and red/orange/yellow coloured carpets were implicated. Petrol lead and lead-based paints were identified as significant sources of lead in house dust. Rubber carpet underlays or backings were identified as a significant source of zinc, with some contribution from galvanized iron roofs. Road traffic and probably the existence of a fireplace appear to contribute to the copper levels.
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities while taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
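To make the maximum entropy assignment concrete, the sketch below solves the simplest version of the problem: a finite outcome set with a single mean-value constraint, for which the maximum-entropy weights are Boltzmann-like and the Lagrange multiplier can be found by root-finding. The outcome labels, the constraint value and the bracketing interval are invented for illustration and are not taken from the paper.

    # Maximum-entropy assignment on a finite outcome set with one constraint.
    # All numbers are illustrative placeholders.
    import numpy as np
    from scipy.optimize import brentq

    outcomes = np.array([0.0, 1.0, 2.0, 3.0])   # e.g. level energies
    target_mean = 1.2                           # constraint: <E> = 1.2

    def constraint_gap(beta):
        w = np.exp(-beta * outcomes)
        p = w / w.sum()
        return p @ outcomes - target_mean

    beta = brentq(constraint_gap, -50.0, 50.0)  # Lagrange multiplier
    p = np.exp(-beta * outcomes)
    p /= p.sum()
    print(beta, p, p @ outcomes)                # Boltzmann-like weights, mean = 1.2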
NASA Technical Reports Server (NTRS)
Deal, J. H.
1975-01-01
One approach to the problem of simplifying complex nonlinear filtering algorithms is through the use of stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in this paper and used to simplify the filtering algorithms developed for the optimum receiver for signals corrupted by both additive and multiplicative noise.
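As a minimal illustration of a discrete mass approximation (my sketch, not the report's construction): a standard normal density is replaced by K equiprobable point masses placed at the conditional means of K quantile strata. The choice K = 8 and the use of a standard normal are assumptions made only for the example.

    # Discrete mass ("stratified") approximation of a continuous density.
    import numpy as np
    from scipy.stats import norm

    K = 8
    edges = norm.ppf(np.linspace(0.0, 1.0, K + 1))   # stratum bounds, incl. +/-inf
    phi, Phi = norm.pdf, norm.cdf
    # conditional mean of N(0,1) on (a, b) is (phi(a) - phi(b)) / (Phi(b) - Phi(a))
    points = (phi(edges[:-1]) - phi(edges[1:])) / (Phi(edges[1:]) - Phi(edges[:-1]))
    masses = np.full(K, 1.0 / K)

    print(points)               # mass locations
    print(masses @ points)      # ~0: the first moment is preserved
    print(masses @ points**2)   # <1: the variance is slightly underestimated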
Spacecraft Collision Avoidance
NASA Astrophysics Data System (ADS)
Bussy-Virat, Charles
The rapid increase of the number of objects in orbit around the Earth poses a serious threat to operational spacecraft and astronauts. In order to effectively avoid collisions, mission operators need to assess the risk of collision between the satellite and any other object whose orbit is likely to approach its trajectory. Several algorithms predict the probability of collision but have limitations that impair the accuracy of the prediction. An important limitation is that uncertainties in the atmospheric density are usually not taken into account in the propagation of the covariance matrix from the current epoch to the closest approach time. The Spacecraft Orbital Characterization Kit (SpOCK) was developed to accurately predict the positions and velocities of spacecraft. The central capability of SpOCK is a high-accuracy numerical propagator of spacecraft orbits and computations of ancillary parameters. The numerical integration uses a comprehensive model of spacecraft dynamics that includes all the perturbing forces a spacecraft is subject to in orbit. In particular, the atmospheric density is modeled by thermospheric models to allow for an accurate representation of the atmospheric drag. SpOCK predicts the probability of collision between two orbiting objects taking into account the uncertainties in the atmospheric density. Monte Carlo procedures are used to perturb the initial position and velocity of the primary and secondary spacecraft from their covariance matrices. Developed in C, SpOCK supports parallelism to quickly assess the risk of collision so it can be used operationally in real time. The upper atmosphere of the Earth is strongly driven by solar activity. In particular, abrupt transitions from slow to fast solar wind cause important disturbances of the atmospheric density, hence of the drag acceleration that spacecraft are subject to. The Probability Distribution Function (PDF) model was developed to predict the solar wind speed five days in advance. In particular, the PDF model is able to predict rapid enhancements in the solar wind speed. It was found that 60% of the positive predictions were correct, while 91% of the negative predictions were correct, and 20% to 33% of the peaks in the speed were found by the model. Ensemble forecasts provide the forecasters with an estimation of the uncertainty in the prediction, which can be used to derive uncertainties in the atmospheric density and in the drag acceleration. The dissertation then demonstrates that uncertainties in the atmospheric density result in large uncertainties in the prediction of the probability of collision. As an example, the effects of a geomagnetic storm on the probability of collision are illustrated. The research aims at providing tools and analyses that help understand and predict the effects of uncertainties in the atmospheric density on the probability of collision. The ultimate motivation is to support mission operators in making the correct decision with regard to a potential collision avoidance maneuver by providing an uncertainty on the prediction of the probability of collision instead of a single value. This approach can help avoid performing unnecessary costly maneuvers, while making sure that the risk of collision is fully evaluated.
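The Monte Carlo step described above can be sketched compactly: sample the positions of the two objects at closest approach from their covariance matrices and count how often the miss distance falls below the combined hard-body radius. This is a generic illustration; SpOCK's orbit propagation and thermospheric density modeling are not reproduced, and all numbers below are invented.

    # Monte Carlo probability of collision from position covariances at TCA.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 200_000
    r1_mean = np.array([0.0, 0.0, 0.0])      # primary position at TCA (km)
    r2_mean = np.array([0.05, 0.02, 0.0])    # secondary position at TCA (km)
    P1 = np.diag([0.02, 0.05, 0.01]) ** 2    # illustrative covariances (km^2)
    P2 = np.diag([0.03, 0.04, 0.02]) ** 2
    R_combined = 0.01                        # combined hard-body radius (km)

    r1 = rng.multivariate_normal(r1_mean, P1, N)
    r2 = rng.multivariate_normal(r2_mean, P2, N)
    miss = np.linalg.norm(r1 - r2, axis=1)
    print("P(collision) ~", np.mean(miss < R_combined))

Inflating P1 and P2 to mimic atmospheric-density uncertainty shows directly how the estimated probability of collision shifts, which is the effect the dissertation quantifies.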
Multiple model cardinalized probability hypothesis density filter
NASA Astrophysics Data System (ADS)
Georgescu, Ramona; Willett, Peter
2011-09-01
The Probability Hypothesis Density (PHD) filter propagates the first-moment approximation to the multi-target Bayesian posterior distribution while the Cardinalized PHD (CPHD) filter propagates both the posterior likelihood of (an unlabeled) target state and the posterior probability mass function of the number of targets. Extensions of the PHD filter to the multiple model (MM) framework have been published and were implemented either with a Sequential Monte Carlo or a Gaussian Mixture approach. In this work, we introduce the multiple model version of the more elaborate CPHD filter. We present the derivation of the prediction and update steps of the MMCPHD particularized for the case of two target motion models and proceed to show that in the case of a single model, the new MMCPHD equations reduce to the original CPHD equations.
Brownian Motion with Active Fluctuations
NASA Astrophysics Data System (ADS)
Romanczuk, Pawel; Schimansky-Geier, Lutz
2011-06-01
We study the effect of different types of fluctuation on the motion of self-propelled particles in two spatial dimensions. We distinguish between passive and active fluctuations. Passive fluctuations (e.g., thermal fluctuations) are independent of the orientation of the particle. In contrast, active ones point parallel or perpendicular to the time-dependent orientation of the particle. We derive analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles, which yields an increased probability of low speeds in the presence of active fluctuations in comparison to the case of purely passive fluctuations. As a consequence, we predict sharply peaked Cartesian velocity probability densities at the origin. Finally, we show that such behavior may also occur for non-Gaussian active fluctuations and discuss briefly correlations of the fluctuating stochastic forces.
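A toy Euler-Maruyama simulation along these lines is sketched below: the speed relaxes to a preferred value v0 while the noise acts in the body frame, parallel and perpendicular to the instantaneous heading. The relaxation term and all parameter values are assumptions for illustration and do not reproduce the paper's model exactly.

    # Active Brownian particle with body-frame ("active") noise, 2-D.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, steps = 1e-3, 200_000
    gamma, v0 = 1.0, 1.0            # speed relaxation rate, preferred speed
    D_par, D_perp = 0.8, 0.8        # intensities of the active noise

    v = np.array([v0, 0.0])         # Cartesian velocity
    vx = np.empty(steps)
    for i in range(steps):
        s = np.linalg.norm(v) + 1e-12             # guard against zero speed
        e_par = v / s                             # along the heading
        e_perp = np.array([-e_par[1], e_par[0]])  # perpendicular to it
        xi1, xi2 = rng.standard_normal(2)
        v = (v - gamma * (s - v0) * e_par * dt
             + np.sqrt(2 * D_par * dt) * xi1 * e_par
             + np.sqrt(2 * D_perp * dt) * xi2 * e_perp)
        vx[i] = v[0]

    hist, _ = np.histogram(vx, bins=121, range=(-3, 3), density=True)
    print("p(vx ~ 0) =", hist[60], "  p(vx ~ v0) =", hist[80])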
Measurement of the top-quark mass with dilepton events selected using neuroevolution at CDF.
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; 
Lazzizzera, I; Lecompte, T; Lee, E; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shekhar, R; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Whiteson, S; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; 
Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2009-04-17
We report a measurement of the top-quark mass M_t in the dilepton decay channel t tbar -> b l'+ nu_l' bbar l- nubar_l. Events are selected with a neural network which has been directly optimized for statistical precision in top-quark mass using neuroevolution, a technique modeled on biological evolution. The top-quark mass is extracted from per-event probability densities that are formed by the convolution of leading-order matrix elements and detector resolution functions. The joint probability is the product of the probability densities from 344 candidate events in 2.0 fb^-1 of p pbar collisions collected with the CDF II detector, yielding a measurement of M_t = 171.2 +/- 2.7(stat) +/- 2.9(syst) GeV/c^2.
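The per-event likelihood construction can be illustrated with a deliberately simplified stand-in: replace the matrix-element densities with Gaussian "detector-smeared" densities and maximize the joint likelihood over the mass. The resolution and the synthetic events below are invented; only the event count and the quoted mass echo the abstract.

    # Toy mass extraction by maximizing a product of per-event densities.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(42)
    true_mass, resolution, n_events = 171.2, 12.0, 344
    events = rng.normal(true_mass, resolution, n_events)  # fake per-event masses

    def neg_log_likelihood(m):
        z = (events - m) / resolution
        return 0.5 * np.sum(z ** 2)   # -log prod of Gaussians, up to a constant

    fit = minimize_scalar(neg_log_likelihood, bounds=(150.0, 190.0),
                          method="bounded")
    stat_err = resolution / np.sqrt(n_events)
    print(f"M_t = {fit.x:.1f} +/- {stat_err:.1f} (stat) GeV/c^2")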
NASA Astrophysics Data System (ADS)
Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2015-12-01
Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is on the order of 10^6-10^9, the number of reactive molecules even in diluted systems might be on the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may not only result in a loss of accuracy, but may also lead to an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows obtaining the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of the molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction-versus-time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a limited number of particles.
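The core idea, reconstructing concentrations from particle positions with a kernel rather than box-counting on a fixed grid, fits in a few lines. The sketch below uses a 1-D plume and scipy's Gaussian KDE with a Silverman bandwidth; the particle number and the bandwidth rule are illustrative assumptions, not the time-evolving kernels discussed in the paper.

    # Kernel density reconstruction of a concentration field from particles.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)
    n_particles = 5_000
    mass_per_particle = 1.0 / n_particles
    positions = rng.normal(0.0, 1.0, n_particles)   # a diffused plume in 1-D

    kde = gaussian_kde(positions, bw_method="silverman")
    x = np.linspace(-4.0, 4.0, 201)
    concentration = kde(x) * n_particles * mass_per_particle
    print(concentration.max())   # close to the N(0,1) peak value, ~0.40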
The influence of synaptic size on AMPA receptor activation: a Monte Carlo model.
Montes, Jesus; Peña, Jose M; DeFelipe, Javier; Herreras, Oscar; Merchan-Perez, Angel
2015-01-01
Physiological and electron microscope studies have shown that synapses are functionally and morphologically heterogeneous and that variations in size of synaptic junctions are related to characteristics such as release probability and density of postsynaptic AMPA receptors. The present article focuses on how these morphological variations impact synaptic transmission. We based our study on Monte Carlo computational simulations of simplified model synapses whose morphological features have been extracted from hundreds of actual synaptic junctions reconstructed by three-dimensional electron microscopy. We have examined the effects that parameters such as synaptic size or density of AMPA receptors have on the number of receptors that open after release of a single synaptic vesicle. Our results indicate that the maximum number of receptors that will open after the release of a single synaptic vesicle may show a ten-fold variation in the whole population of synapses. When individual synapses are considered, there is also a stochastic variability that is maximal in small synapses with low numbers of receptors. The number of postsynaptic receptors and the size of the synaptic junction are the most influential parameters, while the packing density of receptors or the concentration of extrasynaptic transporters have little or no influence on the opening of AMPA receptors.
Polynomial chaos representation of databases on manifolds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu
2017-04-15
Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution, which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can thus be seen as an estimation of the posterior probability density of a set of parameters x, knowing prior information on these parameters and a likelihood that gives the probability density of observing a data set given x. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimization of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte-Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (present in the traditional adjustment procedure based on chi-square minimization) and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance ranges to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains will be presented. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of the probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross-section data assimilation, will be presented.
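A minimal sketch of the BMC idea follows, using a random-walk Metropolis chain to sample pdf(posterior) ∝ pdf(prior) × likelihood for a single scalar parameter. The toy model, prior and data below are placeholders for illustration, not nuclear-data models.

    # Random-walk Metropolis sampling of a posterior = prior x likelihood.
    import numpy as np

    rng = np.random.default_rng(7)
    data = rng.normal(2.5, 0.3, 20)   # fake measurements of one parameter

    def log_prior(x):                 # Gaussian prior, mean 2.0, sd 1.0
        return -0.5 * ((x - 2.0) / 1.0) ** 2

    def log_like(x):                  # Gaussian measurement model, sd 0.3
        return -0.5 * np.sum(((data - x) / 0.3) ** 2)

    x, samples = 2.0, []
    lp = log_prior(x) + log_like(x)
    for _ in range(20_000):
        xp = x + 0.1 * rng.standard_normal()
        lpp = log_prior(xp) + log_like(xp)
        if np.log(rng.uniform()) < lpp - lp:   # Metropolis accept/reject
            x, lp = xp, lpp
        samples.append(x)
    post = np.array(samples[2_000:])  # discard burn-in
    print(post.mean(), post.std())    # posterior mean and its uncertainty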
Morrison, Michael L.
1981-01-01
This study examines the foraging behavior and habitat selection of a MacGillivray's (Oporornis tolmiei)-Orange-crowned (Vermivora celata)-Wilson's (Wilsonia pusilla) warbler assemblage that occurred on early-growth clearcuts in western Oregon during breeding. Sites were divided into two groups based on the presence or absence of deciduous trees. Density estimates for each species were nearly identical between site classes except for Wilson's, whose density declined on nondeciduous tree sites. Analysis of vegetation parameters within the territories of the species identified deciduous tree cover as the variable of primary importance in the separation of warblers on each site, so that the assemblage could be arranged on a continuum of increasing deciduous tree cover. MacGillivray's and Wilson's extensively used shrub cover and deciduous tree cover, respectively; Orange-crowns were associated with both vegetation types. When the deciduous tree cover was reduced, Orange-crowns concentrated foraging activities in shrub cover and maintained nondisturbance densities. Indices of foraging-height diversity showed a marked decrease after the removal of deciduous trees. All species except MacGillivray's foraged lower in the vegetative substrate on the nondeciduous tree sites; MacGillivray's concentrated foraging activities in the low shrub cover on both sites. Indices of foraging overlap revealed a general pattern of decreased segregation by habitat after removal of deciduous trees. I suggest that the basic patterns of foraging behavior and habitat selection evidenced today in western North America were initially developed by ancestral warblers before their invasion of the west. Species successfully colonizing western habitats were probably preadapted to the conditions they encountered, with new habitats occupied without obvious evolutionary modifications.
Kenchington, Ellen; Murillo, Francisco Javier; Lirette, Camille; Sacau, Mar; Koen-Alonso, Mariano; Kenny, Andrew; Ollerhead, Neil; Wareham, Vonda; Beazley, Lindsay
2014-01-01
The United Nations General Assembly Resolution 61/105, concerning sustainable fisheries in the marine ecosystem, calls for the protection of vulnerable marine ecosystems (VME) from destructive fishing practices. Subsequently, the Food and Agriculture Organization (FAO) produced guidelines for identification of VME indicator species/taxa to assist in the implementation of the resolution, but recommended the development of case-specific operational definitions for their application. We applied kernel density estimation (KDE) to research vessel trawl survey data from inside the fishing footprint of the Northwest Atlantic Fisheries Organization (NAFO) Regulatory Area in the high seas of the northwest Atlantic to create biomass density surfaces for four VME indicator taxa: large-sized sponges, sea pens, small and large gorgonian corals. These VME indicator taxa were identified previously by NAFO using the fragility, life history characteristics and structural complexity criteria presented by FAO, along with an evaluation of their recovery trajectories. KDE, a non-parametric neighbour-based smoothing function, has been used previously in ecology to identify hotspots, that is, areas of relatively high biomass/abundance. We present a novel approach of examining relative changes in area under polygons created from encircling successive biomass categories on the KDE surface to identify "significant concentrations" of biomass, which we equate to VMEs. This allows identification of the VMEs from the broader distribution of the species in the study area. We provide independent assessments of the VMEs so identified using underwater images, benthic sampling with other gear types (dredges, cores), and/or published species distribution models of probability of occurrence, as available. For each VME indicator taxon we provide a brief review of their ecological function which will be important in future assessments of significant adverse impact on these habitats here and elsewhere.
Allert, A.L.; DiStefano, R.J.; Fairchild, J.F.; Schmitt, C.J.; McKee, M.J.; Girondo, J.A.; Brumbaugh, W.G.; May, T.W.
2013-01-01
The Big River (BGR) drains much of the Old Lead Belt mining district (OLB) in southeastern Missouri, USA, which was historically among the largest producers of lead–zinc (Pb–Zn) ore in the world. We sampled benthic fish and crayfish in riffle habitats at eight sites in the BGR and conducted 56-day in situ exposures to the woodland crayfish (Orconectes hylas) and golden crayfish (Orconectes luteus) in cages at four sites affected to differing degrees by mining. Densities of fish and crayfish, physical habitat and water quality, and the survival and growth of caged crayfish were examined at sites with no known upstream mining activities (i.e., reference sites) and at sites downstream of mining areas (i.e., mining and downstream sites). Lead, zinc, and cadmium were analyzed in surface and pore water, sediment, detritus, fish, crayfish, and other benthic macro-invertebrates. Metals concentrations in all materials analyzed were greater at mining and downstream sites than at reference sites. Ten species of fish and four species of crayfish were collected. Fish and crayfish densities were significantly greater at reference than mining or downstream sites, and densities were greater at downstream than mining sites. Survival of caged crayfish was significantly lower at mining sites than reference sites; downstream sites were not tested. Chronic toxic-unit scores and sediment probable effects quotients indicated significant risk of toxicity to fish and crayfish, and metals concentrations in crayfish were sufficiently high to represent a risk to wildlife at mining and downstream sites. Collectively, the results provided direct evidence that metals associated with historical mining activities in the OLB continue to affect aquatic life in the BGR.
LaMotte, A.E.; Greene, E.A.
2007-01-01
Spatial relations between land use and groundwater quality in the watershed adjacent to Assateague Island National Seashore, Maryland and Virginia, USA were analyzed by the use of two spatial models. One model used a logit analysis and the other was based on geostatistics. The models were developed and compared on the basis of existing concentrations of nitrate as nitrogen in samples from 529 domestic wells. The models were applied to produce spatial probability maps that show areas in the watershed where concentrations of nitrate in groundwater are likely to exceed a predetermined management threshold value. Maps of the watershed generated by logistic regression and probability kriging analysis showing where the probability of nitrate concentrations would exceed 3 mg/L (>0.50) compared favorably. Logistic regression was less dependent on the spatial distribution of sampled wells, and identified an additional high probability area within the watershed that was missed by probability kriging. The spatial probability maps could be used to determine the natural or anthropogenic factors that best explain the occurrence and distribution of elevated concentrations of nitrate (or other constituents) in shallow groundwater. This information can be used by local land-use planners, ecologists, and managers to protect water supplies and identify land-use planning solutions and monitoring programs in vulnerable areas. © 2006 Springer-Verlag.
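As a hedged sketch of the logit half of such an analysis: fit the exceedance of a 3 mg/L threshold against covariates, then read off exceedance probabilities at new locations. The covariate names and the synthetic data below are invented; only the threshold and the well count echo the study.

    # Logistic regression for P(nitrate > 3 mg/L); synthetic training data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(11)
    n = 529
    ag_fraction = rng.uniform(0.0, 1.0, n)    # nearby agricultural land fraction
    well_depth = rng.uniform(10.0, 150.0, n)  # feet below land surface
    logit = -1.0 + 3.0 * ag_fraction - 0.02 * well_depth
    exceeds = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([ag_fraction, well_depth])
    model = LogisticRegression().fit(X, exceeds)
    p = model.predict_proba([[0.8, 30.0]])[0, 1]  # shallow well, heavy agriculture
    print(f"P(nitrate > 3 mg/L) = {p:.2f}")

Evaluating predict_proba on a grid of covariate values and contouring at 0.5 yields exactly the kind of spatial probability map on which the two methods are compared.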
Density PDFs of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2012-09-01
The probability distribution functions (PDFs) of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| ≥ 5° are considered separately. Our results provide strong support for the existence of a lognormal density PDF in the diffuse ISM, consistent with a turbulent origin of the density structure in the diffuse gas.
NASA Astrophysics Data System (ADS)
Libera, Arianna; de Barros, Felipe P. J.; Riva, Monica; Guadagnini, Alberto
2017-10-01
Our study is keyed to the analysis of the interplay between engineering factors (i.e., transient pumping rates versus less realistic but commonly analyzed uniform extraction rates) and the heterogeneous structure of the aquifer (as expressed by the probability distribution characterizing transmissivity) on contaminant transport. We explore the joint influence of diverse (a) groundwater pumping schedules (constant and variable in time) and (b) representations of the stochastic heterogeneous transmissivity (T) field on temporal histories of solute concentrations observed at an extraction well. The stochastic nature of T is rendered by modeling its natural logarithm, Y = ln T, through a typical Gaussian representation and the recently introduced Generalized sub-Gaussian (GSG) model. The latter has the unique property of embedding scale-dependent non-Gaussian features of the main statistics of Y and its (spatial) increments, which have been documented in a variety of studies. We rely on numerical Monte Carlo simulations and compute the temporal evolution at the well of the low-order moments of the solute concentration (C), as well as statistics of the peak concentration (Cp), identified as the environmental performance metric of interest in this study. We show that the pumping schedule strongly affects the pattern of the temporal evolution of the first two statistical moments of C, regardless of the nature (Gaussian or non-Gaussian) of the underlying Y field, whereas the latter quantitatively influences their magnitude. Our results show that uncertainty associated with C and Cp estimates is larger when operating under a transient extraction scheme than under the action of a uniform withdrawal schedule. The probability density function (PDF) of Cp displays a long positive tail in the presence of a time-varying pumping schedule. All these aspects are magnified in the presence of non-Gaussian Y fields. Additionally, the PDF of Cp displays a bimodal shape for all types of pumping schemes analyzed, independent of the type of heterogeneity considered.
Compact Groups analysis using weak gravitational lensing II: CFHT Stripe 82 data
NASA Astrophysics Data System (ADS)
Chalela, Martín; Gonzalez, Elizabeth Johana; Makler, Martín; Lambas, Diego García; Pereira, Maria E. S.; O'mill, Ana; Shan, HuanYuan
2018-06-01
In this work we present a lensing study of Compact Groups (CGs) using data obtained from the high-quality Canada-France-Hawaii Telescope Stripe 82 Survey. Using stacking techniques we obtain the average density contrast profile. We analyse the dependence of the lensing signal on the groups' surface brightness and morphological content, for CGs in the redshift range z = 0.2 - 0.4. We obtain a larger lensing signal for CGs with higher surface brightness, probably due to their lower contamination by interlopers. Also, we find a strong dependence of the lensing signal on the group concentration parameter, with the most concentrated quintile showing a significant lensing signal, consistent with an isothermal sphere with σV = 336 ± 28 km/s and a NFW profile with R200 = 0.60 ± 0.05 h_{70}^{-1} Mpc. We also compare lensing results with dynamical estimates, finding good agreement with lensing determinations for CGs with higher surface brightness and higher concentration indexes. On the other hand, CGs that are more contaminated by interlopers show larger dynamical dispersions, since interlopers bias dynamical estimates to larger values, although the lensing signal is weakened.
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about the factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), the distribution of benthic items, and item density affect the bias and precision of density estimates, the detection probability of items, and the time costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and precision increased slightly with increasing core sampler size. Bias and precision were affected by benthic item density only at very low values (500-1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
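The simulation loop itself is compact. Below is a hedged re-implementation of one scenario, randomly distributed items sampled with circular cores in a 1 m² plot; the density, core area and sample size are arbitrary choices, and the clumped-pattern and time-cost models of the study are omitted.

    # Simulated core sampling of randomly distributed benthic items.
    import numpy as np

    rng = np.random.default_rng(5)
    true_density = 800                                 # items per m^2
    items = rng.uniform(0.0, 1.0, (true_density, 2))   # random pattern in 1 m^2

    core_area_cm2, n_cores = 50.0, 30
    r = np.sqrt(core_area_cm2 / 1e4 / np.pi)           # core radius in metres
    centres = rng.uniform(r, 1 - r, (n_cores, 2))

    counts = np.array([np.sum(np.linalg.norm(items - c, axis=1) < r)
                       for c in centres])
    est_density = counts.mean() / (core_area_cm2 / 1e4)
    detection = np.mean(counts >= 1)                   # P(core captures >= 1 item)
    print(f"estimated {est_density:.0f} items/m^2, detection {detection:.2f}")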
Spatial vent opening probability map of El Hierro Island (Canary Islands, Spain)
NASA Astrophysics Data System (ADS)
Becerril, Laura; Cappello, Annalisa; Galindo, Inés; Neri, Marco; Del Negro, Ciro
2013-04-01
The assessment of the probable spatial distribution of new eruptions is useful for managing and reducing volcanic risk. It can be achieved in different ways, but it becomes especially hard when dealing with volcanic areas that are less studied, poorly monitored and characterized by infrequent activity, such as El Hierro. Even though it is the youngest of the Canary Islands, before the 2011 eruption in the "Las Calmas Sea" El Hierro had been the least studied volcanic island of the Canaries, with more attention historically devoted to La Palma, Tenerife and Lanzarote. We propose a probabilistic method to build the susceptibility map of El Hierro, i.e. the spatial distribution of vent opening for future eruptions, based on the mathematical analysis of volcano-structural data collected mostly on the island and, secondly, on the submerged part of the volcano, up to a distance of ~10-20 km from the coast. The volcano-structural data were collected through new fieldwork measurements, bathymetric information, and analysis of geological maps, orthophotos and aerial photographs. They were divided into different datasets and converted into separate, weighted probability density functions, which were then combined in a non-homogeneous Poisson process to produce the volcanic susceptibility map. Future eruptive activity on El Hierro is mainly concentrated in the rift zones, extending also beyond the shoreline. The highest probabilities of hosting new eruptions are located in the distal parts of the South and West rifts, with the maximum reached in the south-western area of the West rift. High probabilities are also observed in the Northeast and South rifts and in the submarine parts of the rifts. This map represents the first effort to deal with volcanic hazard at El Hierro and can be a support tool for decision makers in land planning, emergency planning and civil defence actions.
Sharps, Elwyn; Smart, Jennifer; Mason, Lucy R; Jones, Kate; Skov, Martin W; Garbutt, Angus; Hiddink, Jan G
2017-08-01
Conservation grazing for breeding birds needs to balance the positive effects on vegetation structure against the negative effects of nest trampling. In the UK, populations of Common redshank Tringa totanus breeding on saltmarshes declined by >50% between 1985 and 2011. These declines have been linked to changes in grazing management. The highest breeding densities of redshank on saltmarshes are found in lightly grazed areas. Conservation initiatives have encouraged low-intensity grazing at <1 cattle/ha, but even these levels of grazing can result in high levels of nest trampling. If livestock distribution is not spatially or temporally homogeneous but concentrated where and when redshank breed, rates of nest trampling may be much higher than expected based on livestock density alone. By GPS tracking cattle on saltmarshes and monitoring trampling of dummy nests, this study quantified (i) the spatial and temporal distribution of cattle in relation to the distribution of redshank nesting habitats and (ii) trampling rates of dummy nests. The distribution of livestock was highly variable depending on both the time in the season and the saltmarsh under study, with cattle using between 3% and 42% of the saltmarsh extent and spending most of their time on higher-elevation habitat within 500 m of the sea wall, but moving further onto the saltmarsh as the season progressed. Breeding redshank also nest in these higher-elevation zones, and this breeding coincides with the early period of grazing. The probability of nest trampling was correlated with livestock density and was up to six times higher in the areas where redshank breed. This overlap in both space and time between the habitat use of cattle and redshank means that the trampling probability of a nest can be much higher than would be expected based on standard measures of cattle density. Synthesis and applications: Because saltmarsh grazing is required to maintain a favorable vegetation structure for redshank breeding, grazing management should aim to keep livestock away from redshank nesting habitat between mid-April and mid-July, when nests are active, by delaying the onset of grazing or introducing a rotational grazing system.
Diesel oil removal by immobilized Pseudoxanthomonas sp. RN402.
Nopcharoenkul, Wannarak; Netsakulnee, Parichat; Pinyakong, Onruthai
2013-06-01
Pseudoxanthomonas sp. RN402 was capable of degrading diesel, crude oil, n-tetradecane and n-hexadecane. The RN402 cells were immobilized on the surface of high-density polyethylene plastic pellets at a maximum cell density of 10^8 most probable number (MPN) g^-1 of plastic pellets. The immobilized cells not only showed a higher efficacy of diesel oil removal than free cells but could also degrade higher concentrations of diesel oil. The rate of diesel oil removal by immobilized RN402 cells in liquid culture was 1,050 mg l^-1 day^-1. Moreover, the immobilized cells could maintain high efficacy and viability throughout 70 cycles of bioremedial treatment of diesel-contaminated water. The stability of diesel oil degradation in the immobilized cells resulted from the ability of living RN402 cells to attach to material surfaces by biofilm formation, as was shown by CLSM imaging. These characteristics of the immobilized RN402 cells, including high degradative efficacy, stability and flotation, make them suitable for the purpose of continuous wastewater bioremediation.
Schmidt, Debra A; Ellersieck, Mark R; Cranfield, Michael R; Karesh, William B
2006-09-01
Cholesterol concentrations in captive gorillas and orangutans vary widely within species and average approximately 244 mg/dl for gorillas and 169 mg/dl for orangutans as published previously. The International Species Inventory System reports higher concentrations of 275 and 199 mg/dl for gorillas and orangutans, respectively. It is unknown whether these values were typical, influenced by captive management, or both. To answer this question, banked serum samples from free-ranging mountain gorillas (Gorilla beringei), western lowland gorillas (Gorilla gorilla gorilla), and Bornean orangutans (Pongo pygmaeus) were analyzed for total cholesterol, triglyceride, high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol concentrations. Mountain gorillas did not differ significantly from free-ranging western lowland gorillas in cholesterol, triglyceride, high-density lipoprotein cholesterol, or low-density lipoprotein cholesterol concentrations, indicating mountain gorilla values could be a model for western lowland gorillas. Captive gorilla total cholesterol and low-density lipoprotein cholesterol concentrations were significantly higher (P < 0.05) than in free-ranging groups. Triglyceride concentrations for captive gorillas were significantly higher (P < 0.05) than the male mountain and western lowland gorillas, but they were not significantly different from the female mountain gorillas. Captive orangutan total cholesterol concentrations were only higher (P < 0.05) than the free-ranging female orangutans, whereas captive orangutan low-density lipoprotein cholesterol concentrations were significantly higher (P < 0.05) than both free-ranging male and female orangutans. Calculated and measured low-density lipoprotein cholesterol concentrations were compared for all free-ranging animals and were significantly different (P < 0.05) for all groups, indicating Friedewald's equation for calculating low-density lipoprotein cholesterol is not appropriate for use with nonfasted apes. The higher total cholesterol and low-density lipoprotein cholesterol concentrations in captive apes may predispose them to cardiovascular disease and might be attributed to diets, limited energy expenditure, and genetics.
NASA Astrophysics Data System (ADS)
Benda, L. E.
2009-12-01
Stochastic geomorphology refers to the interaction of the stochastic field of sediment supply with hierarchically branching river networks where erosion, sediment flux and sediment storage are described by their probability densities. There are a number of general principles (hypotheses) that stem from this conceptual and numerical framework that may inform the science of erosion and sedimentation in river basins. Rainstorms and other perturbations, characterized by probability distributions of event frequency and magnitude, stochastically drive sediment influx to channel networks. The frequency-magnitude distribution of sediment supply that is typically skewed reflects strong interactions among climate, topography, vegetation, and geotechnical controls that vary between regions; the distribution varies systematically with basin area and the spatial pattern of erosion sources. Probability densities of sediment flux and storage evolve from more to less skewed forms downstream in river networks due to the convolution of the population of sediment sources in a watershed that should vary with climate, network patterns, topography, spatial scale, and degree of erosion asynchrony. The sediment flux and storage distributions are also transformed downstream due to diffusion, storage, interference, and attrition. In stochastic systems, the characteristically pulsed sediment supply and transport can create translational or stationary-diffusive valley and channel depositional landforms, the geometries of which are governed by sediment flux-network interactions. Episodic releases of sediment to the network can also drive a system memory reflected in a Hurst Effect in sediment yields and thus in sedimentological records. Similarly, discrete events of punctuated erosion on hillslopes can lead to altered surface and subsurface properties of a population of erosion source areas that can echo through time and affect subsequent erosion and sediment flux rates. Spatial patterns of probability densities have implications for the frequency and magnitude of sediment transport and storage and thus for the formation of alluvial and colluvial landforms throughout watersheds. For instance, the combination and interference of probability densities of sediment flux at confluences creates patterns of riverine heterogeneity, including standing waves of sediment with associated age distributions of deposits that can vary from younger to older depending on network geometry and position. Although the watershed world of probability densities is rarefied and typically confined to research endeavors, it has real world implications for the day-to-day work on hillslopes and in fluvial systems, including measuring erosion, sediment transport, mapping channel morphology and aquatic habitats, interpreting deposit stratigraphy, conducting channel restoration, and applying environmental regulations. A question for the geomorphology community is whether the stochastic framework is useful for advancing our understanding of erosion and sedimentation and whether it should stimulate research to further develop, refine and test these and other principles. For example, a changing climate should lead to shifts in probability densities of erosion, sediment flux, storage, and associated habitats and thus provide a useful index of climate change in earth science forecast models.
Water quality in the Little Sac River basin near Springfield, Missouri, 1999-2001
Smith, Brenda J.
2002-01-01
The Little Sac River, north of Springfield, Missouri, flows through mainly agricultural and forest land. However, the quality of the river water is a concern because the river flows into Stockton Lake, which is a supplemental drinking water source for Springfield. Large bacterial densities and nutrient concentrations are primary concerns to the water quality of the river. A 29-river-mile reach of the Little Sac River is on the 1998 list of waters of Missouri designated under section 303(d) of the Federal Clean Water Act because of fecal coliform densities larger than the Missouri Department of Natural Resources standard (hereinafter referred to as Missouri standard) of 200 colonies per 100 milliliters for whole-body contact recreation. During an investigation of the water quality in the Little Sac River by the U.S. Geological Survey, in cooperation with the Watershed Committee of the Ozarks, fecal coliform bacteria densities exceeded the Missouri standard (the standard applies from April 1 through October 31) in one sample from a site near Walnut Grove. At other sites on the Little Sac River, the Missouri standard was exceeded in two samples and equalled in one sample upstream from the Northwest Wastewater Treatment Plant (NW WTP) and in one sample immediately downstream from the NW WTP. Effluent from the NW WTP flows into the Little Sac River. Annually from April 1 through October 31, the effluent is disinfected to meet the Missouri standard for whole-body contact recreation. Fecal coliform bacteria densities in samples collected during this period generally were less than 100 colonies per 100 milliliters. For the rest of the year, when the effluent was not disinfected, the bacteria densities in samples ranged from 50 (sample collected on November 1, 2000) to 10,100 colonies per 100 milliliters (both counts were non-ideal). When the effluent was disinfected and the fecal coliform bacteria density was small, samples from sites upstream and downstream from the NW WTP had a bacteria density larger than the density in the effluent. Other sources of bacteria are likely to be present in the study area in addition to the NW WTP. These potential sources include effluent from domestic septic systems and animal wastes. Nutrient concentrations in the Little Sac River immediately downstream from the NW WTP were affected by effluent from the NW WTP and possibly other sources. At two sites upstream from the NW WTP, median nitrite plus nitrate concentrations were 1.1 and 1.4 milligrams per liter. The median nitrite plus nitrate concentration for the effluent from the NW WTP was 6.4 milligrams per liter, and the median concentration decreased downstream in the Little Sac River to 2.2, 1.2, and 0.56 milligrams per liter. The effects of the effluent from the NW WTP on the water quality of the Little Sac River downstream from the NW WTP were reflected in an increase in discharge (effluent from the NW WTP can be as much as 50 percent of the flow at the site about 1.5 river miles downstream from the NW WTP), an increase in specific conductance values, an increase in several inorganic constituent concentrations, including calcium, magnesium, and sulfate, and a large increase in sodium and chloride concentrations. The effluent from the NW WTP seemed to have no effect on the pH value, temperature, and dissolved oxygen concentrations in the Little Sac River. Results of repetitive element polymerase chain reaction (rep-PCR) pattern analysis indicated that most Escherichia coli (E. coli) bacteria in water samples probably were from nonhuman sources, such as horses and cattle. The rep-PCR pattern analysis indicated that horses were an important source of E. coli downstream from the NW WTP, which was consistent with horses pastured adjacent to the sampling site. Fecal coliform bacteria loads increased upstream from the NW WTP from the most upstream site to the site immediately upstream from the NW WTP. Loads in the effluent from the NW WTP and also tho
Assess arsenic distribution in groundwater applying GIS in capital of Punjab, Pakistan
NASA Astrophysics Data System (ADS)
Akhtar, M. M.; Zhonghua, T.; Sissou, Z.; Mohamadi, B.
2015-03-01
Arsenic contamination of groundwater resources threatens the health of millions of people worldwide, particularly in the densely populated river deltas of Southeast Asia. Arsenic is a health concern because of its significant toxicity and its widespread presence in potable water. The major sources of arsenic pollution may be natural processes, such as the dissolution of arsenic-containing minerals, as well as anthropogenic activities. Lahore is a groundwater-dependent city where arsenic contamination of potable water has recently become a major environmental health management issue, especially in the plain region, where population density is very high. In this study, GIS was used to visualize the distribution of arsenic concentrations in groundwater through geostatistical analysis and to map exposure risk zones for two years (2010 and 2012). Data were compared between towns and the variation in concentration evaluated; an ANOVA test was also applied to compare concentrations between towns and years. Arsenic concentrations ranged widely, 7.3-67.8 and 5.2-69.3 μg L-1 in 2010 and 2012, respectively. Over 71% of the area showed arsenic concentrations in the range of 20 to 30 μg L-1 in both years. In 2012, however, arsenic concentrations over 40 μg L-1 covered 7.6% of the area of Data Gunjbuksh and 8.1% of Ravi Town, while over 90% of the area of Allama Iqbal, Aziz Bhatti and Samanabad Towns contained arsenic concentrations between 20 and 30 μg L-1. The ANOVA test yielded probabilities less than 0.05, indicating significant differences among towns. In light of these results, urgent steps are needed to ensure groundwater protection and preservation for the future.
NASA Astrophysics Data System (ADS)
Mastrolorenzo, G.; Pappalardo, L.; Troise, C.; Panizza, A.; de Natale, G.
2005-05-01
Integrated volcanological-probabilistic approaches have been used to simulate pyroclastic density currents and fallout and to produce hazard maps for the Campi Flegrei and Somma-Vesuvius areas. On the basis of analyses of all types of pyroclastic flows, surges, secondary pyroclastic density currents and fallout events that occurred in the volcanological history of the two volcanic areas, and of the evaluation of the probability of each type of event, matrices of input parameters for numerical simulation were constructed. The multi-dimensional input matrices include the main parameters controlling pyroclast transport and deposition, as well as the set of possible eruptive vents used in the simulation program. The probabilistic hazard maps provide, for each point of the Campanian area, the yearly probability of being affected by a given event of a given intensity and the resulting damage. Probabilities of a few events per thousand years are typical of most areas within a range of about 10 km of the volcanoes, including Naples. The results provide constraints for the emergency plans in the Neapolitan area.
Coulomb Impurity Potential RbCl Quantum Pseudodot Qubit
NASA Astrophysics Data System (ADS)
Ma, Xin-Jun; Qi, Bin; Xiao, Jing-Lin
2015-08-01
By employing a variational method of the Pekar type, we study the eigenenergies and the corresponding eigenfunctions of the ground and first-excited states of an electron strongly coupled to LO phonons in a RbCl quantum pseudodot (QPD) with a hydrogen-like impurity at the center. This QPD system may be used as a two-level quantum qubit. The expressions for the electron's probability density as a function of time and the coordinates, and for the oscillating period as a function of the Coulombic impurity potential and the polaron radius, have been derived. The results indicate ① that the probability density of the electron oscillates in the QPD with a certain oscillating period, ② that due to the presence of the asymmetrical potential in the z direction of the RbCl QPD, the electron probability density shows a double-peak configuration, whereas there is only one peak if the confinement is a two-dimensional symmetric structure in the xy plane of the QPD, and ③ that the oscillation period is a decreasing function of the Coulombic impurity potential, whereas it is an increasing one of the polaron radius.
Hou, Xiao-bin; Hu, Yong-cheng; He, Jin-quan
2013-02-01
To investigate the feasibility of determining the surface density of arginine-glycine-aspartic acid (RGD) peptides grafted onto allogeneic bone by an isotopic tracing method involving labeling these peptides with 125I, evaluating the impact of the input concentration of RGD peptides on surface density, and establishing the correlation between surface density and input concentration. A synthetic RGD-containing polypeptide (EPRGDNYR) was labeled with 125I and its specific radioactivity calculated. Reactive solutions of RGD peptide, with radioactive 125I-RGD as a probe, were prepared at input concentrations of 0.01 mg/mL, 0.10 mg/mL, 0.50 mg/mL, 1.00 mg/mL, 2.00 mg/mL and 4.00 mg/mL. Using 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide as a cross-linking agent, reactions were induced by placing allogeneic bone fragments into the RGD peptide solutions of different input concentrations. On completion of the reactions, the surface densities of RGD peptides grafted onto the allogeneic bone fragments were calculated by evaluating the radioactivity and surface areas of the bone fragments. The impact of the input concentration of RGD peptides on surface density was measured and a curve constructed. Measurements with a γ-counter showed that the RGD peptides had been successfully labeled with 125I. The allogeneic bone fragments were radioactive after the reaction, demonstrating that the RGD peptides had been successfully grafted onto their surfaces. It was also found that the surface density increased with increasing input concentration. It was concluded that the surface density of RGD peptides is quantitatively related to their input concentration: with increasing input concentration, the surface density gradually increases to a saturation value. © 2013 Chinese Orthopaedic Association and Wiley Publishing Asia Pty Ltd.
Does probability of occurrence relate to population dynamics?
Thuiller, Wilfried; Münkemüller, Tamara; Schiffers, Katja H.; Georges, Damien; Dullinger, Stefan; Eckhart, Vincent M.; Edwards, Thomas C.; Gravel, Dominique; Kunstler, Georges; Merow, Cory; Moore, Kara; Piedallu, Christian; Vissault, Steve; Zimmermann, Niklaus E.; Zurell, Damaris; Schurr, Frank M.
2014-01-01
Hutchinson defined species' realized niche as the set of environmental conditions in which populations can persist in the presence of competitors. In terms of demography, the realized niche corresponds to the environments where the intrinsic growth rate (r) of populations is positive. Observed species occurrences should reflect the realized niche when additional processes like dispersal and local extinction lags do not have overwhelming effects. Despite the foundational nature of these ideas, quantitative assessments of the relationship between range-wide demographic performance and occurrence probability have not been made. This assessment is needed both to improve our conceptual understanding of species' niches and ranges and to develop reliable mechanistic models of species geographic distributions that incorporate demography and species interactions. The objective of this study is to analyse how demographic parameters (intrinsic growth rate r and carrying capacity K) and population density (N) relate to occurrence probability (Pocc). We hypothesized that these relationships vary with species' competitive ability. Demographic parameters, density, and occurrence probability were estimated for 108 tree species from four temperate forest inventory surveys (Québec, western USA, France and Switzerland). We used published information on shade tolerance as an indicator of light competition strategy, assuming that high tolerance denotes high competitive capacity in stable forest environments. Interestingly, relationships between demographic parameters and occurrence probability did not vary substantially across degrees of shade tolerance and regions. Although they were influenced by the uncertainty in the estimation of the demographic parameters, we found that r was generally negatively correlated with Pocc, while N, and for most regions K, were generally positively correlated with Pocc. Thus, in temperate forest trees the regions of highest occurrence probability are those with high densities but slow intrinsic population growth rates. The uncertain relationships between demography and occurrence probability suggest caution when linking species distribution and demographic models.
Representation of layer-counted proxy records as probability densities on error-free time axes
NASA Astrophysics Data System (ADS)
Boers, Niklas; Goswami, Bedartha; Ghil, Michael
2016-04-01
Time series derived from paleoclimatic proxy records exhibit substantial dating uncertainties in addition to the measurement errors of the proxy values. For radiometrically dated proxy archives, Goswami et al. [1] have recently introduced a framework rooted in Bayesian statistics that successfully propagates the dating uncertainties from the time axis to the proxy axis. The resulting proxy record consists of a sequence of probability densities over the proxy values, conditioned on prescribed age values. One of the major benefits of this approach is that the proxy record is represented on an accurate, error-free time axis. Such unambiguous dating is crucial, for instance, in comparing different proxy records. This approach, however, is not directly applicable to proxy records with layer-counted chronologies, as for example ice cores, which are typically dated by counting quasi-annually deposited ice layers. Hence the nature of the chronological uncertainty in such records is fundamentally different from that in radiometrically dated ones. Here, we introduce a modification of the Goswami et al. [1] approach that is specifically designed for layer-counted proxy records, instead of radiometrically dated ones. We apply our method to isotope ratios and dust concentrations in the NGRIP core, using a published 60,000-year chronology [2]. It is shown that the further one goes into the past, the more the layer-counting errors accumulate and lead to growing uncertainties in the probability density sequence for the proxy values that results from the proposed approach. For the older parts of the record, these uncertainties increasingly hinder a statistically sound estimation of the proxy values. This difficulty implies that great care has to be exercised when comparing, and in particular aligning, specific events among different layer-counted proxy records. On the other hand, when attempting to derive stochastic dynamical models from the proxy records, one is only interested in the relative changes, i.e. in the increments of the proxy values. In such cases, only the relative (non-cumulative) counting errors matter. For the example of the NGRIP records, we show that a precise estimation of these relative changes is in fact possible. References: [1] Goswami et al., Nonlin. Processes Geophys. (2014) [2] Svensson et al., Clim. Past (2008)
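The distinction between cumulative and relative counting errors is easy to make concrete. Below is a minimal numerical sketch, assuming hypothetical, independent per-layer miscounting rates (the actual NGRIP error structure is more involved): the standard deviation of the accumulated age error grows with depth, while the error over a fixed increment stays bounded.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_real = 60_000, 100        # quasi-annual layers (as in NGRIP), realizations
p_miss = p_double = 0.01              # hypothetical per-layer miscounting rates

# Per-layer counting error: -1 (layer missed), +1 (layer double-counted), else 0.
err = rng.choice([-1, 0, 1], size=(n_real, n_layers),
                 p=[p_miss, 1 - p_miss - p_double, p_double])
age_err = err.cumsum(axis=1)          # cumulative age error grows with depth

print("std of age error at the oldest layer:", age_err[:, -1].std())
print("std of the error over a 100-layer increment:",
      (age_err[:, 10_100] - age_err[:, 10_000]).std())
```

Under these assumptions the age error at 60,000 layers has a standard deviation of roughly 35 layers, while any 100-layer increment carries an error of only about 1.4 layers, which is why increment-based (stochastic-model) analyses remain feasible deep in the record.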
Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions
2009-03-01
United States Air Force, Department of Defense, or the United States Government. AFIT/GE/ENG/09-23. Low Probability of Intercept Waveforms via... D, the random variable governing the distribution of dither values; p_D(t), the probability density function of the dither... potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath.
Summary of intrinsic and extrinsic factors affecting detection probability of marsh birds
Conway, C.J.; Gibbs, J.P.
2011-01-01
Many species of marsh birds (rails, bitterns, grebes, etc.) rely exclusively on emergent marsh vegetation for all phases of their life cycle, and many organizations have become concerned about the status and persistence of this group of birds. Yet, marsh birds are notoriously difficult to monitor due to their secretive habits. We synthesized the published and unpublished literature and summarized the factors that influence detection probability of secretive marsh birds in North America. Marsh birds are more likely to respond to conspecific than heterospecific calls, and the seasonal peak in vocalization probability varies among co-existing species. The effectiveness of morning versus evening surveys varies among species and locations. Vocalization probability appears to be positively correlated with density in breeding Virginia Rails (Rallus limicola), Soras (Porzana carolina), and Clapper Rails (Rallus longirostris). Movement of birds toward the broadcast source creates biases when using count data from call-broadcast surveys to estimate population density. Ambient temperature, wind speed, cloud cover, and moon phase affected detection probability in some, but not all, studies. Better estimates of detection probability are needed. We provide recommendations that would help improve future marsh bird survey efforts and a list of 14 priority information and research needs that represent gaps in our current knowledge where future resources are best directed. © Society of Wetland Scientists 2011.
Assessing Aircraft Supply Air to Recommend Compounds for Timely Warning of Contamination
NASA Astrophysics Data System (ADS)
Fox, Richard B.
Taking aircraft out of service for even one day to correct fume-in-cabin events can cost the industry roughly $630 million per year in lost revenue. This quantitative correlational study investigated relationships between measured concentrations of contaminants in bleed air and the probability of odor detectability. Data were collected from 94 aircraft engine and auxiliary power unit (APU) bleed air tests in an archival data set spanning 1997 to 2011; initial Pearson correlation found no relationships and was followed by regression analysis for individual contaminants. Significant relationships were found between concentrations of compounds in bleed air and the probability of odor detectability (p<0.05), as well as between compound concentration and the probability of sensory irritancy detectability. Study results may be useful for establishing early warning levels. Predictive trend monitoring, a method to identify potential pending failure modes within a mechanical system, may allow scheduled down-time for maintenance as a planned event, rather than repair after a mechanical failure, and thereby reduce operational costs associated with odor-in-cabin events. Twenty compounds (independent variables) were found statistically significant as related to probability of odor detectability (dependent variable 1). Seventeen compounds (independent variables) were found statistically significant as related to probability of sensory irritancy detectability (dependent variable 2). Additional research was recommended to further investigate relationships between concentrations of contaminants and probability of odor or sensory irritancy detectability for all turbine oil brands. Further research on implementation of predictive trend monitoring may be warranted to demonstrate how the monitoring process might be applied in flight.
Self-Organization in 2D Traffic Flow Model with Jam-Avoiding Drive
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-04-01
A stochastic cellular automaton (CA) model is presented to investigate traffic jams formed by self-organization in two-dimensional (2D) traffic flow. The CA model is an extended version of the 2D asymmetric exclusion model that takes jam-avoiding drive into account. Each site contains either a car moving up, a car moving to the right, or is empty. An up car can shift right with probability p_ja if it is blocked ahead by other cars. It is shown that three phases (the low-density phase, the intermediate-density phase and the high-density phase) appear in the traffic flow. The intermediate-density phase is characterized by the rightward motion of up cars. The jamming transition to the high-density jamming phase occurs at a higher car density than without jam-avoiding drive. The jamming transition point p_2c increases with the shifting probability p_ja. In the deterministic limit p_ja = 1, a new jamming transition is found to occur from the low-density synchronized-shifting phase to the high-density moving phase with increasing car density. In the synchronized-shifting phase, up cars do not move upward but shift to the right, synchronizing with the motion of right cars. We show that jam-avoiding drive has an important effect on the dynamical jamming transition.
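A minimal sketch of this kind of model is given below, under assumed update rules (a BML-style lattice with the jam-avoiding sideways shift; the paper's exact update ordering may differ):

```python
import numpy as np

def step(grid, p_ja, rng):
    """One update of a BML-style lattice: 0 = empty, 1 = right car, 2 = up car.
    Right cars move first, then up cars; an up car blocked ahead shifts one
    site to the right with probability p_ja (the jam-avoiding drive). Row 0 is
    the bottom, so the site "above" row i is row i + 1 (periodic boundaries)."""
    # Right-moving cars advance if the site to their right is empty.
    mv = (grid == 1) & (np.roll(grid, -1, axis=1) == 0)
    grid[mv] = 0
    grid[np.roll(mv, 1, axis=1)] = 1
    # Up-moving cars advance if the site above is empty; the rest are blocked.
    up = (grid == 2)
    mv = up & (np.roll(grid, -1, axis=0) == 0)
    blocked = up & ~mv
    grid[mv] = 0
    grid[np.roll(mv, 1, axis=0)] = 2
    # Jam-avoiding drive: blocked up cars shift right with probability p_ja.
    shift = blocked & (np.roll(grid, -1, axis=1) == 0) & (rng.random(grid.shape) < p_ja)
    grid[shift] = 0
    grid[np.roll(shift, 1, axis=1)] = 2
    return grid

rng = np.random.default_rng(1)
N, density, p_ja = 100, 0.25, 0.5
grid = rng.choice([0, 1, 2], size=(N, N), p=[1 - density, density / 2, density / 2])
for _ in range(1000):
    grid = step(grid, p_ja, rng)
```

Sweeping the car density and p_ja in such a simulation and measuring the fraction of cars that moved per sweep is one way to map out the jammed and free-flowing phases the abstract describes.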
Durakoğlugil, Murtaza Emre; Ayaz, Teslime; Kocaman, Sinan Altan; Kırbaş, Aynur; Durakoğlugil, Tuğba; Erdoğan, Turan; Çetin, Mustafa; Şahin, Osman Zikrullah; Çiçek, Yüksel
2015-01-01
Objective: Catestatin has several cardiovascular actions, in addition to diminished sympatho-adrenal flow. Decreased plasma catestatin levels may reflect a predisposition for the development of hypertension and metabolic disorders. We planned to investigate the possible roles of catestatin in untreated hypertensive patients. As a secondary objective, we compared catestatin concentrations of healthy subjects with those of hypertensive patients in order to understand whether catestatin is increased reactively or diminished at onset. Methods: Our study was cross-sectional and observational. The patient group, comprising 109 consecutive untreated hypertensive patients without additional systemic or coronary heart disease, underwent evaluations of plasma catestatin, waist circumference, lipid parameters, left ventricular mass, carotid intima-media thickness, and flow-mediated dilation of the brachial artery. Additionally, we measured catestatin concentrations of 38 apparently healthy subjects without any disease using a commercial enzyme-linked immunosorbent assay kit. Results: We documented increased catestatin concentrations in previously untreated hypertensive patients compared to healthy controls (2.27±0.83 vs. 1.92±0.49 ng/mL, p=0.004). However, this association became insignificant after adjustments for age, gender, height, and weight. Within the patient group, catestatin levels were significantly higher in females. Among all study parameters, age, high-density lipoprotein cholesterol (HDL-C) correlated positively to plasma catestatin, whereas triglycerides, hemoglobin, and left ventricular mass correlated negatively to plasma catestatin. We could not detect an association between vascular parameters and catestatin. Catestatin levels were significantly elevated with increasing HDL-C (1.91±0.37, 2.26±0.79, and 3.1±1.23 ng/mL in patients with HDL-C <40, 40-60, and >60 mg/dL, respectively). Multiple linear regression analysis revealed age (beta: 0.201, p=0.041) and HDL-C (beta: 0.390, p<0.001) as independent correlates of plasma catestatin concentration. Additionally, male gender (beta:-0.330, p=0.001) and plasma catestatin (beta: 0.299, p=0.002) were significantly associated with HDL-C concentrations. Conclusion: We documented that plasma catestatin is an independent predictor of high-density lipoprotein cholesterol. In addition to antihypertensive effects, catestatin appears to be related to improved lipid and metabolic profiles. Coexistence of low catestatin levels with low HDL-C may provide a probable mechanism for the predictive value of low HDL-C for increased hypertension and cardiovascular events. PMID:25538000
Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-06-15
The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
Single-molecule stochastic times in a reversible bimolecular reaction
NASA Astrophysics Data System (ADS)
Keller, Peter; Valleriani, Angelo
2012-08-01
In this work, we consider the reversible reaction between reactants of species A and B to form the product C. We consider this reaction as a prototype of many pseudo-bimolecular reactions in biology, such as molecular motors. We derive the exact probability density of the stochastic waiting time that a molecule of species A needs until it reacts with a molecule of species B. We perform this computation taking fully into account the stochastic fluctuations in the number of molecules of species B. We show that at low numbers of participating molecules, the exact probability density differs from the exponential density derived by assuming the law of mass action. Finally, we discuss the condition of detailed balance in the exact stochastic treatment and in the approximate treatment.
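The effect described here can be reproduced with a short Gillespie simulation. The sketch below tracks a tagged A molecule in the reversible reaction A + B ⇌ C with fluctuating B copy numbers; the rate constants and initial counts are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
k_on, k_off = 1.0, 0.5                 # hypothetical rate constants

def first_binding_time(nA, nB, nC=0):
    """Gillespie simulation of A + B <-> C. Returns the waiting time until one
    tagged A molecule reacts, with the B copy number fluctuating stochastically."""
    t = 0.0
    while True:
        a_bind, a_unbind = k_on * nA * nB, k_off * nC
        a_tot = a_bind + a_unbind
        t += rng.exponential(1.0 / a_tot)
        if rng.random() < a_bind / a_tot:      # some A binds some B
            if rng.random() < 1.0 / nA:        # it was the tagged A molecule
                return t
            nA, nB, nC = nA - 1, nB - 1, nC + 1
        else:                                  # a C complex dissociates
            nA, nB, nC = nA + 1, nB + 1, nC - 1

times = np.array([first_binding_time(5, 3) for _ in range(20_000)])
print(times.mean(), 1.0 / (k_on * 3))   # sample mean vs. mass-action exponential mean
```

At these small copy numbers the empirical waiting-time distribution visibly departs from the mass-action exponential, which is the qualitative point of the paper.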
Probability density function approach for compressible turbulent reacting flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.
1994-01-01
The objective of the present work is to extend the probability density function (PDF) turbulence model to compressible reacting flows. The probability density function of the species mass fractions and enthalpy is obtained by solving a PDF evolution equation using a Monte Carlo scheme. The PDF solution procedure is coupled with a compressible finite-volume flow solver which provides the velocity and pressure fields. A modeled PDF equation for compressible flows, capable of treating flows with shock waves and suitable for the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume/Monte Carlo solution procedure is discussed. Two supersonic diffusion flames are studied using the proposed PDF model and the results are compared with experimental data; marked improvements over solutions without the PDF model are observed.
Univariate Probability Distributions
ERIC Educational Resources Information Center
Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.
2012-01-01
We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…
Roadside and in-vehicle concentrations of monoaromatic hydrocarbons
NASA Astrophysics Data System (ADS)
Leung, Pei-Ling; Harrison, Roy M.
Airborne concentrations of benzene, toluene and the xylenes have been measured inside passenger cars whilst driven along major roads in the city of Birmingham, UK, as well as immediately outside the car, and at the roadside. A comparison of concentrations measured in the car with those determined immediately outside showed little difference, with a mean ratio for benzene of 1.17±0.34 and for toluene 1.11±0.16 (n=53). The ratio of in-car to roadside concentration was rather higher at 1.55±0.68 for benzene and 1.54±0.72 for toluene (n=53). The roadside concentrations were typically several-fold higher than those measured at a background suburban monitoring station within Birmingham, although much variation was seen between congested and uncongested roads, with concentrations adjacent to uncongested roads similar to those measured at the background monitoring station. Measurements of benzene and toluene in a car driven on a rural road outside the city showed very comparable in-car and out-of-car concentrations, strengthening the conclusion that pollution inside the car is derived from pollutants outside entering with ventilation air. The exceptions were an older car where in-car concentrations appreciably exceeded those outside (in- to out-vehicle ratio = 2.3 for benzene and 2.2 for toluene, n=5), indicating probable self-contamination, and a very new car which built up increased VOC concentrations when stationary without ventilation (in- to out-vehicle ratio = 2.4 for benzene and 3.3 for toluene, n=5). A further set of measurements inside London taxi cabs showed concentrations to be influenced by the area within which the taxi was driven, the traffic density and the presence of passengers smoking cigarettes.
Site-specific lead exposure from lead pellet ingestion in sentinel mallards
Rocke, T.E.; Brand, C.J.; Mensik, John G.
1997-01-01
We monitored lead poisoning from the ingestion of spent lead pellets in sentinel mallards (Anas platyrhynchos) at the Sacramento National Wildlife Refuge (SNWR), Willows, California for 4 years (1986-89) after the conversion to steel shot for waterfowl hunting on refuges in 1986. Sentinel mallards were held in 1.6-ha enclosures in 1 hunted (P8) and 2 non-hunted (T19 and TF) wetlands. We compared site-specific rates of lead exposure, as determined by periodic measurement of blood lead concentrations, and lead poisoning mortality between wetlands with different lead pellet densities, between seasons, and between male and female sentinels. In 1986, the estimated 2-week rate of lead exposure was significantly higher (P < 0.005) in P8 (43.8%), the wetland with the highest density of spent lead pellets (>2,000,000 pellets/ha), than in those with lower densities of lead pellets, T19 (18.1%; 173,200 pellets/ha) and TF (0.9%; 15,750 pellets/ha). The probability of mortality from lead poisoning was also significantly higher (P < 0.01) in sentinel mallards enclosed in P8 (0.25) than T19 (0) and TF (0) in 1986 and remained significantly higher (P < 0.001) during the 4-year study. Both lead exposure and the probability of lead poisoning mortality in P8 were significantly higher (P < 0.001) in the fall of 1986 (43.8%; 0.25), before hunting season, than in the spring of 1987 (21.6%; 0.04), after hunting season. We found no significant differences in the rates of lead exposure or lead poisoning mortality between male and female sentinel mallards. The results of this study demonstrate that in some locations, lead exposure and lead poisoning in waterfowl will continue to occur despite the conversion to steel shot for waterfowl hunting.
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
Large Scale Data Analysis and Knowledge Extraction in Communication Data
2017-03-31
this purpose, we developed a novel method, the "Correlation Density Rank", which finds the probability density distribution of related frequent events on all... which is called "Correlation Density Rank", is developed to derive the community tree from the network. As in the real world, where a network is... "Community Structure in Dynamic Social Networks using the Correlation Density Rank," 2014 ASE BigData/SocialCom/Cybersecurity Conference, Stanford
Hydrogen and Sulfur from Hydrogen Sulfide. 5. Anodic Oxidation of Sulfur on Activated Glassy Carbon
1988-12-05
electrolyses of H2S can probably be carried out at high rates with modest cell voltages in the range 1-1.5 V. The variation in anode current densities... of H2S from solutions of NaSH in aqueous NaOH was achieved using suitably activated glassy carbon anodes. Thus electrolyses of H2S can probably be... passivation by using a basic solvent at 85°C. Using an H2S-saturated 6M NaOH solution, they conducted electrolyses for extended periods at current densities
Continuation of probability density functions using a generalized Lyapunov approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baars, S., E-mail: s.baars@rug.nl; Viebahn, J.P., E-mail: viebahn@cwi.nl; Mulder, T.E., E-mail: t.e.mulder@uu.nl
Techniques from numerical bifurcation theory are very useful to study transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e. the occurrence of multiple steady states of the Atlantic Ocean circulation.
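For intuition, the small-noise approximation near a stable fixed point makes the stationary density Gaussian, with covariance solving a Lyapunov equation. The sketch below uses SciPy's dense solver on a toy two-dimensional system; the paper's contribution is the low-rank iterative treatment needed when the discretized system is large.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical toy system: J is the Jacobian at a stable fixed point and
# B the noise input matrix of the linearized SDE dx = J x dt + B dW.
J = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[0.5], [0.2]])

# The stationary covariance C solves the Lyapunov equation J C + C J^T = -B B^T.
C = solve_continuous_lyapunov(J, -B @ B.T)

# Under the small-noise approximation the stationary PDF is the Gaussian
# p(x) = exp(-x^T C^{-1} x / 2) / (2 pi sqrt(det C)).
Cinv = np.linalg.inv(C)
norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(C)))
pdf = lambda x: norm * np.exp(-0.5 * x @ Cinv @ x)
print(C, pdf(np.zeros(2)))
```

Parameter continuation then amounts to tracking how C (and hence the Gaussian PDF) deforms as a bifurcation parameter inside J is varied.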
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
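A rough sketch of this pipeline is given below, with an assumed toy filter bank (Gaussian-derivative filters) standing in for the steerable filters used in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
# Toy "texture": a correlated random field (stand-in for a real texture image).
img = rng.standard_normal((128, 128)).cumsum(axis=0).cumsum(axis=1)

# Small hypothetical filter bank: x- and y-derivative-of-Gaussian filters at two scales.
responses = []
for sigma in (1.0, 2.0):
    for order in ((0, 1), (1, 0)):
        responses.append(gaussian_filter(img, sigma=sigma, order=order).ravel())
X = np.stack(responses, axis=1)      # rows = pixels, columns = filter outputs

# ICA finds the linear recombination of filter outputs whose marginals are
# maximally independent, so the product of marginals best approximates the joint PDF.
ica = FastICA(n_components=4, random_state=0)
S = ica.fit_transform(X)             # independent components, one set per pixel
```

Histograms of the columns of S then serve as the texture descriptor, in the spirit of the marginal-density representation the abstract describes.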
NASA Technical Reports Server (NTRS)
Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard
1988-01-01
The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in a vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.
Exact joint density-current probability function for the asymmetric exclusion process.
Depken, Martin; Stinchcombe, Robin
2004-07-23
We study the asymmetric simple exclusion process with open boundaries and derive the exact form of the joint probability function for the occupation number and the current through the system. We further consider the thermodynamic limit, showing that the resulting distribution is non-Gaussian and that the density fluctuations have a discontinuity at the continuous phase transition, while the current fluctuations are continuous. The derivations are performed by using the standard operator algebraic approach and by the introduction of new operators satisfying a modified version of the original algebra. Copyright 2004 The American Physical Society
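The joint statistics can also be explored numerically. The sketch below runs a random-sequential simulation of the open-boundary TASEP (the totally asymmetric special case) and tabulates an empirical joint histogram of occupation and running current; it illustrates the object studied, not the paper's operator-algebraic construction.

```python
import numpy as np

rng = np.random.default_rng(4)
L, alpha, beta = 50, 0.6, 0.6        # lattice size, entry and exit rates
steps, burn_in = 200_000, 50_000

lattice = np.zeros(L, dtype=int)
samples, hops = [], 0
for t in range(steps):
    i = rng.integers(-1, L)          # pick a bond, or the entry/exit boundary
    if i == -1:                      # entry at the left boundary
        if lattice[0] == 0 and rng.random() < alpha:
            lattice[0] = 1
    elif i == L - 1:                 # exit at the right boundary
        if lattice[-1] == 1 and rng.random() < beta:
            lattice[-1] = 0
            hops += 1                # departing particle contributes to the current
    elif lattice[i] == 1 and lattice[i + 1] == 0:
        lattice[i], lattice[i + 1] = 0, 1
    if t >= burn_in and t % L == 0:
        samples.append((lattice.sum(), hops / (t + 1)))  # (occupation N, running current)

N, Jcur = np.array(samples).T
hist, *_ = np.histogram2d(N, Jcur, bins=20)  # empirical joint density-current histogram
```

Sweeping alpha and beta across the phase boundary of the open ASEP would exhibit the discontinuity in density fluctuations, versus continuity of current fluctuations, that the exact result establishes.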
Kennedy, Theodore A.; Yackulic, Charles B.; Cross, Wyatt F.; Grams, Paul E.; Yard, Michael D.; Copp, Adam J.
2014-01-01
1. Invertebrate drift is a fundamental process in streams and rivers. Studies from laboratory experiments and small streams have identified numerous extrinsic (e.g. discharge, light intensity, water quality) and intrinsic factors (invertebrate life stage, benthic density, behaviour) that govern invertebrate drift concentrations (# m⁻³), but the factors that govern invertebrate drift in larger rivers remain poorly understood. For example, while large increases or decreases in discharge can lead to large increases in invertebrate drift, the role of smaller, incremental changes in discharge is poorly described. In addition, while we might expect invertebrate drift concentrations to be proportional to benthic densities (# m⁻²), the benthic–drift relation has not been rigorously evaluated. 2. Here, we develop a framework for modelling invertebrate drift that is derived from sediment transport studies. We use this framework to guide the analysis of high-resolution data sets of benthic density and drift concentration for four important invertebrate taxa from the Colorado River downstream of Glen Canyon Dam (mean daily discharge 325 m³ s⁻¹) that were collected over 18 months and include multiple observations within days. Ramping of regulated flows on this river segment provides an experimental treatment that is repeated daily and allowed us to describe the functional relations between invertebrate drift and two primary controls, discharge and benthic densities. 3. Twofold daily variation in discharge resulted in a >10-fold increase in drift concentrations of benthic invertebrates associated with pools and detritus (i.e. Gammarus lacustris and Potamopyrgus antipodarum). In contrast, drift concentrations of sessile blackfly larvae (Simulium arcticum), which are associated with high-velocity cobble microhabitats, decreased by over 80% as discharge doubled. Drift concentrations of Chironomidae increased proportional to discharge. 4. Drift of all four taxa was positively related to benthic density. Drift concentrations of Gammarus, Potamopyrgus and Chironomidae were proportional to benthic density. Drift concentrations of Simulium were positively related to benthic density, but the benthic–drift relation was less than proportional (i.e. a doubling of benthic density only led to a 40% increase in drift concentrations). 5. Our study demonstrates that invertebrate drift concentrations in the Colorado River are jointly controlled by discharge and benthic densities, but these controls operate at different timescales. Twofold daily variation in discharge associated with hydropeaking was the primary control on within-day variation in invertebrate drift concentrations. In contrast, benthic density, which varied 10- to 1000-fold among sampling dates, depending on the taxa, was the primary control on invertebrate drift concentrations over longer timescales (weeks to months).
Passive seismic monitoring of the Bering Glacier during its last surge event
NASA Astrophysics Data System (ADS)
Zhan, Z.
2017-12-01
The physical causes of glacier surges are still unclear. Numerous lines of evidence suggest that they probably involve changes in glacier basal conditions, such as a switch of the basal water system from concentrated large tunnels to a distributed "layer" of "connected cavities". However, most remote sensing approaches cannot penetrate to the base to monitor such changes continuously. Here we apply seismic interferometry using ambient noise to monitor glacier seismic structure, especially to detect possible signatures of the hypothesized high-pressure water "layer". As an example, we derive an 11-year history of the seismic structure of the Bering Glacier, Alaska, covering its latest surge event. We observe substantial drops in Rayleigh and Love wave speeds across the glacier during the surge event, potentially caused by changes in crevasse density, glacier thickness, and basal conditions.
NASA Astrophysics Data System (ADS)
Molinie, Jack; Bernard, Marie-Lise; Komorowski, Jean-Christophe; Euphrasie-Clotilde, Lovely; Brute, France-Nor; Roussas, Andre
2014-05-01
On 11 February 2010, fifteen minutes after midday, an explosive eruption of Soufriere Hills volcano sent tephra over the neighbouring Caribbean islands. The volcanic ash benefited from the vertical wind distribution at the time to reach the island of Guadeloupe and cover its ground nearly 5 hours after the ash venting. From the first arrival of ash over the town of Pointe-a-Pitre (located 80 km south-east of Soufriere Hills volcano) to the end of the event, we measured the mean particle concentrations and particle size distributions every twenty minutes. Measurements were performed on a building roof in the town using an optical particle counter (Lighthouse IAQ 3016), an instrument mainly used in indoor air-quality studies, which provides up to 6 particle size channels of simultaneous counting with aerodynamic diameter classes ranging from 0.3 to >10 µm. The airborne particulate matter mass concentration for equivalent aerodynamic diameters less than 10 µm (PM10) was measured by the local air-quality network Gwad'air in the vicinity of the site used to study this ash fall. The maximum concentration of small particles with diameters less than 1 µm (D0.3-1) was observed one hour before that of the larger particles. This result may imply a difference in shape and density between particles D0.3-1 and particles D1-10 (1
Multivariate Density Estimation and Remote Sensing
NASA Technical Reports Server (NTRS)
Scott, D. W.
1983-01-01
Current efforts to develop methods and computer algorithms to effectively represent multivariate data commonly encountered in remote sensing applications are described. While this may involve scatter diagrams, multivariate representations of nonparametric probability density estimates are emphasized. The density function provides a useful graphical tool for looking at data and a useful theoretical tool for classification. This approach is called a thunderstorm data analysis.
A Balanced Approach to Adaptive Probability Density Estimation.
Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy
2017-01-01
Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
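The core idea of per-point adaptive smoothing can be sketched with a generic balloon-type estimator, where each evaluation point's bandwidth is set by its distance to the k-th nearest sample. This is a simplified stand-in for illustration, not the authors' BADE algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_kde(samples, query, k=30):
    """Balloon-type adaptive KDE: the Gaussian bandwidth at each query point is
    the distance to its k-th nearest sample, so sparse regions get more smoothing."""
    samples = np.asarray(samples, float)
    query = np.asarray(query, float)
    if samples.ndim == 1:
        samples = samples[:, None]
    if query.ndim == 1:
        query = query[:, None]
    d = samples.shape[1]
    dk, _ = cKDTree(samples).query(query, k=k)
    h = dk[:, -1]                                   # per-evaluation-point bandwidth
    diff = query[:, None, :] - samples[None, :, :]
    sq = (diff ** 2).sum(-1) / h[:, None] ** 2
    kern = np.exp(-0.5 * sq) / ((2 * np.pi) ** (d / 2) * h[:, None] ** d)
    return kern.mean(axis=1)

x = np.random.default_rng(5).standard_normal(2000)
grid = np.linspace(-4, 4, 200)
dens = adaptive_kde(x, grid)                        # estimate on an evaluation grid
```

Methods like BADE refine this basic scheme by optimizing the amount of smoothing at each point rather than fixing it through a single k.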
Adaptive detection of noise signal according to Neyman-Pearson criterion
NASA Astrophysics Data System (ADS)
Padiryakov, Y. A.
1985-03-01
Optimum detection according to the Neyman-Pearson criterion is considered for a random Gaussian noise signal, stationary during the measurement, in a stationary random Gaussian background interference. Detection is based on two samples whose statistics are characterized by estimates of their spectral densities: it is known a priori that sample A, from the signal channel, is either the sum of signal and interference or interference alone, and that sample B, from the reference interference channel, is interference with the same spectral density as the interference in sample A under both hypotheses. The probability of correct detection is maximized on average, first in the 2N-dimensional space of signal spectral density and interference spectral density readings, by fixing the probability of false alarm at each point so as to stabilize it at a constant level against variations of the interference spectral density. Deterministic decision rules are established. The algorithm is then reduced to equivalent detection in the N-dimensional space of the ratios of sample A readings to sample B readings.
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple-input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and the kurtosis, using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency-domain description of the random process. The general case of matching a target probability density function using a zero-memory nonlinear (ZMNL) function is then covered. Next, methods are discussed for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV). Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
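The ZMNL step is simple to illustrate: a Gaussian realization with the desired spectrum is mapped through the Gaussian CDF and then through the inverse CDF of the target marginal. A minimal sketch, with an arbitrary gamma target (the filter and distribution choices are illustrative):

```python
import numpy as np
from scipy import stats, signal

rng = np.random.default_rng(6)
n = 2 ** 14
# Gaussian realization with a prescribed (here: low-pass) auto-spectrum.
b_f, a_f = signal.butter(4, 0.1)
x = signal.lfilter(b_f, a_f, rng.standard_normal(n))

# ZMNL transform y = F^{-1}(Phi(x)): Gaussian CDF first, then the inverse
# CDF of the target marginal (a gamma distribution, chosen here arbitrarily).
u = stats.norm.cdf((x - x.mean()) / x.std())
y = stats.gamma.ppf(u, a=2.0)
```

The transformed series y has the target gamma marginals, but the nonlinearity distorts the spectrum somewhat; compensating for that distortion, and extending the idea to the full cross-spectral matrix of multiple inputs, is what the paper's methods address.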
Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.
2011-01-01
Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.
Lai, Jui-Yang; Wang, Pei-Ran; Luo, Li-Jyuan; Chen, Si-Tan
2014-01-01
To overcome the drawbacks associated with limited cross-linking efficiency of carbodiimide modified amniotic membrane, this study investigated the use of l-lysine as an additional amino acid bridge to enhance the stability of a nanofibrous tissue matrix for a limbal epithelial cell culture platform. Results of ninhydrin assays and zeta potential measurements showed that the amount of positively charged amino acid residues incorporated into the tissue collagen chains is highly correlated with the l-lysine-pretreated concentration. The cross-linked structure and hydrophilicity of amniotic membrane scaffolding materials affected by the lysine molecular bridging effects were determined. With an increase in the l-lysine-pretreated concentration from 1 to 30 mM, the cross-linking density was significantly increased and water content was markedly decreased. The variations in resistance to thermal denaturation and enzymatic degradation were in accordance with the number of cross-links per unit mass of amniotic membrane, indicating l-lysine-modulated stabilization of collagen molecules. It was also noteworthy that the carbodiimide cross-linked tissue samples prepared using a relatively high l-lysine-pretreated concentration (ie, 30 mM) appeared to have decreased light transmittance and biocompatibility, probably due to the influence of a large nanofiber size and a high charge density. The rise in stemness gene and protein expression levels was dependent on improved cross-link formation, suggesting the crucial role of amino acid bridges in constructing suitable scaffolds to preserve limbal progenitor cells. It is concluded that mild to moderate pretreatment conditions (ie, 3–10 mM l-lysine) can provide a useful strategy to assist in the development of carbodiimide cross-linked amniotic membrane as a stable stem cell niche for corneal epithelial tissue engineering. PMID:25395849
Scale size and life time of energy conversion regions observed by Cluster in the plasma sheet
NASA Astrophysics Data System (ADS)
Hamrin, M.; Norqvist, P.; Marghitu, O.; Vaivads, A.; Klecker, B.; Kistler, L. M.; Dandouras, I.
2009-11-01
In this article, and in a companion paper by Hamrin et al. (2009) [Occurrence and location of concentrated load and generator regions observed by Cluster in the plasma sheet], we investigate localized energy conversion regions (ECRs) in Earth's plasma sheet. From more than 80 Cluster plasma sheet crossings (660 h of data) at altitudes of about 15-20 RE in the summer and fall of 2001, we have identified 116 Concentrated Load Regions (CLRs) and 35 Concentrated Generator Regions (CGRs). By examining variations in the power density, E·J, where E is the electric field and J is the current density obtained by Cluster, we have estimated typical values of the scale size and lifetime of the CLRs and the CGRs. We find that a majority of the observed ECRs are rather stationary in space but varying in time. Assuming that the ECRs are cylindrically shaped and equal in size, we conclude that their typical scale size is 2 RE ≲ ΔS_ECR ≲ 5 RE. The ECRs hence occupy a significant portion of the mid-altitude plasma sheet. Moreover, the CLRs appear to be somewhat larger than the CGRs. The lifetimes of the ECRs are of the order of 1-10 min, consistent with the large-scale magnetotail MHD simulations of Birn and Hesse (2005); the lifetime of the CGRs is somewhat shorter than that of the CLRs. On time scales of 1-10 min, we believe that ECRs rise and vanish in significant regions of the plasma sheet, possibly oscillating between load and generator character. It is probable that at least some of the observed ECRs shuttle energy back and forth within the plasma sheet instead of channeling it to the ionosphere.
Assessing environmental DNA detection in controlled lentic systems.
Moyer, Gregory R; Díaz-Ferguson, Edgardo; Hill, Jeffrey E; Shea, Colin
2014-01-01
Little consideration has been given to environmental DNA (eDNA) sampling strategies for rare species. The certainty of species detection relies on understanding false positive and false negative error rates. We used artificial ponds together with logistic regression models to assess the detection of African jewelfish eDNA at varying fish densities (0, 0.32, 1.75, and 5.25 fish/m³). Our objectives were to determine the most effective water stratum for eDNA detection, estimate true and false positive eDNA detection rates, and assess the number of water samples necessary to minimize the risk of false negatives. There were 28 eDNA detections in 324 1-L water samples collected from four experimental ponds. The best-approximating model indicated that the per-L-sample probability of eDNA detection was 4.86 times more likely for every 2.53 fish/m³ (1 SD) increase in fish density and 1.67 times less likely for every 1.02 °C (1 SD) increase in water temperature. The best section of the water column to detect eDNA was the surface and, to a lesser extent, the bottom. Although no false positives were detected, the estimated likely number of false positives in samples from ponds that contained fish averaged 3.62. At high densities of African jewelfish, 3-5 L of water provided a >95% probability for the presence/absence of its eDNA. Conversely, at moderate and low densities, the number of water samples necessary to achieve a >95% probability of eDNA detection approximated 42-73 and >100 L, respectively. Potential biases associated with incomplete detection of eDNA could be alleviated via formal estimation of eDNA detection probabilities under an occupancy modeling framework; alternatively, the filtration of hundreds of liters of water may be required to achieve a high (e.g., 95%) level of certainty that African jewelfish eDNA will be detected at low densities (i.e., <0.32 fish/m³ or 1.75 g/m³).
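The sample-size figures follow from the standard independence approximation: if each 1-L sample detects eDNA with probability p, then n samples detect with probability 1 - (1 - p)^n. A minimal sketch (the study itself recommends formal occupancy modelling over this simplification):

```python
import numpy as np

def samples_for_detection(p_per_litre, target=0.95):
    """Number of independent 1-L samples needed so that the chance of at least
    one eDNA detection reaches `target`, assuming a constant per-sample
    detection probability (a simplification of the occupancy framework)."""
    return int(np.ceil(np.log(1 - target) / np.log(1 - p_per_litre)))

# At a high per-litre detection probability (~0.5) a handful of litres suffice;
# at p = 0.05 the requirement grows to dozens of samples.
print(samples_for_detection(0.5), samples_for_detection(0.05))  # -> 5, 59
```

These figures bracket the abstract's reported ranges (3-5 L at high densities, tens of litres at moderate densities), which is consistent with per-litre detection probabilities on that order.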
NASA Astrophysics Data System (ADS)
Valageas, P.
2000-02-01
In this article we present an analytical calculation of the probability distribution of the magnification of distant sources due to weak gravitational lensing from non-linear scales. We use a realistic description of the non-linear density field, which has already been compared with numerical simulations of structure formation within hierarchical scenarios. We can then directly express the probability distribution P(μ) of the magnification in terms of the probability distribution of the density contrast realized on non-linear scales (typical of galaxies) where the local slope of the initial linear power spectrum is n = -2. We recover the behaviour seen in numerical simulations: P(μ) peaks at a value slightly smaller than the mean ⟨μ⟩ = 1 and it shows an extended large-μ tail (as described in another article, our predictions also show good quantitative agreement with results from N-body simulations for a finite smoothing angle). We then study the effects of weak lensing on the derivation of the cosmological parameters from SNeIa. We show that the inaccuracy introduced by weak lensing is not negligible: ΔΩ_m ≳ 0.3 for two observations at z_s = 0.5 and z_s = 1. However, observations can unambiguously discriminate between Ω_m = 0.3 and Ω_m = 1. Moreover, in the case of a low-density universe one can clearly distinguish an open model from a flat cosmology (besides, the error decreases as the number of observed SNeIa increases). Since distant sources are more likely to be "demagnified", the most probable value of the observed density parameter Ω_m is slightly smaller than its actual value. On the other hand, one may obtain some valuable information on the properties of the underlying non-linear density field from the measurement of weak lensing distortions.
The relation of ground-water quality to housing density, Cape Cod, Massachusetts
Persky, J.H.
1986-01-01
Correlation of median nitrate concentration in groundwater with housing density for 18 sample areas on Cape Cod yields a Pearson correlation coefficient of 0.802, which is significant at the 95% confidence level. In five of nine sample areas where housing density is greater than one unit/acre, nitrate concentrations exceed 5 mg of nitrate/L (the Barnstable County planning goal for nitrate) in 25% of wells. Nitrate concentrations exceed 5 mg of nitrogen/L in 25% of wells in only one of nine sample areas where housing density is less than one unit/acre. Median concentrations of sodium and iron, and median levels of pH and specific conductance, are not significantly correlated with housing density. A computer-generated map of nitrate shows a positive relation between nitrate concentration and housing density on Cape Cod. However, the presence of septage- or sewage-disposal sites and fertilizer use are also important factors that affect the nitrate concentration. A map of specific conductance also shows a positive relation to housing density, but little or no relation between housing density and sodium, ammonia, pH, or iron is apparent on the maps. Chemical analyses of samples collected from 3,468 private- and public-supply wells between January 1980 and June 1984 were used to examine the extent to which housing density determines water quality on Cape Cod, an area largely unsewered and underlain by a sole source aquifer. (Author's abstract)
Density of American black bears in New Mexico
Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.; Liley, Stewart
2018-01-01
Considering advances in noninvasive genetic sampling and spatially explicit capture–recapture (SECR) models, the New Mexico Department of Game and Fish sought to update their density estimates for American black bear (Ursus americanus) populations in New Mexico, USA, to aid in setting sustainable harvest limits. We estimated black bear density in the Sangre de Cristo, Sandia, and Sacramento Mountains, New Mexico, 2012–2014. We collected hair samples from black bears using hair traps and bear rubs and used a sex marker and a suite of microsatellite loci to individually genotype hair samples. We then estimated density in a SECR framework using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We sampled the populations using 554 hair traps and 117 bear rubs and collected 4,083 hair samples. We identified 725 (367 male, 358 female) individuals. Our density estimates varied from 16.5 bears/100 km2 (95% CI = 11.6–23.5) in the southern Sacramento Mountains to 25.7 bears/100 km2 (95% CI = 13.2–50.1) in the Sandia Mountains. Overall, detection probability at the activity center (g0) was low across all study areas and ranged from 0.00001 to 0.02. The low values of g0 were primarily a result of half of all hair samples for which genotypes were attempted failing to produce a complete genotype. We speculate that the low success we had genotyping hair samples was due to exceedingly high levels of ultraviolet (UV) radiation that degraded the DNA in the hair. Despite sampling difficulties, we were able to produce density estimates with levels of precision comparable to those estimated for black bears elsewhere in the United States.
NASA Astrophysics Data System (ADS)
Helble, Tyler Adam
Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. Automated methods are needed to aid in the analyses of the recorded data. When a mammal vocalizes in the marine environment, the received signal is a filtered version of the original waveform emitted by the marine mammal. The waveform is reduced in amplitude and distorted due to propagation effects that are influenced by the bathymetry and environment. It is important to account for these effects to determine a site-specific probability of detection for marine mammal calls in a given study area. Knowledge of that probability function over a range of environmental and ocean noise conditions allows vocalization statistics from recordings of single, fixed, omnidirectional sensors to be compared across sensors and at the same sensor over time with less bias and uncertainty in the results than direct comparison of the raw statistics. This dissertation focuses on both the development of new tools needed to automatically detect humpback whale vocalizations from single, fixed omnidirectional sensors and the determination of the site-specific probability of detection for monitoring sites off the coast of California. Using these tools, detected humpback calls are "calibrated" for environmental properties using the site-specific probability of detection values, and presented as call densities (calls per square kilometer per unit time). A two-year monitoring effort using these calibrated call densities reveals important biological and ecological information on migrating humpback whales off the coast of California. Call density trends are compared between the monitoring sites and at the same monitoring site over time. Call densities also are compared to several natural and human-influenced variables including season, time of day, lunar illumination, and ocean noise. The results reveal substantial differences in call densities between the two sites which were not noticeable using uncorrected (raw) call counts. Additionally, a Lombard effect was observed for humpback whale vocalizations in response to increasing ocean noise. The results presented in this thesis develop techniques to accurately measure marine mammal abundances from passive acoustic sensors.
Timescales of isotropic and anisotropic cluster collapse
NASA Astrophysics Data System (ADS)
Bartelmann, M.; Ehlers, J.; Schneider, P.
1993-12-01
From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Ω₀. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Ω₀. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. on the value of Ω₀ can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-de Sitter universe is derived.
McCarthy, Peter M.
2006-01-01
The Yellowstone River is very important in a variety of ways to the residents of southeastern Montana; however, it is especially vulnerable to spilled contaminants. In 2004, the U.S. Geological Survey, in cooperation with Montana Department of Environmental Quality, initiated a study to develop a computer program to rapidly estimate instream travel times and concentrations of a potential contaminant in the Yellowstone River using regression equations developed in 1999 by the U.S. Geological Survey. The purpose of this report is to describe these equations and their limitations, describe the development of a computer program to apply the equations to the Yellowstone River, and provide detailed instructions on how to use the program. This program is available online at [http://pubs.water.usgs.gov/sir2006-5057/includes/ytot.xls]. The regression equations provide estimates of instream travel times and concentrations in rivers where little or no contaminant-transport data are available. Equations were developed and presented for the most probable flow velocity and the maximum probable flow velocity. These velocity estimates can then be used to calculate instream travel times and concentrations of a potential contaminant. The computer program was developed so estimation equations for instream travel times and concentrations can be solved quickly for sites along the Yellowstone River between Corwin Springs and Sidney, Montana. The basic types of data needed to run the program are spill data, streamflow data, and data for locations of interest along the Yellowstone River. Data output from the program includes spill location, river mileage at specified locations, instantaneous discharge, mean-annual discharge, drainage area, and channel slope. Travel times and concentrations are provided for estimates of the most probable velocity of the peak concentration and the maximum probable velocity of the peak concentration. Verification of estimates of instream travel times and concentrations for the Yellowstone River requires information about the flow velocity throughout the 520 mi of river in the study area. Dye-tracer studies would provide the best data about flow velocities and would provide the best verification of instream travel times and concentrations estimated from this computer program; however, data from such studies do not currently (2006) exist and new studies would be expensive and time-consuming. An alternative approach used in this study for verification of instream travel times is based on the use of flood-wave velocities determined from recorded streamflow hydrographs at selected mainstem streamflow-gaging stations along the Yellowstone River. The ratios of flood-wave velocity to the most probable velocity for the base flow estimated from the computer program are within the accepted range of 2.5 to 4.0 and indicate that flow velocities estimated from the computer program are reasonable for the Yellowstone River. The ratios of flood-wave velocity to the maximum probable velocity are within a range of 1.9 to 2.8 and indicate that the maximum probable flow velocities estimated from the computer program, which correspond to the shortest travel times and maximum probable concentrations, are conservative and reasonable for the Yellowstone River.
Zimmerman, Tammy M.
2006-01-01
The Lake Erie shoreline in Pennsylvania spans nearly 40 miles and is a valuable recreational resource for Erie County. Nearly 7 miles of the Lake Erie shoreline lies within Presque Isle State Park in Erie, Pa. Concentrations of Escherichia coli (E. coli) bacteria at permitted Presque Isle beaches occasionally exceed the single-sample bathing-water standard, resulting in unsafe swimming conditions and closure of the beaches. E. coli concentrations and other water-quality and environmental data collected at Presque Isle Beach 2 during the 2004 and 2005 recreational seasons were used to develop models using tobit regression analyses to predict E. coli concentrations. All variables statistically related to E. coli concentrations were included in the initial regression analyses, and after several iterations, only those explanatory variables that made the models significantly better at predicting E. coli concentrations were included in the final models. Regression models were developed using data from 2004, 2005, and the combined 2-year dataset. Variables in the 2004 model and the combined 2004-2005 model were log10 turbidity, rain weight, wave height (calculated), and wind direction. Variables in the 2005 model were log10 turbidity and wind direction. Explanatory variables not included in the final models were water temperature, streamflow, wind speed, and current speed; model results indicated these variables did not meet significance criteria at the 95-percent confidence level (probabilities were greater than 0.05). The predicted E. coli concentrations produced by the models were used to develop probabilities that concentrations would exceed the single-sample bathing-water standard for E. coli of 235 colonies per 100 milliliters. Analysis of the exceedance probabilities helped determine a threshold probability for each model, chosen such that the number of correctly predicted exceedances and nonexceedances was maximized and the number of false positives and false negatives was minimized. Future samples with computed exceedance probabilities higher than the selected threshold probability, as determined by the model, will likely exceed the E. coli standard, and a beach advisory or closing may need to be issued; computed exceedance probabilities lower than the threshold probability indicate that the standard will likely not be exceeded. Additional data collected each year can be used to test and possibly improve the model. This study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to issue beach advisories or closings.
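As a loose illustration of how such a model is applied operationally, the sketch below computes an exceedance probability from a censored-regression (tobit-style) fit of log10 E. coli and compares it with a chosen threshold probability. The coefficients, residual standard deviation, and threshold are hypothetical, not the report's fitted values.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical tobit-style fit on log10(E. coli):
#   mu = b0 + b1*log10(turbidity) + b2*rain_weight + b3*wave_height
beta = np.array([1.2, 0.8, 0.3, 0.5])   # invented coefficients
sigma = 0.45                            # residual std. dev. of log10 concentration
STANDARD = np.log10(235.0)              # single-sample bathing-water standard

def exceedance_probability(log10_turbidity, rain_weight, wave_height):
    x = np.array([1.0, log10_turbidity, rain_weight, wave_height])
    mu = beta @ x
    # P(log10 C > log10 235) under a normal error model
    return 1.0 - norm.cdf((STANDARD - mu) / sigma)

p = exceedance_probability(1.1, 0.6, 0.4)
THRESHOLD = 0.30    # chosen to balance false positives against false negatives
print(f"P(exceed 235 col/100 mL) = {p:.2f} ->",
      "advisory" if p > THRESHOLD else "no advisory")
```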
Probabilistic cluster labeling of imagery data
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1980-01-01
The problem of obtaining the probabilities of class labels for the clusters using spectral and spatial information from a given set of labeled patterns and their neighbors is considered. A relationship is developed between class- and cluster-conditional densities in terms of the probabilities of class labels for the clusters. Expressions are presented for updating the a posteriori probabilities of the classes of a pixel using information from its local neighborhood. Fixed-point iteration schemes are developed for obtaining the optimal probabilities of class labels for the clusters. These schemes utilize spatial information and also the probabilities of label imperfections. Experimental results from the processing of remotely sensed multispectral scanner imagery data are presented.
NASA Astrophysics Data System (ADS)
Marro, Massimo; Salizzoni, Pietro; Soulhac, Lionel; Cassiani, Massimo
2018-06-01
We analyze the reliability of the Lagrangian stochastic micromixing method in predicting higher-order statistics of the passive scalar concentration induced by an elevated source (of varying diameter) placed in a turbulent boundary layer. To that purpose we analyze two different modelling approaches by testing their results against the wind-tunnel measurements discussed in Part I (Nironi et al., Boundary-Layer Meteorology, 2015, Vol. 156, 415-446). The first is a probability density function (PDF) micromixing model that simulates the effects of the molecular diffusivity on the concentration fluctuations by taking into account the background particles. The second is a new model, named VPΓ, conceived in order to minimize the computational costs. This is based on the volumetric particle approach providing estimates of the first two concentration moments with no need for the simulation of the background particles. In this second approach, higher-order moments are computed based on the estimates of these two moments and under the assumption that the concentration PDF is a Gamma distribution. The comparisons concern the spatial distribution of the first four moments of the concentration and the evolution of the PDF along the plume centreline. The novelty of this work is twofold: (i) we perform a systematic comparison of the results of micro-mixing Lagrangian models against experiments providing profiles of the first four moments of the concentration within an inhomogeneous and anisotropic turbulent flow, and (ii) we show the reliability of the VPΓ model as an operational tool for the prediction of the PDF of the concentration.
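The closure step in the VPΓ approach can be made concrete: once the first two concentration moments are known, the Gamma assumption fixes every higher moment. A short sketch under that assumption (illustrative, not the authors' code):

```python
import numpy as np
from scipy.special import gammaln

def gamma_higher_moments(mean_c, var_c, n_max=4):
    """Raw moments E[C^n] of a Gamma distribution matched to (mean, variance).

    Under the Gamma closure, shape k = mean^2/var and scale theta = var/mean,
    and E[C^n] = theta^n * Gamma(k + n) / Gamma(k).
    """
    k = mean_c**2 / var_c
    theta = var_c / mean_c
    return [np.exp(n * np.log(theta) + gammaln(k + n) - gammaln(k))
            for n in range(1, n_max + 1)]

m1, m2, m3, m4 = gamma_higher_moments(mean_c=1.0, var_c=0.5)
print(f"E[C]={m1:.3f}  E[C^2]={m2:.3f}  E[C^3]={m3:.3f}  E[C^4]={m4:.3f}")
# Skewness and excess kurtosis follow from the shape parameter alone:
k = 1.0**2 / 0.5
print(f"skewness = {2/np.sqrt(k):.3f}   excess kurtosis = {6/k:.3f}")
```

Matching the mean and variance therefore determines the third and fourth moments with no additional particles, which is what makes the VPΓ model cheap relative to full micromixing.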
Holley, Robert W.; Armour, Rosemary; Baldwin, Julia H.
1978-01-01
BSC-1 cells, epithelial cells of African green monkey kidney origin, show pronounced density-dependent regulation of growth in cell culture. Growth of the cells is rapid to a density of approximately 1.5 × 10^5 cells/cm^2 in Dulbecco-modified Eagle's medium supplemented with 10% calf serum. Above this “saturation density,” growth is much slower. It has been found that the glucose concentration in the culture medium is important in determining the “saturation density.” If the glucose concentration is increased 4-fold, the “saturation density” increases approximately 50%. Reduction of the “saturation density” of BSC-1 cells is also possible by decreasing the concentrations of low molecular weight nutrients in the culture medium. In medium supplemented with 0.1% calf serum, decreasing the concentrations of all of the organic constituents of the medium, from the high levels present in Dulbecco-modified Eagle's medium to concentrations near physiological levels, decreases the “saturation density” by approximately half. The decreased “saturation density” is not the result of lowering the concentration of any single nutrient but rather results from reduction of the concentrations of several nutrients. When the growth of BSC-1 cells is limited by low concentrations of all of the nutrients, some stimulation of growth results from increasing, separately, the concentrations of individual groups of nutrients, but the best growth stimulation is obtained by increasing the concentrations of all of the nutrients. The “wound healing” phenomenon, one manifestation of density-dependent regulation of growth in cell culture, is abolished by lowering the concentration of glutamine in the medium. Density-dependent regulation of growth of BSC-1 cells in cell culture thus appears to be a complex phenomenon that involves an interaction of nutrient concentrations with other regulatory factors. PMID:272650
Risk assessment for tephra dispersal and sedimentation: the example of four Icelandic volcanoes
NASA Astrophysics Data System (ADS)
Biass, Sebastien; Scaini, Chiara; Bonadonna, Costanza; Smith, Kate; Folch, Arnau; Höskuldsson, Armann; Galderisi, Adriana
2014-05-01
In order to assist the elaboration of proactive measures for the management of future Icelandic volcanic eruptions, we developed a new approach to assess the impact associated with tephra dispersal and sedimentation at various scales and for multiple sources. Target volcanoes are Hekla, Katla, Eyjafjallajökull and Askja, selected for their high probabilities of eruption and/or their high potential impact. We combined stratigraphic studies, probabilistic strategies and numerical modelling to develop comprehensive eruption scenarios and compile hazard maps for local ground deposition and regional atmospheric concentration using both the TEPHRA2 and FALL3D models. New algorithms for the identification of comprehensive probability density functions of eruptive source parameters were developed for both short- and long-lasting activity scenarios. A vulnerability assessment of socioeconomic and territorial aspects was also performed at both national and continental scales, and relevant vulnerability indicators were used to identify the most critical areas and territorial nodes. At the national scale, the vulnerability of economic activities and the accessibility of critical infrastructure were assessed; at the continental scale, we assessed the vulnerability of the main airline routes and airports. The resulting impact and risk were finally assessed by combining the hazard and vulnerability analyses.
Young, R L; DelConte, A
1999-11-01
The aim of this 24-cycle study was to evaluate the effects on serum lipid concentrations of an oral contraceptive preparation containing 100 microg levonorgestrel and 20 microg ethinyl estradiol. Forty-two healthy women were enrolled, and lipid data were evaluated for the 28 women who completed 24 cycles of treatment with the preparation, taken for 21 days followed by placebo for 7 days. Concentrations of triglycerides, total cholesterol, high-density lipoprotein cholesterol, high-density lipoprotein cholesterol subfractions 2 and 3, low-density lipoprotein cholesterol, and apolipoproteins A-I and B were analyzed. Mean percentage changes from baseline were tested for significance by means of paired Student t tests. Total cholesterol, high-density lipoprotein cholesterol, high-density lipoprotein subfraction 2, and apolipoprotein A-I concentrations were not significantly changed from baseline; neither was the ratio of high-density lipoprotein subfraction 2 to high-density lipoprotein subfraction 3. Mean percentage increases in concentrations of triglycerides, high-density lipoprotein subfraction 3, apolipoprotein B, and low-density lipoprotein cholesterol, and increases in the ratios of total cholesterol to high-density lipoprotein cholesterol, low-density lipoprotein cholesterol to high-density lipoprotein cholesterol, and apolipoprotein B to apolipoprotein A-I, were significant (P < .05) at ≥1 cycle. By cycle 24, however, only the concentration of high-density lipoprotein subfraction 3 remained significantly elevated. Changes in the plasma lipid profiles among women receiving monophasic 100 microg levonorgestrel with 20 microg ethinyl estradiol were similar to those seen with other low-dose oral contraceptives, but by cycle 24 only 1 of 7 mean values remained significantly different from baseline.
The Statistical Value of Raw Fluorescence Signal in Luminex xMAP Based Multiplex Immunoassays
Breen, Edmond J.; Tan, Woei; Khan, Alamgir
2016-01-01
Tissue samples (plasma, saliva, serum or urine) from 169 patients classified as either normal or having one of seven possible diseases are analysed across three 96-well plates for the presence of 37 analytes using cytokine inflammation multiplexed immunoassay panels. Censoring of the concentration data caused problems for analysis of the low-abundance analytes; using fluorescence responses rather than concentrations allowed these analytes to be analysed. Mixed-effects analysis on the resulting fluorescence and concentration responses reveals that the combination of censoring and mapping the fluorescence responses to concentration values, through a 5PL curve, changed the observed analyte concentrations. Simulation verifies this by showing a dependence of the observed analyte concentration levels on the mean fluorescence response and its distribution. Departures from normality in the fluorescence responses can lead to differences in concentration estimates and unreliable probabilities for treatment effects. When fluorescence responses are normally distributed, t-tests on the fluorescence responses have greater statistical power than the corresponding concentration-based t-tests. We add evidence that the fluorescence response, unlike concentration values, does not require censoring, and we show with respect to differential analysis on the fluorescence responses that background correction is not required. PMID:27243383
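The 5PL mapping at the heart of this issue is easy to state in code. The sketch below (with invented curve parameters) shows the forward curve and its inverse, the step at which flat responses near the asymptotes turn into censored or unstable concentration estimates:

```python
import numpy as np

# Five-parameter logistic (5PL) standard curve commonly used in xMAP assays:
#   F(x) = d + (a - d) / (1 + (x/c)**b)**g
# a: response at zero concentration, d: response at saturating concentration,
# c: midpoint-related parameter, b: slope, g: asymmetry.
def fpl(x, a, d, c, b, g):
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

def fpl_inverse(y, a, d, c, b, g):
    """Map a fluorescence response back to a concentration."""
    return c * (((a - d) / (y - d)) ** (1.0 / g) - 1.0) ** (1.0 / b)

params = dict(a=50.0, d=30000.0, c=120.0, b=1.3, g=0.9)  # hypothetical fit
x = 80.0                                                  # e.g. pg/mL
y = fpl(x, **params)
print(f"conc {x} -> MFI {y:.0f} -> conc {fpl_inverse(y, **params):.1f}")
```

Responses at or below the zero-concentration asymptote a have no finite inverse, which is exactly where the censoring of concentration values arises; analysing the fluorescence responses directly sidesteps this.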
Wang, Mei; Gao, Bin; Tang, Deshan; Yu, Congrong
2018-04-01
Simultaneous aggregation and retention of nanoparticles can occur during their transport in porous media. In this work, the concurrent aggregation and transport of GO in saturated porous media were investigated under different combinations of temperature, cation type (valence), and electrolyte concentration. Increasing temperature (6-24 °C) at a relatively high electrolyte concentration (i.e., 50 mM for Na+, 1 mM for Ca2+, 1.75 mM for Mg2+, and 0.03 and 0.05 mM for Al3+) resulted in enhanced GO retention in the porous media. For instance, when the temperature increased from 6 to 24 °C, the GO recovery rate decreased from 31.08% to 6.53% for 0.03 mM Al3+ and from 27.11% to 0 for 0.05 mM Al3+. At the same temperature, increasing cation valence and electrolyte concentration also promoted GO retention. Although GO aggregation occurred in the electrolytes during transport, the deposition mechanisms of GO retention in the media depended on cation type (valence). For 50 mM Na+, surface deposition via secondary minima was the dominant GO retention mechanism. For multivalent cation electrolytes, GO aggregation was rapid, and other mechanisms such as physical straining and sedimentation also played important roles in controlling GO retention in the media. After passing through the columns, the GO particles in the effluents showed better stability with lower initial aggregation rates. This was probably because less stable GO particles with lower surface charge densities in the porewater were filtered by the porous media, leaving more stable GO particles with higher surface charge densities in the effluents. An advection-dispersion-reaction model was applied to simulate the GO breakthrough curves, and the simulations matched all the experimental data well.
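A minimal sketch of the kind of advection-dispersion-reaction model used to fit such breakthrough curves, assuming first-order attachment and an explicit upwind discretization. The column length, velocity, dispersion coefficient, attachment rate, and pulse duration below are illustrative, not the paper's fitted values.

```python
import numpy as np

# 1-D advection-dispersion-reaction model for a column breakthrough curve:
#   dC/dt = D d2C/dx2 - v dC/dx - k_att C
L, v, D, k_att = 0.1, 2.0e-4, 1.0e-7, 1.0e-4   # m, m/s, m^2/s, 1/s
nx = 200
dx = L / nx
dt = 0.4 * min(dx / v, dx**2 / (2.0 * D))      # CFL/diffusion-stable step
c = np.zeros(nx)
pulse_end, t_end = 1800.0, 7200.0              # 30-min influent pulse, 2-h run
t, btc = 0.0, []

while t < t_end:
    c_in = 1.0 if t < pulse_end else 0.0       # normalized influent concentration
    left = np.concatenate(([c_in], c[:-1]))    # inlet ghost node
    right = np.concatenate((c[1:], [c[-1]]))   # zero-gradient outlet
    adv = -v * (c - left) / dx                 # upwind advection
    disp = D * (left - 2.0 * c + right) / dx**2
    c = c + dt * (adv + disp - k_att * c)
    t += dt
    btc.append((t, c[-1]))                     # effluent concentration at outlet

print(f"peak effluent C/C0 = {max(cc for _, cc in btc):.2f}")
```

In an actual fit, v, D, and k_att would be estimated by matching the simulated curve to the measured effluent concentrations.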
Simulation of gross and net erosion of high-Z materials in the DIII-D divertor
Wampler, William R.; Ding, R.; Stangeby, P. C.; ...
2015-12-17
The three-dimensional Monte Carlo code ERO has been used to simulate dedicated DIII-D experiments in which Mo and W samples with different sizes were exposed to controlled and well-diagnosed divertor plasma conditions to measure the gross and net erosion rates. Experimentally, the net erosion rate is significantly reduced due to the high local redeposition probability of eroded high-Z materials, which according to the modelling is mainly controlled by the electric field and plasma density within the Chodura sheath. Similar redeposition ratios were obtained from ERO modelling with three different sheath models for small angles between the magnetic field and the material surface, mainly because of their similar mean ionization lengths. The modelled redeposition ratios are close to the measured value. Decreasing the potential drop across the sheath can suppress both gross and net erosion because sputtering yield is decreased due to lower incident energy while the redeposition ratio is not reduced owing to the higher electron density in the Chodura sheath. Taking into account material mixing in the ERO surface model, the net erosion rate of high-Z materials is shown to be strongly dependent on the carbon impurity concentration in the background plasma; higher carbon concentration can suppress net erosion. As a result, the principal experimental results such as net erosion rate and profile and redeposition ratio are well reproduced by the ERO simulations.
Striped bass stocks and concentrations of polychlorinated biphenyls
Fabrizio, Mary C.; Sloan, Ronald J.; O'Brien, John F.
1991-01-01
Harvest restrictions on striped bass Morone saxatilis fisheries in Atlantic coastal states were relaxed in 1990, but consistent, coastwide regulations of the harvest have been difficult to implement because of the mixed-stock nature of the fisheries and the recognized contamination of Hudson River fish by polychlorinated biphenyls (PCBs). We examined PCB concentrations and stock of origin of coastal striped bass to better understand the effects of these two factors on the composition of the harvest. The probability of observing differences in PCB concentration among fish from the Hudson River stock and the 'southern' group (Chesapeake Bay and Roanoke River stocks combined) was investigated with the logit model (a linear model for analysis of categorical data). Although total PCB concentrations were highly variable among fish from the two groups, striped bass classified as Hudson River stock had a significantly greater probability of having PCB concentrations equal to or greater than 2.00 mg/kg than did fish belonging to the southern group for all age- and size-classes examined. There was a significantly greater probability of observing total PCB concentrations equal to or exceeding 2.00 mg/kg in fish that were 5, 6, and 7 or more years old, and this probability increased linearly with age. We observed similar results when we examined the effect of size on total PCB concentration. The minimum-size limit estimated to permit escapement of fish to sustain stock production is 610 mm total length. Unless total PCB concentrations decrease in striped bass, it is likely that many harvestable fish will have concentrations that exceed the tolerance limit set by the U.S. Food and Drug Administration.
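The logit model described here has a simple closed form for the exceedance probability. The sketch below evaluates P(total PCB ≥ 2.00 mg/kg) as a function of age and stock of origin; the coefficients are invented for illustration and are not the fitted values from the study.

```python
import numpy as np

# Logit model for the probability that total PCB >= 2.00 mg/kg:
#   logit(p) = b0 + b1*age + b2*is_hudson
# Coefficients below are hypothetical.
b0, b1, b2 = -3.0, 0.35, 1.6

def p_exceed(age, is_hudson):
    eta = b0 + b1 * age + b2 * float(is_hudson)
    return 1.0 / (1.0 + np.exp(-eta))

for age in (5, 6, 7):
    print(f"age {age}: Hudson {p_exceed(age, True):.2f}   "
          f"southern {p_exceed(age, False):.2f}")
```

On the logit scale the age effect is linear, matching the abstract's finding that the exceedance probability increased linearly with age.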
Anastasiadis, Anastasios; Onal, Bulent; Modi, Pranjal; Turna, Burak; Duvdevani, Mordechai; Timoney, Anthony; Wolf, J Stuart; De La Rosette, Jean
2013-12-01
This study aimed to explore the relationship between stone density and outcomes of percutaneous nephrolithotomy (PCNL) using the Clinical Research Office of the Endourological Society (CROES) PCNL Global Study database. Patients undergoing PCNL treatment were assigned to a low-stone-density [LSD, ≤ 1000 Hounsfield units (HU)] or high-stone-density (HSD, > 1000 HU) group based on the radiological density of the primary renal stone. Preoperative characteristics and outcomes were compared in the two groups. Retreatment for residual stones was more frequent in the LSD group. The overall stone-free rate achieved was higher in the HSD group (79.3% vs 74.8%, p = 0.113). By univariate regression analysis, the probability of achieving a stone-free outcome peaked at approximately 1250 HU; densities below or above this value were associated with lower treatment success, particularly at very low HU values. With increasing radiological stone density, operating time decreased to a minimum at approximately 1000 HU, then increased with further increase in stone density. Multivariate non-linear regression analysis showed a similar relationship between the probability of a stone-free outcome and stone density. Higher treatment success rates were found with low stone burden, pelvic stone location and use of pneumatic lithotripsy. Very low and high stone densities are associated with lower rates of treatment success and longer operating time in PCNL. Preoperative assessment of stone density may help in the selection of treatment modality for patients with renal stones.
The propagator of stochastic electrodynamics
NASA Astrophysics Data System (ADS)
Cavalleri, G.
1981-01-01
The "elementary propagator" for the position of a free charged particle subject to the zero-point electromagnetic field with Lorentz-invariant spectral density ~ω3 is obtained. The nonstationary process for the position is solved by the stationary process for the acceleration. The dispersion of the position elementary propagator is compared with that of quantum electrodynamics. Finally, the evolution of the probability density is obtained starting from an initial distribution confined in a small volume and with a Gaussian distribution in the velocities. The resulting probability density for the position turns out to be equal, to within radiative corrections, to ψψ* where ψ is the Kennard wave packet. If the radiative corrections are retained, the present result is new since the corresponding expression in quantum electrodynamics has not yet been found. Besides preceding quantum electrodynamics for this problem, no renormalization is required in stochastic electrodynamics.
Dual Approach To Superquantile Estimation And Applications To Density Fitting
2016-06-01
We incorporate additional constraints to improve the fidelity of density estimates in tail regions. We limit our investigation to data with heavy tails, where risk quantification is typically the most difficult. Demonstrations are provided in the form of samples of various heavy-tailed distributions. Subject terms: probability density estimation, epi-splines, optimization, risk quantification.
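The dual view of the superquantile referenced in the title is compact enough to state directly. A sketch of the Rockafellar-Uryasev formula applied to a heavy-tailed sample; the distribution and the risk level are arbitrary choices for illustration, not the thesis's test cases.

```python
import numpy as np

def superquantile(samples, alpha):
    """Superquantile (CVaR_alpha) via the Rockafellar-Uryasev dual formula:
       CVaR_alpha(X) = min_c { c + E[(X - c)+] / (1 - alpha) },
    where the minimizer c is the alpha-quantile of X."""
    c = np.quantile(samples, alpha)
    return c + np.mean(np.maximum(samples - c, 0.0)) / (1.0 - alpha)

rng = np.random.default_rng(0)
x = rng.pareto(2.5, size=100_000) + 1.0      # heavy-tailed sample
print(f"q_0.95  = {np.quantile(x, 0.95):.3f}")
print(f"sq_0.95 = {superquantile(x, 0.95):.3f}  (mean of the worst 5%)")
```

Because the superquantile averages over the entire tail beyond the quantile, it is far more sensitive to tail fidelity, which is why constrained density estimates in the tail matter for risk quantification.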
Mode switching in volcanic seismicity: El Hierro 2011-2013
NASA Astrophysics Data System (ADS)
Roberts, Nick S.; Bell, Andrew F.; Main, Ian G.
2016-05-01
The Gutenberg-Richter b value is commonly used in volcanic eruption forecasting to infer material or mechanical properties from earthquake distributions. Such studies typically analyze discrete time windows or phases, but the choice of such windows is subjective and can introduce significant bias. Here we minimize this sample bias by iteratively sampling catalogs with randomly chosen windows and then stacking the resulting probability density functions for the estimated b̃ value to determine a net probability density function. We examine data from the El Hierro seismic catalog during a period of unrest in 2011-2013 and demonstrate clear multimodal behavior. Individual modes are relatively stable in time, but the most probable b̃ value intermittently switches between modes, one of which is similar to that of tectonic seismicity. Multimodality is primarily associated with intermittent activation and cessation of activity in different parts of the volcanic system rather than with any systematic inferred underlying process.
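A schematic version of the random-window stacking procedure, using the Aki-Utsu maximum-likelihood estimator on a synthetic two-mode catalog. All numbers are invented; the point is only to show how stacking estimates from randomly placed, randomly sized windows exposes multimodality.

```python
import numpy as np

def b_value(mags, mc, dm=0.0):
    """Aki-Utsu maximum-likelihood b value (dm is Utsu's binning correction)."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

rng = np.random.default_rng(1)
mc = 1.0
# Synthetic catalog whose b value switches mode halfway through (1.0 -> 1.5);
# above mc, magnitudes are exponential with mean log10(e)/b.
catalog = np.concatenate([
    mc + rng.exponential(np.log10(np.e) / 1.0, 5000),
    mc + rng.exponential(np.log10(np.e) / 1.5, 5000),
])

# Stack b estimates from randomly chosen windows into a net density.
estimates = []
for _ in range(4000):
    i = rng.integers(0, len(catalog) - 200)
    w = int(rng.integers(200, 2000))           # random window length
    estimates.append(b_value(catalog[i:i + w], mc))

hist, edges = np.histogram(estimates, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
peaks = [c for i, c in enumerate(centers[1:-1], 1)
         if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
print("local maxima of the stacked density near b =",
      [round(p, 2) for p in peaks[:4]])
```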
NASA Astrophysics Data System (ADS)
Dufty, J. W.
1984-09-01
Diffusion of a tagged particle in a fluid with uniform shear flow is described. The continuity equation for the probability density describing the position of the tagged particle is considered. The diffusion tensor is identified by expanding the irreversible part of the probability current to first order in the gradient of the probability density, but with no restriction on the shear rate. The tensor is expressed as the time integral of a nonequilibrium autocorrelation function for the velocity of the tagged particle in its local fluid rest frame, generalizing the Green-Kubo expression to the nonequilibrium state. The tensor is evaluated from results obtained previously for the velocity autocorrelation function that are exact for Maxwell molecules in the Boltzmann limit. The effects of viscous heating are included and the dependence on frequency and shear rate is displayed explicitly. The mode-coupling contributions to the frequency and shear-rate dependent diffusion tensor are calculated.
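The equilibrium limit of the expression described above is the ordinary Green-Kubo integral D = ∫₀^∞ ⟨v(0)v(t)⟩ dt. The sketch below evaluates that baseline numerically for an Ornstein-Uhlenbeck velocity process, for which the exact answer D = σ_v² τ is available; the frequency and shear-rate dependence of the full nonequilibrium tensor is beyond this toy example.

```python
import numpy as np
from scipy.signal import lfilter

# Green-Kubo: D = integral_0^inf <v(0) v(t)> dt, checked against an
# Ornstein-Uhlenbeck velocity process where D = sigma_v**2 * tau exactly.
rng = np.random.default_rng(2)
tau, sigma_v, dt, n = 1.0, 1.0, 0.01, 500_000
a = np.exp(-dt / tau)                       # exact OU update coefficient
s = sigma_v * np.sqrt(1.0 - a * a)
v = lfilter([s], [1.0, -a], rng.standard_normal(n))[10_000:]  # drop burn-in

nlag = 800                                  # correlate out to 8 * tau
vacf = np.array([np.mean(v[:len(v) - k] * v[k:]) for k in range(nlag)])
D = np.trapz(vacf, dx=dt)
print(f"Green-Kubo D = {D:.3f}   exact sigma_v^2 * tau = {sigma_v**2 * tau:.3f}")
```

In the sheared case the paper's generalization replaces the equilibrium autocorrelation with one evaluated in the local fluid rest frame, so the integrand, and hence D, acquires shear-rate dependence.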
Nonequilibrium dynamics of a pure dry friction model subjected to colored noise
NASA Astrophysics Data System (ADS)
Geffert, Paul M.; Just, Wolfram
2017-06-01
We investigate the impact of noise on a two-dimensional simple paradigmatic piecewise-smooth dynamical system. For that purpose, we consider the motion of a particle subjected to dry friction and colored noise. The finite correlation time of the noise provides an additional dimension in phase space, causes a nontrivial probability current, and establishes a proper nonequilibrium regime. Furthermore, the setup allows for the study of stick-slip phenomena, which show up as a singular component in the stationary probability density. Analytic insight can be provided by application of the unified colored noise approximation, developed by Jung and Hänggi [Phys. Rev. A 35, 4464(R) (1987), 10.1103/PhysRevA.35.4464]. The analysis of probability currents and of power spectral densities underpins the observed stick-slip transition, which is related with a critical value of the noise correlation time.
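A quick Euler-Maruyama experiment (parameter values invented) reproduces the qualitative picture: with colored Ornstein-Uhlenbeck forcing, a finite fraction of realizations sits in the stick state, the singular component of the stationary density at v = 0.

```python
import numpy as np

# Dry-friction particle driven by colored (Ornstein-Uhlenbeck) noise:
#   dv/dt  = -mu * sign(v) + eta
#   d(eta) = -(eta / tau) dt + sqrt(2 * sigma2 / tau) dW
rng = np.random.default_rng(3)
mu, tau, sigma2 = 1.0, 0.5, 1.0
dt, n_steps, n_part = 2e-3, 10_000, 10_000

v = np.zeros(n_part)
eta = rng.normal(0.0, np.sqrt(sigma2), n_part)   # stationary OU start
for _ in range(n_steps):
    v += dt * (-mu * np.sign(v) + eta)
    eta += (-dt * eta / tau
            + np.sqrt(2.0 * sigma2 * dt / tau) * rng.standard_normal(n_part))

# Mass inside the numerical stick band around v = 0:
stick = np.mean(np.abs(v) <= 2.0 * mu * dt)
print(f"P(stick) ~ {stick:.2f} (singular component at v = 0)")
```

Whenever |eta| stays below the friction strength mu, the deterministic drift pins the velocity to zero until the noise next exceeds mu, which is the stick-slip mechanism the abstract relates to the noise correlation time.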
Multipartite entanglement characterization of a quantum phase transition
NASA Astrophysics Data System (ADS)
Costantini, G.; Facchi, P.; Florio, G.; Pascazio, S.
2007-07-01
A probability density characterization of multipartite entanglement is tested on the one-dimensional quantum Ising model in a transverse field. The average and second moment of the probability distribution are numerically shown to be good indicators of the quantum phase transition. We comment on multipartite entanglement generation at a quantum phase transition.
ERIC Educational Resources Information Center
Nair, Vishnu K. K.; Biedermann, Britta; Nickels, Lyndsey
2017-01-01
Purpose: Previous research has shown that the language-learning mechanism is affected by bilingualism resulting in a novel word learning advantage for bilingual speakers. However, less is known about the factors that might influence this advantage. This article reports an investigation of 2 factors: phonotactic probability and phonological…
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
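A generic Tikhonov sketch for a discretized Fredholm integral of the first kind, b = A f. A Gaussian blur kernel stands in for the actual sin i projection kernel, and the noise level and regularization parameters are arbitrary, so this illustrates only the mechanics of the method, not the authors' parameter-selection procedure.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
width = 0.05
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / width) ** 2)
A *= dx / (np.sqrt(2.0 * np.pi) * width)               # stand-in kernel

f_true = np.exp(-0.5 * ((x - 0.5) / 0.08) ** 2)        # "true" density shape
rng = np.random.default_rng(4)
b = A @ f_true + 1e-3 * rng.standard_normal(n)         # noisy observations

Lmat = np.diff(np.eye(n), n=2, axis=0)                 # second-difference penalty
def tikhonov(lam):
    # Minimizes ||A f - b||^2 + lam * ||L f||^2 in closed form.
    return np.linalg.solve(A.T @ A + lam * Lmat.T @ Lmat, A.T @ b)

for lam in (1e-9, 1e-6, 1e-3):
    f_hat = tikhonov(lam)
    err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
    print(f"lambda = {lam:.0e}: relative error = {err:.3f}")
```

The regularization term penalizes roughness and stabilizes the otherwise ill-posed inversion; choosing lambda is exactly the step for which the paper proposes its simple procedure.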
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.
Factoring uncertainty into restoration modeling of in-situ leach uranium mines
Johnson, Raymond H.; Friedel, Michael J.
2009-01-01
Postmining restoration is one of the greatest concerns for uranium in-situ leach (ISL) mining operations. The ISL-affected aquifer needs to be returned to conditions specified in the mining permit (either premining or other specified conditions). When uranium ISL operations are completed, postmining restoration is usually achieved by injecting reducing agents into the mined zone. The objective of this process is to restore the aquifer to premining conditions by reducing the solubility of uranium and other metals in the ground water. Reactive transport modeling is a potentially useful method for simulating the effectiveness of proposed restoration techniques. While reactive transport models can be useful, they are a simplification of reality that introduces uncertainty through the model conceptualization, parameterization, and calibration processes. For this reason, quantifying the uncertainty in simulated temporal and spatial hydrogeochemistry is important for postremedial risk evaluation of metal concentrations and mobility. Quantifying the range of uncertainty in key predictions (such as uranium concentrations at a specific location) can be achieved using forward Monte Carlo or other inverse modeling techniques (trial-and-error parameter sensitivity, calibration constrained Monte Carlo). These techniques provide simulated values of metal concentrations at specified locations that can be presented as nonlinear uncertainty limits or probability density functions. Decisionmakers can use these results to better evaluate environmental risk as future metal concentrations with a limited range of possibilities, based on a scientific evaluation of uncertainty.
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2010 CFR
2010-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2012 CFR
2012-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2014 CFR
2014-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2013 CFR
2013-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
27 CFR 24.180 - Use of concentrated and unconcentrated fruit juice.
Code of Federal Regulations, 2011 CFR
2011-04-01
... density, or to 22 degrees Brix, or to any degree of Brix between its original density and 22 degrees Brix... between its original density and 22 degrees Brix. The proprietor, prior to using concentrated fruit juice...
Systematic Onset of Periodic Patterns in Random Disk Packings
NASA Astrophysics Data System (ADS)
Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A. C.
2018-04-01
We report evidence of a surprising systematic onset of periodic patterns in very tall piles of disks deposited randomly between rigid walls. Independently of the pile width, periodic structures are always observed in monodisperse deposits containing up to 10^7 disks. The probability density function of the lengths of the disordered transient phases that precede the onset of periodicity displays an approximately exponential tail. These disordered transients may become very large when the channel width grows without bound. For narrow channels, the probability density of finding periodic patterns of a given period displays a series of discrete peaks, which, however, are washed out completely when the channel width grows.
NASA Technical Reports Server (NTRS)
Wood, B. J.; Ablow, C. M.; Wise, H.
1973-01-01
For a number of candidate materials of construction for the dual air density explorer satellites, the rate of oxygen atom loss by adsorption, surface reaction, and recombination was determined as a function of surface and temperature. Plain aluminum and anodized aluminum surfaces exhibit a collisional atom-loss probability alpha of approximately 0.01 in the temperature range 140-360 K, together with a finite initial sticking probability. For SiO-coated aluminum in the same temperature range, alpha and So are approximately 0.001. Atom loss on gold is relatively rapid (alpha of approximately 0.01). The So for gold varies between 0.25 and unity in the temperature range 360-140 K.
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2011-07-01
We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density p_n^XY(x, y), which represents the probability density for nucleation at position x within a gap of size y. Our proposed functional form for p_n^XY(x, y) describes the statistical behavior of the system excellently. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.
NASA Technical Reports Server (NTRS)
Wilson, Lonnie A.
1987-01-01
Bragg-cell receivers are employed in specialized Electronic Warfare (EW) applications for the measurement of frequency. Bragg-cell receiver behavior is fully characterized for simple RF emitter signals; this receiver is early in its development cycle when compared to the IFM receiver. Functional mathematical models for the Bragg-cell receiver are derived and presented in this report. Theoretical analysis and digital computer signal processing results are presented for the Bragg-cell receiver. Probability density function analyses are performed for the output frequency. The observed probability density functions depart from the assumed distributions for wideband and complex RF signals. This analysis is significant for high-resolution, fine-grain EW Bragg-cell receiver systems.
An analytical approach to gravitational lensing by an ensemble of axisymmetric lenses
NASA Technical Reports Server (NTRS)
Lee, Man Hoi; Spergel, David N.
1990-01-01
The problem of gravitational lensing by an ensemble of identical axisymmetric lenses randomly distributed on a single lens plane is considered and a formal expression is derived for the joint probability density of finding shear and convergence at a random point on the plane. The amplification probability for a source can be accurately estimated from the distribution in shear and convergence. This method is applied to two cases: lensing by an ensemble of point masses and by an ensemble of objects with Gaussian surface mass density. There is no convergence for point masses whereas shear is negligible for wide Gaussian lenses.
NASA Astrophysics Data System (ADS)
Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.
2018-05-01
As renewable energies are increasingly integrated into power systems, there is increasing interest in the stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by the penetration of renewables and to analyse its impacts on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subject to such random excitation, the Joint Probability Density Function (JPDF) of the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted, with special measures taken so that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered for a single-machine infinite-bus power system. The numerical analysis yields the same result as the Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
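For the single-machine infinite-bus case, the Monte Carlo benchmark mentioned above is straightforward to reproduce in outline. A sketch with invented machine parameters (M, damping, mechanical and maximum electrical power, noise intensity); the 2-D histogram of phase angle and angular velocity approximates the stationary JPDF governed by the generalized FPK equation.

```python
import numpy as np

# Euler-Maruyama Monte Carlo for a noisy swing equation:
#   d(delta) = omega dt
#   M d(omega) = (Pm - Pmax sin(delta) - D_damp * omega) dt + sqrt(2*eps) dW
M, D_damp, Pm, Pmax, eps = 1.0, 0.3, 0.4, 1.0, 0.02
dt, n_steps, n_traj = 2e-3, 20_000, 4_000
rng = np.random.default_rng(5)

delta = np.full(n_traj, np.arcsin(Pm / Pmax))     # stable operating point
omega = np.zeros(n_traj)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_traj)
    omega += ((Pm - Pmax * np.sin(delta) - D_damp * omega) * dt
              + np.sqrt(2.0 * eps) * dW) / M
    delta += omega * dt

H, de, oe = np.histogram2d(delta, omega, bins=60, density=True)
i, j = np.unravel_index(H.argmax(), H.shape)
print(f"JPDF peak near delta = {0.5*(de[i]+de[i+1]):.2f} rad, "
      f"omega = {0.5*(oe[j]+oe[j+1]):.2f}")
```

The histogram peak sits at the deterministic operating point, and its spread grows with the excitation intensity eps, which is the comparison the paper uses to validate its assumed-PDF solution.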
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-04-04
Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e., GDT-TS scores) of protein structural models and uses these errors to estimate probability density distributions for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which was officially ranked 3rd out of 143 predictors. This good performance shows that Qprob is effective at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. Qprob is freely available as a web server at http://calla.rnet.missouri.edu/qprob/.
Lipid transfer particle catalyzes transfer of carotenoids between lipophorins of Bombyx mori.
Tsuchida, K; Arai, M; Tanaka, Y; Ishihara, R; Ryan, R O; Maekawa, H
1998-12-01
The yellow color of Bombyx mori hemolymph is due to the presence of carotenoids, which are primarily associated with lipophorin particles. Carotenoids were extracted from high density lipophorin (HDLp) of B. mori and analyzed by HPLC. HDLp contained 33 micrograms of carotenoids per mg of protein. Over 90% of carotenoids were lutein, while alpha-carotene and beta-carotene were minor components. When larval hemolymph was subjected to density gradient ultracentrifugation, a second minor yellow band was present, which was identified as B. mori lipid transfer particle (LTP). During the other life stages examined, however, this second band was not visible. To determine if coloration of LTP may fluctuate during development, we determined its concentration in hemolymph and compared it to that of lipophorin. Both proteins were present during all life stages and their concentrations gradually increased. The ratio of lipophorin:LTP was 10-15:1 during the fourth and fifth instar larval stages, and 20-30:1 during the pupal and adult stages. Thus, there was no correlation between the yellow color attributed to LTP and its hemolymph concentration. It is possible that yellow coloration of the LTP fraction corresponds to developmental stages when the particle is active in carotene transport. To determine if LTP is capable of facilitating carotene transfer, we took advantage of a white hemolymph B. mori strain which, when fed artificial diet containing a low carotene content, gives rise to a lipophorin that is nearly colorless. A spectrophotometric, carotene-specific transfer assay was developed which employed wild type, carotene-rich HDLp as donor particle and colorless low density lipophorin, derived from the white hemolymph strain animals, as acceptor particle. In incubations lacking LTP, carotenes remained associated with HDLp, while inclusion of LTP induced a redistribution of carotenes between the donor and acceptor in a time- and concentration-dependent manner. Time course studies suggested the rate of LTP-mediated carotene transfer was relatively slow, requiring up to 4 h to reach equilibrium. By contrast, studies employing 3H-diacylglycerol-labeled HDLp as donor particle in lipid transfer assays revealed a rapid equilibration of label between the particles. Thus, it is plausible that the slower rate of LTP-mediated carotene transfer is due to its probable sequestration in the core of HDLp.
USING THE HERMITE POLYNOMIALS IN RADIOLOGICAL MONITORING NETWORKS.
Benito, G; Sáez, J C; Blázquez, J B; Quiñones, J
2018-03-15
The most interesting events in a Radiological Monitoring Network correspond to higher values of H*(10). The higher doses cause skewness in the probability density function (PDF) of the records, which is then no longer Gaussian. Within this work, the probability of having a dose more than 2 standard deviations above the mean is proposed as a surveillance quantity for higher doses. This probability is estimated by reconstructing the PDF with Hermite polynomials. The result is a probability of approximately 6 ± 1%, well above the 2.5% corresponding to a Gaussian PDF, which may be of interest in the design of alarm levels for higher doses.
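A sketch of the idea, assuming a Gram-Charlier A expansion (the classical Hermite-polynomial correction to a Gaussian PDF); the synthetic dose records and the expansion order are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def gram_charlier_tail(samples, z0=2.0):
    """P(X > mean + z0*sigma) from a Gram-Charlier A expansion, which corrects
    the Gaussian PDF with Hermite-polynomial terms for skewness and kurtosis."""
    z = (samples - samples.mean()) / samples.std()
    g1 = np.mean(z**3)                 # skewness
    g2 = np.mean(z**4) - 3.0           # excess kurtosis
    he3 = lambda x: x**3 - 3*x         # probabilists' Hermite polynomials
    he4 = lambda x: x**4 - 6*x**2 + 3
    pdf = lambda x: norm.pdf(x) * (1 + g1/6*he3(x) + g2/24*he4(x))
    p, _ = quad(pdf, z0, 10.0)
    return p

rng = np.random.default_rng(6)
doses = rng.gamma(8.0, 1.0, 50_000)    # skewed stand-in for H*(10) records
print(f"Gram-Charlier P(>2 sigma) = {gram_charlier_tail(doses):.3f}")
print(f"empirical     P(>2 sigma) = "
      f"{np.mean(doses > doses.mean() + 2*doses.std()):.3f}")
print("Gaussian      P(>2 sigma) = 0.023")
```

For right-skewed records the Hermite-corrected tail probability comfortably exceeds the Gaussian value, which is the effect the abstract quantifies as roughly 6% versus 2.5%.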
Impacts of savanna trees on forage quality for a large African herbivore
De Kroon, Hans; Prins, Herbert H. T.
2008-01-01
Recently, cover of large trees in African savannas has rapidly declined due to elephant pressure, frequent fires and charcoal production. The reduction in large trees could have consequences for large herbivores through a change in forage quality. In Tarangire National Park, in northern Tanzania, we studied the impact of large savanna trees on forage quality for wildebeest by collecting samples of dominant grass species in open grassland and under and around large Acacia tortilis trees. Grasses growing under trees had a much higher forage quality than grasses from the open field, indicated by a more favourable leaf/stem ratio and higher protein and lower fibre concentrations. Analysing the grass leaf data with a linear programming model indicated that large savanna trees could be essential for the survival of wildebeest, the dominant herbivore in Tarangire. Due to the high fibre content and low nutrient and protein concentrations of grasses from the open field, maximum fibre intake is reached before nutrient requirements are satisfied. All requirements can only be satisfied by combining forage from open grassland with forage from either under or around tree canopies. Forage quality was also higher around dead trees than in the open field, so forage quality does not decline immediately after trees die, which explains why the negative effects of reduced tree numbers probably go unnoticed at first. In conclusion, our results suggest that continued destruction of large trees could affect future numbers of large herbivores in African savannas, and better protection of large trees is probably necessary to sustain high animal densities in these ecosystems. PMID:18309522
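The diet argument here is a small linear program. The toy version below (all nutrient numbers invented) checks whether daily requirements can be met from open-grassland forage alone under a fibre-intake ceiling, and then with under-canopy forage added:

```python
import numpy as np
from scipy.optimize import linprog

# Choose daily intake (kg) of [open-grassland, under-canopy] grass to meet
# protein and energy requirements without exceeding the fibre ceiling.
protein = np.array([60.0, 110.0])    # g protein per kg forage (invented)
energy  = np.array([7.0, 9.0])       # MJ per kg (invented)
fibre   = np.array([700.0, 550.0])   # g fibre per kg (invented)

need_protein, need_energy, max_fibre = 900.0, 80.0, 7000.0

for allowed, ub in (("open only", [None, 0.0]),
                    ("open + canopy", [None, None])):
    res = linprog(c=[1.0, 1.0],                         # minimize total intake
                  A_ub=[[-protein[0], -protein[1]],     # protein >= requirement
                        [-energy[0], -energy[1]],       # energy  >= requirement
                        [fibre[0], fibre[1]]],          # fibre   <= ceiling
                  b_ub=[-need_protein, -need_energy, max_fibre],
                  bounds=list(zip([0.0, 0.0], ub)))
    print(allowed, "->", "feasible" if res.success else "infeasible (fibre limit)")
```

With the numbers above, the open-grassland-only diet hits the fibre ceiling before meeting the protein requirement, while adding under-canopy forage makes the program feasible, mirroring the paper's conclusion.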
On the presence of prostatic secretion protein in rat seminal fluid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borgstroem, E.; Pousette, A.; Bjoerk, P.
1981-01-01
The copulating plug collected from the tip of the penis from rats immediately after decapitation contains a protein very similar and probably identical to PSP (prostatic secretion protein); this protein has earlier been purified from rat prostatic cytosol and characterized. The protein present in the copulating plug interacts with (3H)estramustine and binds to the antibody raised against rat PSP. The concentration of the protein in the copulating plug is 400 ng/mg of total protein, when measured using the radioimmunoassay technique developed earlier for measurement of PSP in rat prostate. The (3H)estramustine-protein complex formed in a preparation of the copulating plug has an apparent molecular weight of about 50,000 and a sedimentation coefficient of about 3S when analyzed using sucrose density gradient centrifugation. The complex was retained on Concanavalin-A Sepharose indicating that the protein is a glycoprotein. Binding of the complex was also observed on hydroxylapatite and DEAE-Sephadex columns, from which it was eluted at 0.18 M KCl. Light microscope autoradiograms of rat sperms incubated with 125I-labeled PSP indicated that PSP is bound to all parts of the sperms. A macromolecule interacting with the PSP-antibodies is also present in human seminal fluid but at a concentration considerably lower than in rat seminal fluid. The present study shows that a macromolecule probably identical to prostatic secretion protein is present in the copulating plug from the rat. The biological role of this protein in normal male fertility is discussed.
NASA Astrophysics Data System (ADS)
Mit'kin, A. S.; Pogorelov, V. A.; Chub, E. G.
2015-08-01
We consider the method of constructing the suboptimal filter on the basis of approximating the a posteriori probability density of the multidimensional Markov process by the Pearson distributions. The proposed method can efficiently be used for approximating asymmetric, excessive, and finite densities.
Ebner, T; Shebl, O; Moser, M; Mayer, R B; Arzt, W; Tews, G
2011-01-01
Sperm DNA fragmentation is increased in poor-quality semen samples and correlates with failed fertilization, impaired preimplantation development and reduced pregnancy outcome. Common sperm preparation techniques may reduce the percentage of strandbreak-positive spermatozoa, but, to date, there is no reliable approach to exclusively accumulate strandbreak-free spermatozoa. To analyse the efficiency of special sperm selection chambers (Zech-selectors made of glass or polyethylene) in terms of strandbreak reduction, 39 subfertile men were recruited and three probes (native, density gradient and Zech-selector) were used to check for strand breaks using the sperm chromatin dispersion test. The mean percentage of affected spermatozoa in the ejaculate was 15.8 ± 7.8% (range 5.0–42.1%). Density gradient centrifugation did not significantly improve the quality of the spermatozoa selected (14.2 ± 7.0%). However, glass chambers completely removed 90% of spermatozoa showing strand breaks and polyethylene chambers removed 76%. Both types of Zech-selectors were equivalent in their efficiency, significantly reduced DNA damage (P < 0.001) and, with respect to this, performed better than density gradient centrifugation (P < 0.001). As far as is known, this is the first report on a sperm preparation technique concentrating spermatozoa unaffected in terms of DNA damage. The special chambers most probably select for sperm motility and/or maturity.
USDA-ARS?s Scientific Manuscript database
Doramectin concentration in the serum of pastured cattle treated repeatedly at 28 d intervals at two dosage rates was used to predict the probability that cattle fever ticks could successfully feed to repletion during the interval between treatments. At ~270 µg/kg, the doramectin concentration dropp...
Observations of the eruptions of July 22 and August 7, 1980, at Mount St. Helens, Washington
Hoblitt, Richard P.
1986-01-01
The explosive eruptions of July 22 and August 7, 1980, at Mount St. Helens, Wash., both included multiple eruptive pulses. The beginnings of three of the pulses-two on July 22 and one on August 7-were witnessed and photographed. Each of these three began with a fountain of gases and pyroclasts that collapsed around the vent and generated a pyroclastic density flow. Significant vertical-eruption columns developed only after the density flows were generated. This behavior is attributable to either an increase in the gas content of the eruption jet or a decrease in vent radius with time. An increase in the gas content may have occurred as the vent was cleared (by expulsion of a plug of pyroclasts) or as the eruption began to tap deeper, gas-rich magma after first expelling the upper, gas-depleted part of the magma body. An effective decrease of the vent radius with time may have occurred as the eruption originated from progressively deeper levels in the vent. All of these processes-vent clearing; tapping of deeper, gas-rich magma; and effective decrease in vent radius-probably operated to some extent. A 'relief-valve' mechanism is proposed here to account for the occurrence of multiple eruptive pulses. This mechanism requires that the conduit above the magma body be filled with a bed of pyroclasts, and that the vesiculation rate in the magma body be inadequate to sustain continuous eruption. During a repose interval, vesiculation of the magma body would cause gas to flow upward through the bed of pyroclasts. If the rate at which the magma produced gas exceeded the rate at which gas escaped to the atmosphere, the vertical pressure difference across the bed of pyroclastic debris would increase, as would the gas-flow rate. Eventually a gas-flow rate would be achieved that would suddenly diminish the ability of the bed to maintain a pressure difference between the magma body and the atmosphere. The bed of pyroclasts would then be expelled (that is, the relief valve would open) and an eruption would commence. During the eruption, gas would be lost faster than it could be replaced by vesiculation, so the gas-flow rate in the conduit would decrease. Eventually the gas-flow rate would decrease to a value that would be inadequate to expel pyroclasts, so the conduit would again become choked with pyroclasts (that is, the relief valve would close). Another period of repose would commence. The eruption/repose sequence would be repeated until gas-production rates were inadequate to reopen the valve, either because the depth of the pyroclast bed had become too great, the volatile content of the magma had become too low, or the magma had been expended. A timed sequence of photographs of a pyroclastic density flow on August 7 indicates that, in general, the velocity of the flow front was determined by the underlying topography. Observations and details of the velocity/topography relationship suggest that both pyroclastic flows and pyroclastic surges formed. The following mechanism is consistent with the data. During initial fountain collapse and when the flow passed over steep, irregular terrain, a highly inflated suspension of gases and pyroclasts formed. In this suspension, the pyroclasts underwent rapid differential settling according to size and density; a relatively low-concentration, fine-grained upper phase formed over a relatively high-concentration coarse-grained phase. 
The low-particle-concentration phase (the pyroclastic surge) was subject to lower internal friction than the basal high-concentration phase (the pyroclastic flow), and so accelerated away from it. The surge advanced until it had deposited so much of its solid fraction that its net density became less than that of the ambient air. At this point it rose convectively off the ground, quickly decelerated, and was overtaken by the pyroclastic flow. The behavior of the flow of August 7 suggests that a pyroclastic density flow probably expands through the ingestion of air.
Improving experimental phases for strong reflections prior to density modification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uervirojnangkoorn, Monarin; Hilgenfeld, Rolf; Terwilliger, Thomas C.
Experimental phasing of diffraction data from macromolecular crystals involves deriving phase probability distributions. These distributions are often bimodal, making their weighted average, the centroid phase, improbable, so that electron-density maps computed using centroid phases are often non-interpretable. Density modification brings in information about the characteristics of electron density in protein crystals. In successful cases, this allows a choice between the modes in the phase probability distributions, and the maps can cross the borderline between non-interpretable and interpretable. Based on the suggestions by Vekhter [Vekhter (2005), Acta Cryst. D 61, 899–902], the impact of identifying optimized phases for a small number of strong reflections prior to the density-modification process was investigated while using the centroid phase as a starting point for the remaining reflections. A genetic algorithm was developed that optimizes the quality of such phases using the skewness of the density map as a target function. Phases optimized in this way are then used in density modification. In most of the tests, the resulting maps were of higher quality than maps generated from the original centroid phases. In one of the test cases, the new method sufficiently improved a marginal set of experimental SAD phases to enable successful map interpretation. Lastly, a computer program, SISA, has been developed to apply this method for phase improvement in macromolecular crystallography.
Global warming precipitation accumulation increases above the current-climate cutoff scale
NASA Astrophysics Data System (ADS)
Neelin, J. David; Sahany, Sandeep; Stechmann, Samuel N.; Bernstein, Diana N.
2017-02-01
Precipitation accumulations, integrated over rainfall events, can be affected by both intensity and duration of the storm event. Thus, although precipitation intensity is widely projected to increase under global warming, a clear framework for predicting accumulation changes has been lacking, despite the importance of accumulations for societal impacts. Theory for changes in the probability density function (pdf) of precipitation accumulations is presented with an evaluation of these changes in global climate model simulations. We show that a simple set of conditions implies roughly exponential increases in the frequency of the very largest accumulations above a physical cutoff scale, increasing with event size. The pdf exhibits an approximately power-law range where probability density drops slowly with each order of magnitude size increase, up to a cutoff at large accumulations that limits the largest events experienced in current climate. The theory predicts that the cutoff scale, controlled by the interplay of moisture convergence variance and precipitation loss, tends to increase under global warming. Thus, precisely the large accumulations above the cutoff that are currently rare will exhibit increases in the warmer climate as this cutoff is extended. This indeed occurs in the full climate model, with a 3 °C end-of-century global-average warming yielding regional increases of hundreds of percent to >1,000% in the probability density of the largest accumulations that have historical precedents. The probabilities of unprecedented accumulations are also consistent with the extension of the cutoff.
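The claimed sensitivity of large-accumulation risk to the cutoff scale can be made concrete with a toy calculation. In the sketch below the pdf is taken as a power law with an exponential cutoff, p(s) ∝ s^(−τ) e^(−s/s_L), matching the qualitative shape described in the abstract; the exponent and cutoff values are illustrative, not the paper's fitted parameters.

```python
import numpy as np

def accumulation_pdf(s, tau=1.5, s_cut=60.0):
    """Unnormalized accumulation pdf with a power-law range and a
    large-event cutoff, the shape described in the abstract. tau and
    s_cut are illustrative values, not fits from the paper."""
    return s ** (-tau) * np.exp(-s / s_cut)

s = 300.0  # an accumulation size well above the current-climate cutoff
ratio = accumulation_pdf(s, s_cut=80.0) / accumulation_pdf(s, s_cut=60.0)
print(f"probability-density increase at s={s:.0f}: {100 * (ratio - 1):.0f}%")
# A modest cutoff extension (60 -> 80) multiplies the density of this
# rare event by ~3.5x, i.e. a several-hundred-percent increase.
```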
Li, Ye; Yu, Lin; Zhang, Yixin
2017-05-29
Applying the angular spectrum theory, we derive the expression for a new Hermite-Gaussian (HG) vortex beam. Based on this beam, we establish a model of the received probability density of orbital angular momentum (OAM) modes for propagation through anisotropic oceanic turbulence. By numerical simulation, we investigate the influence of oceanic turbulence and beam parameters on the received probability density of signal and crosstalk OAM modes of the HG vortex beam. The results show that the influence of anisotropic oceanic turbulence on the received probability of signal OAM modes is smaller than that of isotropic oceanic turbulence under the same conditions, and that the effect of salinity fluctuations on the received probability of signal OAM modes is larger than the effect of temperature fluctuations. Under strong dissipation of kinetic energy per unit mass of fluid and a weak dissipation rate of temperature variance, the effects of turbulence on the received probability of signal OAM modes can be reduced by selecting a long wavelength and a larger transverse size of the HG vortex beam in the source plane. In long-distance propagation, the HG vortex beam is superior to the Laguerre-Gaussian beam in resisting the degradation caused by oceanic turbulence.
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class-conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can compute Bayes-optimal boundaries, and a K-nearest-neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid-crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
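As a concrete illustration of the decision rule these classifiers implement, the sketch below estimates class-conditional likelihoods with a Gaussian Parzen window (the PNN estimator) and assigns the class with the largest posterior score; the bandwidth and the data layout are assumptions for illustration, not details from the paper.

```python
import numpy as np

def pnn_classify(x, X_train, y_train, priors, sigma=0.1):
    """Bayes decision rule with Parzen-window (PNN-style) likelihoods:
    assign x to the class c maximizing priors[c] * p(x | c), where
    p(x | c) is a Gaussian kernel density estimate over class-c samples.
    sigma is the kernel bandwidth (an illustrative tuning choice)."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        sq_dists = np.sum((Xc - x) ** 2, axis=1)
        likelihood = np.mean(np.exp(-sq_dists / (2.0 * sigma ** 2)))
        scores.append(priors[c] * likelihood)
    return classes[int(np.argmax(scores))]
```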
García Lino, Mary Carolina; Cavieres, Lohengrin A; Zotz, Gerhard; Bader, Maaike Y
2017-04-01
The elevational range of the alpine cushion plant Laretia acaulis (Apiaceae) comprises a cold upper extreme and a dry lower extreme. For this species, we predict reduced growth and increased non-structural carbohydrate (NSC) concentrations (i.e. carbon sink limitation) at both elevational extremes. In a facilitative interaction, these cushions harbor other plant species (beneficiaries). Such interactions appear to reduce reproduction in other cushion species, but not in L. acaulis. However, vegetative effects may be more important in this long-lived species and may be stronger under marginal conditions. We studied growth and NSC concentrations in leaves and stems of L. acaulis collected from cushions along its full elevational range in the Andes of Central Chile. NSC concentrations were lowest and cushions were smaller and much less abundant at the highest elevation. At the lowest elevation, NSC concentrations and cushion sizes were similar to those of intermediate elevations but cushions were somewhat less abundant. NSC concentrations and growth did not change with beneficiary cover at any elevation. Lower NSC concentrations at the upper extreme contradict the sink-limitation hypothesis and may indicate that a lack of warmth is not limiting growth at high elevation. At the lower extreme, carbon gain and growth do not appear more limiting than at intermediate elevations. The lower population density at both extremes suggests that the regeneration niche exerts important limitations on this species' distribution. The lack of an effect of beneficiaries on reproduction and vegetative performance suggests that the interaction between L. acaulis and its beneficiaries is probably commensalistic.
Spence, Emma Suzuki; Beck, Jeffrey L; Gregory, Andrew J
2017-01-01
Greater sage-grouse (Centrocercus urophasianus) occupy sagebrush (Artemisia spp.) habitats in 11 western states and 2 Canadian provinces. In September 2015, the U.S. Fish and Wildlife Service announced the listing status for sage-grouse had changed from "warranted but precluded" to "not warranted." The primary reason cited for this change of status was that the enactment of new regulatory mechanisms was sufficient to protect sage-grouse populations. One such plan is the 2008 Wyoming Sage Grouse Executive Order (SGEO), enacted by Governor Freudenthal. The SGEO identifies "Core Areas" that are to be protected by keeping them relatively free from further energy development and limiting other forms of anthropogenic disturbances near active sage-grouse leks. Using the Wyoming Game and Fish Department's sage-grouse lek count database and the Wyoming Oil and Gas Conservation Commission database of oil and gas well locations, we investigated the effectiveness of Wyoming's Core Areas, specifically: 1) how well Core Areas encompass the distribution of sage-grouse in Wyoming, 2) whether Core Area leks have a reduced probability of lek collapse, and 3) what edge effects, if any, intensification of oil and gas development adjacent to Core Areas may be having on Core Area populations. Core Areas contained 77% of male sage-grouse attending leks and 64% of active leks. Using Bayesian binomial probability analysis, we found an average 10.9% probability of lek collapse in Core Areas and an average 20.4% probability of lek collapse outside Core Areas. Using linear regression, we found development density outside Core Areas was related to the probability of lek collapse inside Core Areas. Specifically, probability of collapse among leks >4.83 km inside Core Area boundaries was significantly related to well density within 1.61 km (1 mi) and 4.83 km (3 mi) outside of Core Area boundaries. Collectively, these data suggest that the Wyoming Core Area Strategy has benefited sage-grouse and sage-grouse habitat conservation; however, additional guidelines limiting development densities adjacent to Core Areas may be necessary to effectively protect Core Area populations.
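One plausible reading of the "Bayesian binomial probability analysis" used here: with a Beta prior on the collapse probability and binomial counts of collapsed versus active leks, the posterior is again a Beta distribution. The sketch below assumes a uniform Beta(1, 1) prior and hypothetical counts chosen only to reproduce the order of magnitude of the reported probabilities.

```python
from scipy.stats import beta

def collapse_posterior(n_collapsed, n_leks, a=1.0, b=1.0):
    """Beta posterior for the lek-collapse probability under a Beta(a, b)
    prior and a binomial likelihood (a plausible reading of the paper's
    Bayesian binomial analysis; the counts below are hypothetical)."""
    return beta(a + n_collapsed, b + n_leks - n_collapsed)

inside = collapse_posterior(n_collapsed=22, n_leks=200)  # hypothetical counts
print(inside.mean())          # ~0.11, same order as the reported 10.9%
print(inside.interval(0.95))  # 95% credible interval
```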
Evaluation of an Ensemble Dispersion Calculation.
NASA Astrophysics Data System (ADS)
Draxler, Roland R.
2003-02-01
A Lagrangian transport and dispersion model was modified to generate multiple simulations from a single meteorological dataset. Each member of the simulation was computed by assuming a ±1-gridpoint shift in the horizontal direction and a ±250-m shift in the vertical direction of the particle position, with respect to the meteorological data. The configuration resulted in 27 ensemble members. Each member was assumed to have an equal probability. The model was tested by creating an ensemble of daily average air concentrations for 3 months at 75 measurement locations over the eastern half of the United States during the Across North America Tracer Experiment (ANATEX). Two generic graphical displays were developed to summarize the ensemble prediction and the resulting concentration probabilities for a specific event: a probability-exceed plot and a concentration-probability plot. Although a cumulative distribution of the ensemble probabilities compared favorably with the measurement data, the resulting distribution was not uniform. This result was attributed to release height sensitivity. The trajectory ensemble approach accounts for about 41%-47% of the variance in the measurement data. This residual uncertainty is caused by other model and data errors that are not included in the ensemble design.
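The ensemble construction itself is just the Cartesian product of the three offset choices per axis, as the sketch below makes explicit (the tuple layout is an assumption; the offsets and equal weighting come from the abstract).

```python
from itertools import product

# 3 x 3 x 3 = 27 equally probable members: a -1/0/+1 grid-point shift in
# each horizontal direction and a -250/0/+250 m shift in the vertical,
# applied to the particle position relative to the meteorological data.
offsets = list(product((-1, 0, 1), (-1, 0, 1), (-250.0, 0.0, 250.0)))
assert len(offsets) == 27
member_weight = 1.0 / len(offsets)
```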
optBINS: Optimal Binning for histograms
NASA Astrophysics Data System (ADS)
Knuth, Kevin H.
2018-03-01
optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
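The balance described here can be written down directly. The sketch below follows the log-posterior published for this method (N data points, M bins, bin counts n_k): log p(M | d) = N log M + log Γ(M/2) − M log Γ(1/2) − log Γ(N + M/2) + Σ_k log Γ(n_k + 1/2), up to a constant. Treat it as a reading of the published formula rather than the optBINS source itself.

```python
import numpy as np
from scipy.special import gammaln

def log_posterior_bins(data, m):
    """Unnormalized log posterior for m equal-width bins in Knuth's
    piecewise-constant model (multinomial likelihood, non-informative
    prior), as given in the published derivation."""
    n = len(data)
    counts, _ = np.histogram(data, bins=m)
    return (n * np.log(m) + gammaln(m / 2.0) - m * gammaln(0.5)
            - gammaln(n + m / 2.0) + np.sum(gammaln(counts + 0.5)))

data = np.random.standard_normal(1000)
optimal_m = max(range(1, 101), key=lambda m: log_posterior_bins(data, m))
```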
Sugita, Mitsuro; Weatherbee, Andrew; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-07-01
The probability density function (PDF) of light scattering intensity can be used to characterize the scattering medium. We have recently shown that in optical coherence tomography (OCT), a PDF formalism can be sensitive to the number of scatterers in the probed scattering volume and can be represented by the K-distribution, a functional descriptor for non-Gaussian scattering statistics. Expanding on this initial finding, here we examine polystyrene microsphere phantoms with different sphere sizes and concentrations, and also human skin and fingernail in vivo. It is demonstrated that the K-distribution offers an accurate representation for the measured OCT PDFs. The behavior of the shape parameter of K-distribution that best fits the OCT scattering results is investigated in detail, and the applicability of this methodology for biological tissue characterization is demonstrated and discussed.
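For readers wanting to experiment with K-distributed statistics, intensities with this distribution can be generated through the standard compound representation: exponentially distributed speckle whose local mean is modulated by a gamma-distributed texture with shape parameter alpha (the K-distribution shape parameter). The parameters below are illustrative, not fits to the OCT data.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def k_distributed_intensity(alpha, mean_intensity, size):
    """Sample K-distributed intensities via the compound representation:
    a gamma texture (shape alpha, mean mean_intensity) modulating
    exponential speckle. Small alpha (few effective scatterers) gives
    heavy tails; large alpha approaches the pure-speckle exponential
    limit of fully developed Gaussian statistics."""
    texture = rng.gamma(shape=alpha, scale=mean_intensity / alpha, size=size)
    return rng.exponential(scale=texture)  # element-wise exponential means

sparse = k_distributed_intensity(alpha=1.5, mean_intensity=1.0, size=100_000)
dense = k_distributed_intensity(alpha=50.0, mean_intensity=1.0, size=100_000)
```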