Sample records for linear density estimation

  1. Estimating cosmic velocity fields from density fields and tidal tensors

    NASA Astrophysics Data System (ADS)

    Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan

    2012-10-01

In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor via second-order Lagrangian perturbation theory, based upon an estimate of the linear component of the non-linear density field, significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density but also, and more dramatically, in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and iterative Lagrangian linearization. The present approach relies on a rigorous higher-order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter-free; it is independent of statistical-geometrical optimization; and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h⁻¹ Mpc with closely Gaussian-distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field are extremely well recovered, showing good agreement with the true values from N-body simulations. The typical errors of about 10 km s⁻¹ (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h⁻¹ Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior to the lognormal model in the low-density regime.

  2. Simplified large African carnivore density estimators from track indices.

    PubMed

    Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J

    2016-01-01

The range, population size and trend of large carnivores are important parameters for assessing their status globally and planning conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with an intercept may not pass through zero, but may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities from track indices. We fitted simple linear regressions with intercept and through the origin, and used the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, Mean Squares Residual and Akaike Information Criterion to evaluate the models. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, so the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that the linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results justify linear regression through the origin over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate them and data to test for a non-linear relationship between track indices and true density at low densities.
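The model comparison described above can be sketched in a few lines: fit a simple linear regression with intercept and one through the origin, then compare residual error and AIC. This is a minimal illustration on synthetic data; the 3.26 slope is taken from the abstract, while the data points and noise level are invented.

```python
import math
import random

# Synthetic track-index data: track density ~ 3.26 * carnivore density
random.seed(1)
slope_true = 3.26                                   # from the abstract's general model
x = [0.3 + 0.2 * i for i in range(20)]              # carnivore density (per 100 km^2)
y = [slope_true * xi + random.gauss(0, 0.3) for xi in x]  # observed track density

n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))

# Model 1: y = a*x + b (with intercept), ordinary least squares
a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b1 = (sy - a1 * sx) / n

# Model 2: y = a*x (through the origin); slope = sum(xy) / sum(x^2)
a2 = sxy / sxx

sse1 = sum((yi - (a1 * xi + b1)) ** 2 for xi, yi in zip(x, y))
sse2 = sum((yi - a2 * xi) ** 2 for xi, yi in zip(x, y))

# Least-squares AIC: n*log(SSE/n) + 2k, k = number of fitted parameters
aic1 = n * math.log(sse1 / n) + 2 * 2
aic2 = n * math.log(sse2 / n) + 2 * 1
```

The intercept model always attains an equal or smaller SSE (it has an extra free parameter), so AIC's complexity penalty is what can favour the origin model, mirroring the comparison reported in the abstract.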

  3. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

Time-series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, surface and bottom reflections, and a sediment reflection. In this work, a Gibbs sampling method is used to estimate the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, to estimate source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.

  4. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

With the fast development of remote sensing technology, combining forest inventory sample plot data with remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low spatial resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraint, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. Then a sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was employed to estimate the forest carbon density of Hunan Province. Results showed that 1) linear spectral mixture analysis with constraint, leading to a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model and the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density to 81.5% from 74.1%, and decreased the RMSE to 5.18 from 7.26; and 3) the mean forest carbon density for the province was 30.06 t·hm⁻², ranging from 0.00 to 67.35 t·hm⁻². This implies that spectral mixture analysis has great potential to increase the estimation accuracy of forest carbon density at regional and global levels.
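The constrained linear unmixing idea above has a closed form in the simplest two-endmember case: with the sum-to-one constraint the problem reduces to one free fraction. This sketch uses invented endmember spectra (the names and reflectance values are illustrative, not from the study):

```python
# Fully constrained linear spectral unmixing for two endmembers:
# pixel ~= f1*e1 + f2*e2 with f1 + f2 = 1 and 0 <= f1 <= 1.
# Substituting f2 = 1 - f1 gives a one-parameter least-squares problem
# with the closed-form solution below.

def unmix_two_endmembers(pixel, e1, e2):
    """Return fractions (f1, f2), f1 + f2 = 1, minimizing the squared
    residual of pixel ~= f1*e1 + f2*e2."""
    d = [a - b for a, b in zip(e1, e2)]        # e1 - e2
    r = [p - b for p, b in zip(pixel, e2)]     # pixel - e2
    f1 = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)
    f1 = min(1.0, max(0.0, f1))                # enforce non-negativity by clipping
    return f1, 1.0 - f1

forest = [0.05, 0.08, 0.45]   # hypothetical reflectance in 3 bands
bare   = [0.20, 0.25, 0.30]
mixed  = [0.65 * a + 0.35 * b for a, b in zip(forest, bare)]  # a 65% forest pixel

f_forest, f_bare = unmix_two_endmembers(mixed, forest, bare)
```

With more endmembers the same constrained least-squares structure applies, solved with a quadratic-programming routine rather than in closed form.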

  5. A log-linear model approach to estimation of population size using the line-transect sampling method

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1978-01-01

    The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
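The basic line-transect density estimator that frameworks like this build on is D = n·f(0)/(2L), where f(0) is the probability density of perpendicular detection distances at zero. The sketch below uses a half-normal detection function as a stand-in (a standard textbook choice, not the paper's log-linear model); the transect length and detection scale are invented.

```python
import math
import random

# Line-transect density estimate with a half-normal detection function
# g(x) = exp(-x^2 / (2 s^2)). Then f(0) = 1 / (s * sqrt(pi/2)), and the
# MLE of s^2 is the mean squared perpendicular distance.
random.seed(7)
s_true, L, n = 10.0, 1000.0, 400   # detection scale (m), transect length (m), detections

# Simulate perpendicular distances consistent with the half-normal model
dists = [abs(random.gauss(0, s_true)) for _ in range(n)]

s2_hat = sum(x * x for x in dists) / n              # MLE of s^2
f0 = 1.0 / math.sqrt(s2_hat * math.pi / 2.0)        # estimated f(0)
density = n * f0 / (2.0 * L)                        # animals per square metre
```

A log-linear model as in the paper replaces this parametric f(0) with one estimated from a log-linear fit to grouped distance data, but the final density formula has the same shape.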

  6. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    NASA Astrophysics Data System (ADS)

    Sorini, D.

    2017-04-01

Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of the data. In particular, signals coming from different distances encode information from different epochs. This is known as the "light-cone effect" and will have a greater impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994), I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking the light-cone effect into account. An analytic expression for the estimator is provided, which is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters. I show that the estimator presented recovers the fiducial linear power spectrum at the present time within 5% accuracy up to k ~ 0.80 h Mpc⁻¹ and within 10% up to k ~ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of data from future large-scale surveys, like Euclid.

  7. The influence of linear elements on plant species diversity of Mediterranean rural landscapes: assessment of different indices and statistical approaches.

    PubMed

    García del Barrio, J M; Ortega, M; Vázquez De la Cueva, A; Elena-Rosselló, R

    2006-08-01

This paper mainly aims to study the influence of linear elements on the estimation of vascular plant species diversity in five Mediterranean landscapes modeled as land cover patch mosaics. These landscapes have several core habitats and a different set of linear elements (habitat edges or ecotones, roads or railways, rivers, streams and hedgerows on farm land) whose plant composition was examined. Secondly, it aims to check plant diversity estimation in Mediterranean landscapes using parametric and non-parametric procedures, with two indices: species richness and the Shannon index. Land cover types and landscape linear elements were identified from aerial photographs, and their spatial information was processed using GIS techniques. Field plots were selected using a stratified sampling design according to the relief and tree density of each habitat type. A 50 × 20 m² multi-scale sampling plot was designed for the core habitats and across the main landscape linear elements. Richness and diversity of plant species were estimated by comparing the observed field data to the ICE (Incidence-based Coverage Estimator) and ACE (Abundance-based Coverage Estimator) non-parametric estimators. The species density, percentage of unique species, and alpha diversity per plot were significantly higher (p < 0.05) in linear elements than in core habitats. The ICE estimate of the number of species was 32% higher than the ACE estimate, which did not differ significantly from the observed values. Accumulated species richness in core habitats together with linear elements was significantly higher than that recorded only in the core habitats in all the landscapes. Conversely, the Shannon diversity index did not show significant differences.
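The two diversity measures compared above are quick to compute from abundance counts: richness is the number of species observed, and the Shannon index is H' = -Σ pᵢ ln pᵢ. A toy plot with hypothetical species names:

```python
import math
from collections import Counter

def shannon(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i) from abundance counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical records from one sampling plot
plot = ["quercus", "cistus", "quercus", "rosmarinus", "cistus", "quercus"]
abundance = Counter(plot)

richness = len(abundance)           # number of distinct species observed
H = shannon(abundance.values())     # Shannon index (natural log)
```

Non-parametric estimators such as ICE and ACE then correct this observed richness upward for undetected species using the frequencies of rare species.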

  8. An optimally weighted estimator of the linear power spectrum disentangling the growth of density perturbations across galaxy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorini, D., E-mail: sorini@mpia-hd.mpg.de

    2017-04-01

Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of the data. In particular, signals coming from different distances encode information from different epochs. This is known as the "light-cone effect" and will have a greater impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking the light-cone effect into account. An analytic expression for the estimator is provided, which is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at the present time within 5% accuracy up to k ~ 0.80 h Mpc⁻¹ and within 10% up to k ~ 0.94 h Mpc⁻¹, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of data from future large-scale surveys, like Euclid.

  9. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
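For the special case of two classes with a shared covariance matrix, the linear combination minimizing the one-dimensional misclassification probability is the Fisher direction w = Σ⁻¹(μ₁ − μ₂), and with equal priors the resulting error is Φ(−Δ/2), where Δ is the Mahalanobis distance between the means. A 2-D numerical check (the means and covariance are invented for illustration):

```python
import math

# Two classes, equal covariance: compute the Fisher projection direction
# and the one-dimensional Bayes misclassification probability.
mu1, mu2 = (0.0, 0.0), (2.0, 1.0)
sigma = [[1.0, 0.3], [0.3, 1.0]]            # shared covariance matrix

# Invert the 2x2 covariance by hand
det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
inv = [[sigma[1][1] / det, -sigma[0][1] / det],
       [-sigma[1][0] / det, sigma[0][0] / det]]

d = (mu1[0] - mu2[0], mu1[1] - mu2[1])      # mean difference
w = (inv[0][0] * d[0] + inv[0][1] * d[1],
     inv[1][0] * d[0] + inv[1][1] * d[1])   # Fisher direction Sigma^{-1} d

# Mahalanobis distance Delta^2 = d^T Sigma^{-1} d = w . d
delta = math.sqrt(w[0] * d[0] + w[1] * d[1])

# 1-D Bayes error with equal priors: Phi(-Delta/2) = 0.5 * erfc(Delta / (2*sqrt(2)))
p_err = 0.5 * math.erfc(delta / (2.0 * math.sqrt(2.0)))
```

With unequal covariances (the general m-class setting of the program above), no such closed form exists and the projection must be found by numerical minimization of the transformed-density error, which is what LFSPMC does.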

  10. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. In general, MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case where the rate is a random variable with a probability density function of the form cx^k(1-x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  11. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
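The linearity result has a familiar conjugate-prior form: with a Beta(a, b) prior on the rate p and a Binomial(n, p) observation k, the MMSE estimate (the posterior mean) is E[p | k] = (a + k)/(a + b + n), an affine function of the observed count. A quick numerical check with invented prior parameters:

```python
# MMSE estimate of a Beta-distributed rate from a binomial observation.
# The posterior of p given k successes out of n is Beta(a + k, b + n - k),
# whose mean is (a + k) / (a + b + n): linear in the observation k.

def mmse_rate(k, n, a, b):
    """Posterior mean E[p | k] under a Beta(a, b) prior."""
    return (a + k) / (a + b + n)

a, b, n = 2.0, 3.0, 10
estimates = [mmse_rate(k, n, a, b) for k in range(n + 1)]

# Consecutive differences are constant, confirming linearity in k
diffs = [estimates[i + 1] - estimates[i] for i in range(n)]
```

Each additional observed jump shifts the estimate by the constant 1/(a + b + n), which is why the optimal and best-linear estimators coincide here.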

  12. Quantifying Mold Biomass on Gypsum Board: Comparison of Ergosterol and Beta-N-Acetylhexosaminidase as Mold Biomass Parameters

    PubMed Central

    Reeslev, M.; Miller, M.; Nielsen, K. F.

    2003-01-01

    Two mold species, Stachybotrys chartarum and Aspergillus versicolor, were inoculated onto agar overlaid with cellophane, allowing determination of a direct measurement of biomass density by weighing. Biomass density, ergosterol content, and beta-N-acetylhexosaminidase (3.2.1.52) activity were monitored from inoculation to stationary phase. Regression analysis showed a good linear correlation to biomass density for both ergosterol content and beta-N-acetylhexosaminidase activity. The same two mold species were inoculated onto wallpapered gypsum board, from which a direct biomass measurement was not possible. Growth was measured as an increase in ergosterol content and beta-N-acetylhexosaminidase activity. A good linear correlation was seen between ergosterol content and beta-N-acetylhexosaminidase activity. From the experiments performed on agar medium, conversion factors (CFs) for estimating biomass density from ergosterol content and beta-N-acetylhexosaminidase activity were determined. The CFs were used to estimate the biomass density of the molds grown on gypsum board. The biomass densities estimated from ergosterol content and beta-N-acetylhexosaminidase activity data gave similar results, showing significantly slower growth and lower stationary-phase biomass density on gypsum board than on agar. PMID:12839773
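The conversion-factor (CF) approach above amounts to a regression through the origin on the medium where biomass can be weighed, then inverting the fit where it cannot. A minimal sketch with invented measurements (units and values are illustrative only):

```python
# On agar, biomass is weighed directly, so fit ergosterol ~= CF * biomass
# through the origin. On gypsum board, only ergosterol is measurable, so
# biomass is estimated as ergosterol / CF.

biomass = [1.0, 2.0, 4.0, 8.0]          # mg/cm^2, weighed on agar (hypothetical)
ergosterol = [5.1, 9.8, 20.3, 40.2]     # ug/cm^2, measured (hypothetical)

# Least-squares slope through the origin: CF = sum(x*y) / sum(x*x)
cf = sum(b * e for b, e in zip(biomass, ergosterol)) / sum(b * b for b in biomass)

# Gypsum-board sample where direct weighing is impossible
ergosterol_board = 12.0
biomass_estimate = ergosterol_board / cf
```

The same inversion applies to the enzyme-activity CF; the study's cross-check is that both proxies should give similar biomass estimates, as they did.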

  13. Using nonlinear quantile regression to estimate the self-thinning boundary curve

    Treesearch

    Quang V. Cao; Thomas J. Dean

    2015-01-01

    The relationship between tree size (quadratic mean diameter) and tree density (number of trees per unit area) has been a topic of research and discussion for many decades. Starting with Reineke in 1933, the maximum size-density relationship, on a log-log scale, has been assumed to be linear. Several techniques, including linear quantile regression, have been employed...
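The boundary-line idea can be illustrated with a crude stand-in for quantile regression: fit an ordinary least-squares line to the log-log data, then shift its intercept to a high quantile of the residuals so the line runs along the upper edge of the point cloud. (The paper uses proper nonlinear quantile regression; this shortcut, on simulated data with an invented Reineke-like slope, only illustrates the goal.)

```python
import math
import random

# Simulated log(diameter) vs log(density) data with slope near -0.62
random.seed(3)
logN = [math.log(500 + 200 * i) for i in range(40)]             # log trees/ha
logD = [4.0 - 0.62 * x + random.gauss(0, 0.1) for x in logN]    # log mean diameter

n = len(logN)
sx, sy = sum(logN), sum(logD)
sxx = sum(x * x for x in logN)
sxy = sum(x * y for x, y in zip(logN, logD))

# Ordinary least-squares fit
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Shift the intercept to the 0.95 quantile of the residuals:
# the line now bounds ~95% of the observations from above
resid = [y - (slope * x + intercept) for x, y in zip(logN, logD)]
q95 = sorted(resid)[int(0.95 * n)]
boundary_intercept = intercept + q95

coverage = sum(1 for x, y in zip(logN, logD)
               if y <= slope * x + boundary_intercept) / n
```

True quantile regression estimates slope and intercept jointly by minimizing the asymmetric "pinball" loss, so the boundary slope need not equal the mean-trend slope, which is precisely why it is preferred for self-thinning analysis.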

  14. Scalable population estimates using spatial-stream-network (SSN) models, fish density surveys, and national geospatial database frameworks for streams

    Treesearch

    Daniel J. Isaak; Jay M. Ver Hoef; Erin E. Peterson; Dona L. Horan; David E. Nagel

    2017-01-01

    Population size estimates for stream fishes are important for conservation and management, but sampling costs limit the extent of most estimates to small portions of river networks that encompass 100s–10 000s of linear kilometres. However, the advent of large fish density data sets, spatial-stream-network (SSN) models that benefit from nonindependence among samples,...

  15. Local existence of solutions to the Euler-Poisson system, including densities without compact support

    NASA Astrophysics Data System (ADS)

    Brauer, Uwe; Karp, Lavi

    2018-01-01

Local existence and well-posedness for a class of solutions of the Euler-Poisson system are shown. These solutions have a density ρ which either falls off at infinity or has compact support. The solutions have finite mass and finite energy functional, and include the static spherical solutions for γ = 6/5. The result is achieved by using weighted Sobolev spaces of fractional order and a new non-linear estimate which allows one to estimate the physical density by the regularised non-linear matter variable. Gamblin has also studied this setting, but using very different functional spaces. However, we believe that the functional setting we use is more appropriate for describing a physically isolated body and more suitable for studying the Newtonian limit.

  16. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

Mixed pixels and shadows cast by buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, resulting in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow remove analysis (LSRA) to remotely sensed Landsat 8 images to reduce the shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City of Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that contributed statistically significantly to improving the fit of the models and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from the LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models than from traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images

  17. Forecasting outbreaks of the Douglas-fir tussock moth from lower crown cocoon samples.

    Treesearch

    Richard R. Mason; Donald W. Scott; H. Gene Paul

    1993-01-01

    A predictive technique using a simple linear regression was developed to forecast the midcrown density of small tussock moth larvae from estimates of cocoon density in the previous generation. The regression estimator was derived from field samples of cocoons and larvae taken from a wide range of nonoutbreak tussock moth populations. The accuracy of the predictions was...

  18. Estimating the population size and colony boundary of subterranean termites by using the density functions of directionally averaged capture probability.

    PubMed

    Su, Nan-Yao; Lee, Sang-Hee

    2008-04-01

Marked termites were released in a linear-connected foraging arena, and the spatial heterogeneity of their capture probabilities was averaged for both directions at distance r from the release point to obtain a symmetrical distribution, from which the density function of directionally averaged capture probability P(x) was derived. We hypothesized that as marked termites move into the population and given sufficient time, the directionally averaged capture probability may reach an equilibrium P(e) over the distance r and thus satisfy the equal mixing assumption of the mark-recapture protocol. The equilibrium capture probability P(e) was used to estimate the population size N. The hypothesis was tested in a 50-m extended foraging arena to simulate the distance factor of field colonies of subterranean termites. Over the 42-d test period, the density functions of directionally averaged capture probability P(x) exhibited four phases: exponential decline phase, linear decline phase, equilibrium phase, and postequilibrium phase. The equilibrium capture probability P(e), derived as the intercept of the linear regression during the equilibrium phase, correctly projected N estimates that were not significantly different from the known number of workers in the arena. Because the area beneath the probability density function is a constant (50% in this study), preequilibrium regression parameters and P(e) were used to estimate the population boundary distance l, which is the distance between the release point and the boundary beyond which the population is absent.
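The equal-mixing logic that the equilibrium phase is meant to satisfy is the classic mark-recapture argument: once marks are fully mixed, the capture probability estimated from the marked fraction carries over to the whole population. A minimal numerical sketch with invented counts (this is the Lincoln-Petersen form of the argument, not the paper's regression-based estimator for P(e)):

```python
# Mark-recapture under the equal-mixing assumption:
# the recapture fraction of marked individuals estimates the capture
# probability, and total captures divided by that probability estimate
# the population size.

M = 1000   # marked termites released (hypothetical)
C = 800    # total termites captured after mixing (hypothetical)
R = 40     # marked termites among the captures (hypothetical)

P_e = R / M          # estimated equilibrium capture probability
N_hat = C / P_e      # population size estimate (= C * M / R)
```

The paper's refinement is in how P(e) is obtained: rather than assuming mixing, it is read off as the regression intercept during the observed equilibrium phase of P(x), so the estimator is only applied once the assumption demonstrably holds.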

  19. Hölder Regularity of the 2D Dual Semigeostrophic Equations via Analysis of Linearized Monge-Ampère Equations

    NASA Astrophysics Data System (ADS)

    Le, Nam Q.

    2018-05-01

    We obtain the Hölder regularity of time derivative of solutions to the dual semigeostrophic equations in two dimensions when the initial potential density is bounded away from zero and infinity. Our main tool is an interior Hölder estimate in two dimensions for an inhomogeneous linearized Monge-Ampère equation with right hand side being the divergence of a bounded vector field. As a further application of our Hölder estimate, we prove the Hölder regularity of the polar factorization for time-dependent maps in two dimensions with densities bounded away from zero and infinity. Our applications improve previous work by G. Loeper who considered the cases of densities sufficiently close to a positive constant.

  20. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear-response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly, and they take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly; they only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing its eigenvalues. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods based on the exact diagonalization of the linear response matrix, and that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
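The matrix-free flavour of such methods can be shown in miniature with Hutchinson's stochastic trace estimator: E[zᵀHz] = tr(H) for random ±1 vectors z, so the first moment of the spectrum is obtained from matrix-vector products alone. The 3×3 matrix here is a toy stand-in for the linear response matrix; the methods in the paper reconstruct far more of the spectrum from the same matvec-only access.

```python
import random

# Estimate tr(H) using only a matvec routine and random +/-1 probe vectors.
random.seed(11)
H = [[2.0, 0.5, 0.0],
     [0.5, 1.0, 0.3],
     [0.0, 0.3, 3.0]]       # small symmetric toy matrix; true trace = 6.0
n = len(H)

def matvec(v):
    """The only access to H the estimator needs."""
    return [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]

samples = 4000
acc = 0.0
for _ in range(samples):
    z = [random.choice((-1.0, 1.0)) for _ in range(n)]   # Rademacher probe
    Hz = matvec(z)
    acc += sum(zi * hi for zi, hi in zip(z, Hz))         # z^T H z

trace_estimate = acc / samples
```

Higher spectral moments tr(Hᵏ) follow from repeated matvecs on the same probes, which is the basis of moment- and Lanczos-type DOS estimators that never form or diagonalize H.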

  1. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.

    2016-11-15

A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and the Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  2. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
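The entropy-maximization idea can be sketched in one dimension: train a single sigmoid y = s(wx + b) by gradient ascent on the mean of log|dy/dx| = log w + log y + log(1 − y). At the optimum y approximates the data CDF, so dy/dx = w·y(1 − y) estimates the density. The data, step size, and iteration count below are toy choices, and a single sigmoid is a deliberately crude network.

```python
import math
import random

# 1-D output-entropy maximization (Infomax-style) for density estimation.
random.seed(5)
data = [random.gauss(0.0, 1.0) for _ in range(2000)]   # standard-normal samples

def sig(t):
    return 1.0 / (1.0 + math.exp(-t))

w, b, lr = 1.0, 0.0, 0.05
for _ in range(300):
    gw = gb = 0.0
    for x in data:
        y = sig(w * x + b)
        gw += 1.0 / w + x * (1.0 - 2.0 * y)   # d/dw of log w + log y + log(1-y)
        gb += 1.0 - 2.0 * y                   # d/db of the same objective
    w += lr * gw / len(data)                  # full-batch gradient ascent
    b += lr * gb / len(data)

# Density estimate at x = 0: p(0) ~= dy/dx|_0 = w * y0 * (1 - y0).
# The true N(0,1) value is 1/sqrt(2*pi) ~ 0.399; the single-sigmoid
# (logistic) fit lands somewhat above it.
y0 = sig(b)
density_at_0 = w * y0 * (1.0 - y0)
```

This is exactly maximum-likelihood fitting of a logistic density with scale 1/w, which is why a single unit is biased; the paper's multidimensional networks compose many sigmoids to shape richer densities.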

  3. Retrieval of Spatio-temporal Distributions of Particle Parameters from Multiwavelength Lidar Measurements Using the Linear Estimation Technique and Comparison with AERONET

    NASA Technical Reports Server (NTRS)

    Veselovskii, I.; Whiteman, D. N.; Korenskiy, M.; Kolgotin, A.; Dubovik, O.; Perez-Ramirez, D.; Suvorina, A.

    2013-01-01

The results of the application of the linear estimation technique to multiwavelength Raman lidar measurements performed during the summer of 2011 in Greenbelt, MD, USA, are presented. We demonstrate that multiwavelength lidars are capable not only of providing vertical profiles of particle properties but also of revealing the spatio-temporal evolution of aerosol features. The nighttime 3β + 1α lidar measurements on 21 and 22 July were inverted to spatio-temporal distributions of particle microphysical parameters, such as volume, number density, effective radius and the complex refractive index. The particle volume and number density show strong variation during the night, while the effective radius remains approximately constant. The real part of the refractive index demonstrates a slight decreasing tendency in a region of enhanced extinction coefficient. The linear estimation retrievals are stable and provide time series of particle parameters as a function of height at 4 min resolution. AERONET observations are compared with multiwavelength lidar retrievals showing good agreement.

  4. Nonlinear Statistical Estimation with Numerical Maximum Likelihood

    DTIC Science & Technology

    1974-10-01


  5. Direct estimations of linear and nonlinear functionals of a quantum state.

    PubMed

    Ekert, Artur K; Alves, Carolina Moura; Oi, Daniel K L; Horodecki, Michał; Horodecki, Paweł; Kwek, L C

    2002-05-27

    We present a simple quantum network, based on the controlled-SWAP gate, that can extract certain properties of quantum states without recourse to quantum tomography. It can be used as a basic building block for direct quantum estimations of both linear and nonlinear functionals of any density operator. The network has many potential applications ranging from purity tests and eigenvalue estimations to direct characterization of some properties of quantum channels. Experimental realizations of the proposed network are within the reach of quantum technology that is currently being developed.
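The identity behind the controlled-SWAP network can be checked classically: for two copies of a state ρ, the ancilla measures P(0) = (1 + Tr(ρ²))/2, so the network reads off the purity without tomography. Here Tr(ρ²) is simply computed directly for a pure and a maximally mixed qubit (a verification of the formula, not a simulation of the circuit):

```python
# Purity Tr(rho^2) and the controlled-SWAP ancilla statistic
# P(0) = (1 + Tr(rho^2)) / 2 for two example single-qubit states.

def purity(rho):
    """Tr(rho^2) = sum_ij rho_ij * rho_ji; works for complex entries too."""
    n = len(rho)
    return sum(rho[i][j] * rho[j][i] for i in range(n) for j in range(n)).real

pure  = [[1.0, 0.0], [0.0, 0.0]]    # |0><0|, purity 1
mixed = [[0.5, 0.0], [0.0, 0.5]]    # I/2, purity 1/2

p0_pure  = (1 + purity(pure)) / 2
p0_mixed = (1 + purity(mixed)) / 2
```

In the network itself, ρ is never written down; the two copies enter the gate and the purity is inferred from the measured ancilla frequency, which is the point of the direct-estimation scheme.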

  6. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach.

    PubMed

    Demidenko, Eugene

    2017-09-01

    The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
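The Fieller (1932) connection can be checked numerically: the density of a ratio of two independent normals reduces to a one-dimensional integral. A sketch under assumed (hypothetical) parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, mu_y, s_x, s_y = 1.0, 3.0, 0.5, 0.5   # hypothetical means and sds

def phi(t, mu, s):
    # Normal density.
    return np.exp(-0.5 * ((t - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def ratio_density(z):
    # Density of Z = X/Y for independent normals:
    # f_Z(z) = integral of |y| * phi_X(z*y) * phi_Y(y) dy (numerical quadrature).
    y = np.linspace(mu_y - 8 * s_y, mu_y + 8 * s_y, 4001)
    integrand = np.abs(y) * phi(z * y, mu_x, s_x) * phi(y, mu_y, s_y)
    return integrand.sum() * (y[1] - y[0])

# Monte Carlo sanity check: P(Z <= mu_x/mu_y) should be close to 1/2 here,
# since X - Y*(mu_x/mu_y) has mean zero and Y is almost surely positive.
z_samples = rng.normal(mu_x, s_x, 200_000) / rng.normal(mu_y, s_y, 200_000)
grid = np.linspace(-1.0, mu_x / mu_y, 600)
cdf_quad = sum(ratio_density(g) for g in grid[:-1]) * (grid[1] - grid[0])
cdf_mc = (z_samples <= mu_x / mu_y).mean()
```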

  7. Gradient-based stochastic estimation of the density matrix

    NASA Astrophysics Data System (ADS)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_ij decay rapidly with the distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, the density matrix instead decays algebraically, which poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales as S^(-(d+2)/(2d)), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
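The paper's gradient-based scheme is not reproduced here, but the underlying probing idea can be sketched with plain stochastic probing, which recovers f(H) from random ±1 vectors since E[(f(H)z)zᵀ] = f(H). The tight-binding Hamiltonian and temperature below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy 1-D tight-binding Hamiltonian and its Fermi function at finite T.
n = 60
H = -np.eye(n, k=1) - np.eye(n, k=-1)
E, V = np.linalg.eigh(H)
fH = V @ np.diag(1.0 / (1.0 + np.exp(E / 0.5))) @ V.T   # density matrix f(H)

# Stochastic probing: E[(f(H) z) z^T] = f(H) for random +/-1 vectors z,
# because E[z z^T] = I.
S = 2000
est = np.zeros_like(fH)
for _ in range(S):
    z = rng.choice([-1.0, 1.0], size=n)
    est += np.outer(fH @ z, z)
est /= S
err = np.abs(est - fH).max()
```

At finite temperature the elements of f(H) decay quickly, which is what keeps the variance of each probed element small.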

  8. Model Parameterization and P-wave AVA Direct Inversion for Young's Impedance

    NASA Astrophysics Data System (ADS)

    Zong, Zhaoyun; Yin, Xingyao

    2017-05-01

    AVA inversion is an important tool for elastic parameter estimation to guide the lithology prediction and "sweet spot" identification of hydrocarbon reservoirs. The product of the Young's modulus and density (termed Young's impedance in this study) is known as an effective lithology and brittleness indicator of unconventional hydrocarbon reservoirs. Density is difficult to predict from seismic data, which renders the estimation of the Young's impedance inaccurate in conventional approaches. In this study, a pragmatic seismic AVA inversion approach with only P-wave pre-stack seismic data is proposed to estimate the Young's impedance to avoid the uncertainty brought by density. First, based on the linearized P-wave approximate reflectivity equation in terms of P-wave and S-wave moduli, the P-wave approximate reflectivity equation in terms of the Young's impedance is derived according to the relationship between P-wave modulus, S-wave modulus, Young's modulus and Poisson ratio. This equation is further compared to the exact Zoeppritz equation and the linearized P-wave approximate reflectivity equation in terms of P- and S-wave velocities and density, which illustrates that this equation is accurate enough to be used for AVA inversion when the incident angle is within the critical angle. Parameter sensitivity analysis illustrates that the high correlation between the Young's impedance and density renders the estimation of the Young's impedance difficult. Therefore, a de-correlation scheme is used in the pragmatic AVA inversion with Bayesian inference to estimate the Young's impedance only with pre-stack P-wave seismic data. Synthetic examples demonstrate that the proposed approach is able to predict the Young's impedance stably even with moderate noise, and the field data examples verify the effectiveness of the proposed approach in Young's impedance estimation and "sweet spots" evaluation.

  9. Estimating Evapotranspiration Of Orange Orchards Using Surface Renewal And Remote Sensing Techniques

    NASA Astrophysics Data System (ADS)

    Consoli, S.; Russo, A.; Snyder, R.

    2006-08-01

    Surface renewal (SR) analysis was utilized to calculate sensible heat flux density (H) from high frequency temperature measurements above orange orchard canopies during 2005 in eastern Sicily (Italy). The H values were employed to estimate latent heat flux density (LE) using measured net radiation (Rn) and soil heat flux density (G) in the energy balance (EB) equation. Crop coefficients were determined by calculating the ratio Kc = ETa/ETo, with reference ETo derived from the daily Penman-Monteith equation. The estimated daily Kc values showed an average of about 0.75 for canopy covers having about 70% ground shading and 80% of PAR light interception. Remote sensing estimates of Kc and ET fluxes were compared with those measured by SR-EB. IKONOS satellite estimates of Kc and NDVI were linearly correlated for the orchard stands.
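The energy-balance bookkeeping in the SR-EB approach reduces to LE = Rn − G − H, with the crop coefficient taken as a ratio of ET totals. A toy calculation with illustrative numbers (chosen to reproduce the reported Kc of about 0.75, not taken from the study):

```python
# Hypothetical flux densities in W m^-2 (illustrative values only).
Rn, G, H = 600.0, 50.0, 150.0
LE = Rn - G - H      # latent heat flux from energy-balance closure

# Crop coefficient from daily ET totals in mm/day (illustrative).
ETa, ETo = 3.6, 4.8
Kc = ETa / ETo       # = 0.75
```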

  10. Estimation of percentage breast tissue density: comparison between digital mammography (2D full field digital mammography) and digital breast tomosynthesis according to different BI-RADS categories.

    PubMed

    Tagliafico, A S; Tagliafico, G; Cavagnetto, F; Calabrese, M; Houssami, N

    2013-11-01

    To compare breast density estimated from two-dimensional full-field digital mammography (2D FFDM) and from digital breast tomosynthesis (DBT) according to different Breast Imaging-Reporting and Data System (BI-RADS) categories, using automated software. Institutional review board approval and written informed patient consent were obtained. DBT and 2D FFDM were performed in the same patients to allow within-patient comparison. A total of 160 consecutive patients (mean age: 50±14 years; mean body mass index: 22±3) were included to create paired data sets of 40 patients for each BI-RADS category. Automatic software (MedDensity©, developed by Giulio Tagliafico) was used to compare the percentage breast density between DBT and 2D FFDM. The estimated breast percentage density obtained using DBT and 2D FFDM was examined for correlation with the radiologists' visual BI-RADS density classification. The 2D FFDM differed from DBT by 16.0% in BI-RADS Category 1, by 11.9% in Category 2, by 3.5% in Category 3 and by 18.1% in Category 4. These differences were highly significant (p<0.0001). There was a good correlation between the BI-RADS categories and the density evaluated using 2D FFDM and DBT (r=0.56, p<0.01 and r=0.48, p<0.01, respectively). Using DBT, breast density values were lower than those obtained using 2D FFDM, with a non-linear relationship across the BI-RADS categories. These data are relevant for clinical practice and research studies using density in determining the risk.

  11. An Application of the H-Function to Curve-Fitting and Density Estimation.

    DTIC Science & Technology

    1983-12-01

    equations into a model that is linear in its coefficients. Nonlinear least squares estimation is a relatively new area developed to accommodate models which...to converge on a solution (10:9-10). For the simple linear model and when general assumptions are made, the Gauss-Markov theorem states that the...distribution. For example, if the analyst wants to model the time between arrivals to a queue for a computer simulation, he infers the true probability

  12. Density-dependent host choice by disease vectors: epidemiological implications of the ideal free distribution.

    PubMed

    Basáñez, María-Gloria; Razali, Karina; Renz, Alfons; Kelly, David

    2007-03-01

    The proportion of vector blood meals taken on humans (the human blood index, h) appears as a squared term in classical expressions of the basic reproduction ratio (R(0)) for vector-borne infections. Consequently, R(0) varies non-linearly with h. Estimates of h, however, constitute mere snapshots of a parameter that is predicted, from evolutionary theory, to vary with vector and host abundance. We test this prediction using a population dynamics model of river blindness assuming that, before initiation of vector control or chemotherapy, recorded measures of vector density and human infection accurately represent endemic equilibrium. We obtain values of h that satisfy the condition that the effective reproduction ratio (R(e)) must equal 1 at equilibrium. Values of h thus obtained decrease with vector density, decrease with the vector:human ratio and make R(0) respond non-linearly rather than increase linearly with vector density. We conclude that if vectors are less able to obtain human blood meals as their density increases, antivectorial measures may not lead to proportional reductions in R(0) until very low vector levels are achieved. Density dependence in the contact rate of infectious diseases transmitted by insects may be an important non-linear process with implications for their epidemiology and control.
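The qualitative claim, that a human blood index h adjusting downward with vector density keeps R0 from rising linearly, can be sketched with a deliberately simplified toy model (R0 = c·m·h², which is not the paper's onchocerciasis model; c and s are made-up constants):

```python
import numpy as np

# Illustrative toy model: R0 = c * m * h^2, with m the vector:human ratio
# and h the human blood index. Imposing Re = s * R0 = 1 at equilibrium
# (s a fixed saturation factor) and solving for h:
c, s = 5.0, 0.8
m = np.linspace(1.0, 50.0, 200)      # vector:human ratio
h = np.sqrt(1.0 / (c * s * m))       # the h that satisfies Re = 1
R0 = c * m * h ** 2                  # = 1/s by construction

# h decreases with vector density, so R0 stays flat rather than
# growing linearly with m, as the abstract argues.
```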

  13. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests subject to a high level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. Our evaluation suggests that reducing LiDAR point density is a viable solution for regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.
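The regression machinery involved is ordinary multiple linear regression with adjusted R² as the reported fit statistic. A self-contained sketch on synthetic data (the predictor variables are stand-ins, not the study's actual LiDAR metrics):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-level data: LiDAR-style metrics as biomass predictors
# (illustrative variables, not the study's).
n = 120
X = np.column_stack([
    rng.uniform(5, 30, n),    # mean return height (m)
    rng.uniform(0, 1, n),     # canopy cover fraction
])
biomass = 4.0 * X[:, 0] + 60.0 * X[:, 1] + rng.normal(0, 5, n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, biomass, rcond=None)
resid = biomass - A @ coef

# Adjusted R^2, penalizing the number of predictors p.
ss_res = resid @ resid
ss_tot = np.sum((biomass - biomass.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
p = X.shape[1]
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
```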

  14. Guaiacol hydrodeoxygenation mechanism on Pt(111): Insights from density functional theory and linear free energy relations

    USDA-ARS?s Scientific Manuscript database

    In this study density functional theory (DFT) was used to study the adsorption of guaiacol and its initial hydrodeoxygenation (HDO) reactions on Pt(111). Previously reported Brønsted–Evans–Polanyi (BEP) correlations for small open chain molecules are found to be inadequate in estimating the reaction...

  15. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys

    PubMed Central

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov–Malyshev–Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683
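For reference, the classical FMP formula (as commonly stated) converts crossing counts to density as D = (π/2)·x/(L·d), with x track crossings, L transect length and d daily movement distance; the survey numbers below are hypothetical:

```python
import numpy as np

def fmp_density(crossings, transect_km, daily_movement_km):
    """Classical FMP estimator: D = (pi/2) * x / (L * d)."""
    return (np.pi / 2) * crossings / (transect_km * daily_movement_km)

# Illustrative numbers from a hypothetical survey:
d_hat = fmp_density(crossings=24, transect_km=10.0, daily_movement_km=6.0)
# (pi/2) * 24 / 60 = 0.2 * pi, about 0.63 animals per km^2
```

The Bayesian extension in the paper replaces this single ratio with a spatio-temporal model, but the same conversion sits at its core.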

  16. Adaptive local linear regression with application to printer color management.

    PubMed

    Gupta, Maya R; Garcia, Eric K; Chin, Erika

    2008-06-01

    Local learning methods, such as local linear regression and nearest neighbor classifiers, base estimates on nearby training samples (neighbors). Usually, the number of neighbors used in estimation is fixed to be a global "optimal" value, chosen by cross validation. This paper proposes adapting the number of neighbors used for estimation to the local geometry of the data, without need for cross validation. The term enclosing neighborhood is introduced to describe a set of neighbors whose convex hull contains the test point when possible. It is proven that enclosing neighborhoods yield bounded estimation variance under some assumptions. Three such enclosing neighborhood definitions are presented: natural neighbors, natural neighbors inclusive, and enclosing k-NN. The effectiveness of these neighborhood definitions with local linear regression is tested for estimating lookup tables for color management. Significant improvements in error metrics are shown, indicating that enclosing neighborhoods may be a promising adaptive neighborhood definition for other local learning tasks as well, depending on the density of training samples.
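In one dimension the enclosing idea reduces to growing k until the nearest neighbors bracket the test point, after which a local linear fit is made on that neighborhood. This toy sketch (synthetic 1-D data, not the paper's color-management tables) illustrates the adaptive neighborhood:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D toy data: in 1-D, an "enclosing" neighborhood is simply one whose
# neighbors bracket the test point (the convex hull test in general).
x_train = np.sort(rng.uniform(0, 10, 40))
y_train = 2.0 * x_train + 1.0 + rng.normal(0, 0.1, 40)

def enclosing_knn_predict(x0):
    order = np.argsort(np.abs(x_train - x0))
    # Grow k until the neighbors bracket x0 (enclosing k-NN), min k = 2.
    for k in range(2, len(x_train) + 1):
        nb = order[:k]
        if x_train[nb].min() <= x0 <= x_train[nb].max():
            break
    # Local linear fit on the enclosing neighborhood.
    A = np.column_stack([np.ones(k), x_train[nb]])
    coef, *_ = np.linalg.lstsq(A, y_train[nb], rcond=None)
    return coef[0] + coef[1] * x0

pred = enclosing_knn_predict(5.0)   # true function gives 2*5 + 1 = 11
```

If the test point falls outside the data range, the loop exhausts all neighbors, matching the paper's "when possible" caveat.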

  17. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    NASA Astrophysics Data System (ADS)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network-Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model.
The differences between model-derived estimates and GHCN-D observations were assessed using time-series graphs of 2016-2017 winter season SLR observations and climatological estimates, as well as by calculating RMSE and variance between estimated and observed values.
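The elevation-detrending step can be sketched in a few lines; the station values and linear trend below are synthetic, and the kriging of residuals is only indicated in a comment:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical station data: mean SLR increasing with elevation plus noise.
elev_m = rng.uniform(0, 3000, 300)
slr = 10.0 + 0.004 * elev_m + rng.normal(0, 1.0, 300)

# Linear least squares SLR-vs-elevation trend, as in the climatology workflow.
slope, intercept = np.polyfit(elev_m, slr, 1)
residuals = slr - (intercept + slope * elev_m)

# The detrended residuals are what would be kriged (spherical semivariogram),
# with the trend re-added on the 0.125-degree grid using gridded elevations.
```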

  18. A novel linear physical model for remote sensing of snow wetness and snow density using the visible and infrared bands

    NASA Astrophysics Data System (ADS)

    Varade, D. M.; Dikshit, O.

    2017-12-01

    Modeling and forecasting of snowmelt runoff are significant for understanding hydrological processes in the cryosphere, and they require timely information on snow physical properties such as the liquid water content and density of the topmost layer of the snowpack. Both seasonal runoff and avalanche forecasting depend strongly on the inherent physical characteristics of the snowpack, which are conventionally measured by field surveys in difficult terrain at considerable cost and manpower. With advances in remote sensing technology and the increasing availability of satellite data, the frequency and extent of these surveys could decline in future. In this study, we present a novel approach for estimating snow wetness and snow density using visible and infrared bands that are available on most multi-spectral sensors. We define a trapezoidal feature space based on the spectral reflectance in the near infrared band and the Normalized Difference Snow Index (NDSI), referred to as the NIR-NDSI space, in which dry snow and wet snow occupy the upper left and lower right corners, respectively. The corresponding pixels are extracted by approximating the dry and wet edges, which are used to develop a linear physical model to estimate snow wetness. Snow density is then estimated from the modeled snow wetness. Although the proposed approach uses Sentinel-2 data, it can be extended to incorporate data from other multi-spectral sensors. The estimated values of snow wetness and snow density show a high correlation with in-situ measurements. The proposed model opens a new avenue for remote sensing of snow physical properties using multi-spectral data, which has been limited in the literature.
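A minimal sketch of the indices involved; the edge reflectances and maximum wetness used in the linear interpolation are hypothetical placeholders, not the paper's fitted edges:

```python
import numpy as np

def ndsi(green, swir):
    # Normalized Difference Snow Index from green and SWIR reflectances.
    return (green - swir) / (green + swir)

# Hedged sketch of the NIR-NDSI feature-space idea: snow wetness interpolated
# linearly between a "dry edge" and a "wet edge". The edge values nir_dry,
# nir_wet and the scale w_max are illustrative placeholders.
def wetness(nir, nir_dry=0.9, nir_wet=0.4, w_max=8.0):
    nir = np.clip(nir, nir_wet, nir_dry)
    return w_max * (nir_dry - nir) / (nir_dry - nir_wet)   # % liquid water
```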

  19. Encircling the dark: constraining dark energy via cosmic density in spheres

    NASA Astrophysics Data System (ADS)

    Codis, S.; Pichon, C.; Bernardeau, F.; Uhlemann, C.; Prunet, S.

    2016-08-01

    The recently published analytic probability density function for the mildly non-linear cosmic density field within spherical cells is used to build a simple but accurate maximum likelihood estimate for the redshift evolution of the variance of the density, which, as expected, is shown to have smaller relative error than the sample variance. This estimator provides a competitive probe for the equation of state of dark energy, reaching a few per cent accuracy on w_p and w_a for a Euclid-like survey. The corresponding likelihood function can take into account the configuration of the cells via their relative separations. A code to compute one-cell-density probability density functions for arbitrary initial power spectrum, top-hat smoothing and various spherical-collapse dynamics is made available online, so as to provide straightforward means of testing the effect of alternative dark energy models and initial power spectra on the low-redshift matter distribution.

  20. Compensatory selection for roads over natural linear features by wolves in northern Ontario: Implications for caribou conservation

    PubMed Central

    Newton, Erica J.; Patterson, Brent R.; Anderson, Morgan L.; Rodgers, Arthur R.; Vander Vennen, Lucas M.; Fryxell, John M.

    2017-01-01

    Woodland caribou (Rangifer tarandus caribou) in Ontario are a threatened species that have experienced a substantial retraction of their historic range. Part of their decline has been attributed to increasing densities of anthropogenic linear features such as trails, roads, railways, and hydro lines. These features have been shown to increase the search efficiency and kill rate of wolves. However, it is unclear whether selection for anthropogenic linear features is additive or compensatory to selection for natural (water) linear features which may also be used for travel. We studied the selection of water and anthropogenic linear features by 52 resident wolves (Canis lupus x lycaon) over four years across three study areas in northern Ontario that varied in degrees of forestry activity and human disturbance. We used Euclidean distance-based resource selection functions (mixed-effects logistic regression) at the seasonal range scale with random coefficients for distance to water linear features, primary/secondary roads/railways, and hydro lines, and tertiary roads to estimate the strength of selection for each linear feature and for several habitat types, while accounting for availability of each feature. Next, we investigated the trade-off between selection for anthropogenic and water linear features. Wolves selected both anthropogenic and water linear features; selection for anthropogenic features was stronger than for water during the rendezvous season. Selection for anthropogenic linear features increased with increasing density of these features on the landscape, while selection for natural linear features declined, indicating compensatory selection of anthropogenic linear features. These results have implications for woodland caribou conservation. Prey encounter rates between wolves and caribou seem to be strongly influenced by increasing linear feature densities. 
This behavioral mechanism–a compensatory functional response to anthropogenic linear feature density resulting in decreased use of natural travel corridors–has negative consequences for the viability of woodland caribou. PMID:29117234

  2. Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials

    PubMed Central

    Carleton, James B.; D'Amore, Antonio; Feaver, Kristen R.; Rodin, Gregory J.; Sacks, Michael S.

    2014-01-01

    Many important biomaterials are composed of multiple layers of networked fibers. While there is a growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. The present work addresses these issues in two ways. First, using methods of geometric probability we develop theoretical estimates for the mean linear and areal fiber intersection densities for two-dimensional fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Secondly, we develop a random walk algorithm for geometric simulation of two-dimensional fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of SEM images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. The methods provided herein can provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data. PMID:25311685
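The scaling behind such estimates, that areal intersection density grows roughly quadratically with fiber density for a fixed orientation distribution, can be checked by direct simulation (isotropic fixed-length fibers in a unit square; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def random_fibers(n, length=0.3):
    # n random fibers (segments) of fixed length: centers uniform in the
    # unit square, isotropic orientations.
    c = rng.uniform(0, 1, (n, 2))
    th = rng.uniform(0, np.pi, n)
    d = 0.5 * length * np.column_stack([np.cos(th), np.sin(th)])
    return c - d, c + d

def crossings(p, q):
    # Count pairwise segment intersections via orientation tests.
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    total = 0
    n = len(p)
    for i in range(n):
        for j in range(i + 1, n):
            o1 = orient(p[i], q[i], p[j]) * orient(p[i], q[i], q[j])
            o2 = orient(p[j], q[j], p[i]) * orient(p[j], q[j], q[i])
            total += (o1 < 0) and (o2 < 0)
    return total

# Doubling the fiber density should roughly quadruple the intersection count.
c1 = crossings(*random_fibers(150))
c2 = crossings(*random_fibers(300))
ratio = c2 / c1
```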

  3. Density of Jatropha curcas Seed Oil and its Methyl Esters: Measurement and Estimations

    NASA Astrophysics Data System (ADS)

    Veny, Harumi; Baroutian, Saeid; Aroua, Mohamed Kheireddine; Hasan, Masitah; Raman, Abdul Aziz; Sulaiman, Nik Meriam Nik

    2009-04-01

    Density data as a function of temperature have been measured for Jatropha curcas seed oil, as well as biodiesel jatropha methyl esters, at temperatures from above their melting points to 90 °C. The data obtained were used to validate the method proposed by Spencer and Danner using a modified Rackett equation. The density values estimated with the modified Rackett equation were almost identical to the experimental values, with average absolute percent deviations of less than 0.03% for the jatropha oil and 0.04% for the jatropha methyl esters. The Janarthanan empirical equation was also employed to predict jatropha biodiesel densities. This equation performed equally well, with average absolute percent deviations within 0.05%. Two simple linear equations for the densities of jatropha oil and its methyl esters are also proposed in this study.
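A linear density-temperature correlation of the proposed form rho = a + b·T can be fitted in two lines; the measurements below are synthetic stand-ins with a realistic slope, not the paper's data:

```python
import numpy as np

# Hypothetical density-temperature measurements for a biodiesel sample
# (illustrative values only): rho in kg/m^3, T in deg C.
T = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0])
rho = np.array([879.5, 872.3, 865.1, 857.8, 850.6, 843.4, 836.1, 828.9])

# Simple linear correlation rho = a + b*T.
b, a = np.polyfit(T, rho, 1)
rho_fit = a + b * T

# Average absolute percent deviation, the fit statistic used in the study.
aapd = 100 * np.mean(np.abs((rho_fit - rho) / rho))
```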

  4. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). 
Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article.

  5. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006

    PubMed Central

    Chen, Lin; Ray, Shonket; Keller, Brad M.; Pertuz, Said; McDonald, Elizabeth S.; Conant, Emily F.

    2016-01-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88–0.95; weighted κ = 0.83–0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76–0.92, P < .05). 
Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016 Online supplemental material is available for this article. PMID:27002418
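
    The dose-agreement analysis above rests on Bland-Altman bias and 95% limits of agreement. A minimal pure-Python sketch, using hypothetical percent-density readings rather than the trial data:

```python
import math

def bland_altman(a, b):
    """Bland-Altman agreement: mean difference (bias) and 95% limits of agreement."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical percent-density readings at standard vs. low dose (not trial data)
std_dose = [18.2, 25.1, 33.4, 41.0, 12.7]
low_dose = [17.9, 25.6, 32.8, 40.5, 13.1]
bias, lo, hi = bland_altman(std_dose, low_dose)
```

    If most paired differences fall inside the limits and the bias is near zero, the two dose protocols can be treated as interchangeable for density estimation.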

  6. Linear and curvilinear correlations of brain gray matter volume and density with age using voxel-based morphometry with the Akaike information criterion in 291 healthy children.

    PubMed

    Taki, Yasuyuki; Hashizume, Hiroshi; Thyreau, Benjamin; Sassa, Yuko; Takeuchi, Hikaru; Wu, Kai; Kotozaki, Yuka; Nouchi, Rui; Asano, Michiko; Asano, Kohei; Fukuda, Hiroshi; Kawashima, Ryuta

    2013-08-01

    We examined linear and curvilinear correlations of gray matter volume and density in cortical and subcortical gray matter with age using magnetic resonance images (MRI) in a large number of healthy children. We applied voxel-based morphometry (VBM) and region-of-interest (ROI) analyses with the Akaike information criterion (AIC), which was used to determine the best-fit model by selecting which predictor terms should be included. We collected data on brain structural MRI in 291 healthy children aged 5-18 years. Structural MRI data were segmented and normalized using a custom template by applying the diffeomorphic anatomical registration using exponentiated lie algebra (DARTEL) procedure. Next, we analyzed the correlations of gray matter volume and density with age in VBM with AIC by estimating linear, quadratic, and cubic polynomial functions. Several regions such as the prefrontal cortex, the precentral gyrus, and cerebellum showed significant linear or curvilinear correlations between gray matter volume and age on an increasing trajectory, and between gray matter density and age on a decreasing trajectory in VBM and ROI analyses with AIC. Because the trajectory of gray matter volume and density with age suggests the progress of brain maturation, our results may contribute to clarifying brain maturation in healthy children from the viewpoint of brain structure. Copyright © 2012 Wiley Periodicals, Inc.
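
    The AIC-based choice among linear, quadratic, and cubic fits can be sketched as follows. This is a generic least-squares/AIC illustration with invented age/volume numbers, not the study's VBM pipeline:

```python
import math

def polyfit(x, y, deg):
    """Least-squares polynomial fit via normal equations (fine for low degree)."""
    n = deg + 1
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef  # coef[i] multiplies x**i

def aic(x, y, deg):
    """AIC for a degree-`deg` polynomial model with Gaussian errors."""
    c = polyfit(x, y, deg)
    rss = sum((yi - sum(ci * xi ** i for i, ci in enumerate(c))) ** 2
              for xi, yi in zip(x, y))
    n, k = len(x), deg + 2  # polynomial coefficients + error variance
    return n * math.log(rss / n) + 2 * k

# Hypothetical regional gray matter volumes (arbitrary units) vs. age in years
ages = [5.0, 7.0, 9.0, 11.0, 13.0, 15.0, 17.0]
vols = [680.0, 720.0, 745.0, 760.0, 762.0, 758.0, 750.0]
best_deg = min((1, 2, 3), key=lambda d: aic(ages, vols, d))
```

    The lowest-AIC degree is the "best-fit model" in the sense used by the abstract: extra polynomial terms are kept only when the residual reduction outweighs the 2k penalty.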

  7. Comparison of Fatigue Life Estimation Using Equivalent Linearization and Time Domain Simulation Methods

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Dhainaut, Jean-Michel

    2000-01-01

    The Monte Carlo simulation method in conjunction with the finite element large deflection modal formulation is used to estimate the fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with a bandwidth of 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities (PSD) and higher statistical moments of the maximum deflection and stress/strain. The method of moments of the PSD with Dirlik's approach is employed to estimate the panel fatigue life.
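
    The method-of-moments input to such fatigue estimates is the set of spectral moments of the stress PSD. A minimal sketch with a hypothetical flat band-limited PSD (Dirlik's full cycle-count formula is omitted here):

```python
import math

def spectral_moment(freqs, psd, order):
    """j-th spectral moment m_j = integral of f**j * S(f) df (trapezoidal rule)."""
    return sum((f2 - f1) * (f1 ** order * s1 + f2 ** order * s2) / 2.0
               for f1, f2, s1, s2 in zip(freqs, freqs[1:], psd, psd[1:]))

# Hypothetical band-limited white-noise stress PSD: flat level over 0..1024 Hz
freqs = [float(i) for i in range(1025)]
psd = [4.0] * len(freqs)
m0 = spectral_moment(freqs, psd, 0)
m2 = spectral_moment(freqs, psd, 2)
m4 = spectral_moment(freqs, psd, 4)
nu0 = math.sqrt(m2 / m0)   # expected zero up-crossing rate (Hz)
nup = math.sqrt(m4 / m2)   # expected peak rate (Hz)
gamma = nu0 / nup          # irregularity factor (1 for a narrow-band process)
```

    Dirlik's approach feeds these same moments into an empirical rainflow-range density; the rates above already show how broadband (gamma well below 1) the response is.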

  8. On-line estimation of nonlinear physical systems

    USGS Publications Warehouse

    Christakos, G.

    1988-01-01

    Recursive algorithms for estimating states of nonlinear physical systems are presented. Orthogonality properties are rediscovered and the associated polynomials are used to linearize state and observation models of the underlying random processes. This requires some key hypotheses regarding the structure of these processes, which may then take account of a wide range of applications. The latter include streamflow forecasting, flood estimation, environmental protection, earthquake engineering, and mine planning. The proposed estimation algorithm may be compared favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. Moreover, the method has several advantages over nonrecursive estimators like disjunctive kriging. To link theory with practice, some numerical results for a simulated system are presented, in which responses from the proposed and extended Kalman algorithms are compared. ?? 1988 International Association for Mathematical Geology.
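
    For contrast with the polynomial-linearization filters described above, a scalar extended Kalman filter (the Taylor series-type filter the abstract compares against) can be sketched as follows; the models and noise levels are invented for illustration:

```python
def ekf_step(x, P, z, f, fprime, h, hprime, Q, R):
    """One predict/update cycle of a scalar extended Kalman filter:
    the nonlinear models f (state) and h (observation) are linearized
    through their derivatives fprime and hprime."""
    # Predict
    x_pred = f(x)
    P_pred = fprime(x) ** 2 * P + Q
    # Update
    H = hprime(x_pred)
    K = P_pred * H / (H ** 2 * P_pred + R)
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Hypothetical mildly nonlinear system and measurements (invented numbers)
f = lambda v: 0.9 * v
fp = lambda v: 0.9
h = lambda v: v + 0.05 * v * v
hp = lambda v: 1.0 + 0.1 * v
x, P = 0.0, 1.0
for z in [1.1, 0.9, 1.0, 0.95]:
    x, P = ekf_step(x, P, z, f, fp, h, hp, Q=0.01, R=0.1)
```

    The orthogonal-polynomial approach of the abstract replaces this local Taylor linearization with a polynomial expansion of the state and observation models.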

  9. Feasibility of and agreement between MR imaging and spectroscopic estimation of hepatic proton density fat fraction in children with known or suspected nonalcoholic fatty liver disease.

    PubMed

    Achmad, Emil; Yokoo, Takeshi; Hamilton, Gavin; Heba, Elhamy R; Hooker, Jonathan C; Changchien, Christopher; Schroeder, Michael; Wolfson, Tanya; Gamst, Anthony; Schwimmer, Jeffrey B; Lavine, Joel E; Sirlin, Claude B; Middleton, Michael S

    2015-10-01

    To assess feasibility of and agreement between magnetic resonance imaging (MRI) and magnetic resonance spectroscopy (MRS) for estimating hepatic proton density fat fraction (PDFF) in children with known or suspected nonalcoholic fatty liver disease (NAFLD). Children were included in this study from two previous research studies in each of which three MRI and three MRS acquisitions were obtained. Sequence acceptability, and MRI- and MRS-estimated PDFF were evaluated. Agreement of MRI- with MRS-estimated hepatic PDFF was assessed by linear regression and Bland-Altman analysis. Age, sex, BMI-Z score, acquisition time, and artifact score effects on MRI- and MRS-estimated PDFF agreement were assessed by multiple linear regression. Eighty-six children (61 boys and 25 girls) were included in this study. Slope and intercept from regressing MRS-PDFF on MRI-PDFF were 0.969 and 1.591%, respectively, and the Bland-Altman bias and 95% limits of agreement were 1.17% ± 2.61%. MRI motion artifact score was higher in boys than girls (by 0.21, p = 0.021). Higher BMI-Z score was associated with lower agreement between MRS and MRI (p = 0.045). Hepatic PDFF estimation by both MRI and MRS is feasible, and MRI- and MRS-estimated PDFF agree closely in children with known or suspected NAFLD.

  10. Density of α-pinene, β-pinene, limonene, and essence of turpentine

    NASA Astrophysics Data System (ADS)

    Tavares Sousa, A.; Nieto de Castro, C. A.

    1992-03-01

    Densities of α-pinene, β-pinene, limonene, and essence of turpentine have been measured at 293.15, 298.15, 303.15, 308.15, and 313.15 K, at atmospheric pressure, with a mechanical oscillator densimeter. Benzene and cyclohexane were used as calibration fluids. The precision is of the order of 0.01 kg·m⁻³, while the accuracy is estimated to be 0.1%. A linear representation of the variation of the density with temperature reproduces the experimental data within 0.2%.
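
    The linear density-temperature representation amounts to an ordinary least-squares line ρ(T) = a + bT. A sketch with hypothetical stand-in densities (not the measured data):

```python
def fit_line(T, rho):
    """Ordinary least squares for rho(T) = a + b*T; returns (a, b)."""
    n = len(T)
    tbar = sum(T) / n
    rbar = sum(rho) / n
    b = sum((t - tbar) * (r - rbar) for t, r in zip(T, rho)) / \
        sum((t - tbar) ** 2 for t in T)
    return rbar - b * tbar, b

# Hypothetical monoterpene-like densities (kg/m^3) at the five temperatures
T = [293.15, 298.15, 303.15, 308.15, 313.15]
rho = [858.8, 854.7, 850.5, 846.3, 842.1]
a, b = fit_line(T, rho)
```

    For liquids over such a narrow range, the residuals of this line are what the quoted 0.2% reproduction figure refers to.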

  11. A simple model to predict the biodiesel blend density as simultaneous function of blend percent and temperature.

    PubMed

    Gaonkar, Narayan; Vaidya, R G

    2016-05-01

    A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and theoretically investigated the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values of the components of the blend at any two different temperatures. We observe that the blend density decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low standard estimates of error (SEE = 0.0003-0.0022) and absolute average deviations (AAD = 0.03-0.15 %) obtained with the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
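
    Kay's mixing rule combined with per-component linearity in temperature can be sketched as below; the reference densities are invented placeholders, not the paper's data:

```python
def linear_density(T, T1, rho1, T2, rho2):
    """Interpolate a component's density linearly in temperature
    from two reference measurements (T1, rho1) and (T2, rho2)."""
    slope = (rho2 - rho1) / (T2 - T1)
    return rho1 + slope * (T - T1)

def blend_density(T, vfrac_bio, bio_ref, diesel_ref):
    """Kay's mixing rule: volume-fraction-weighted component densities."""
    rho_b = linear_density(T, *bio_ref)
    rho_d = linear_density(T, *diesel_ref)
    return vfrac_bio * rho_b + (1.0 - vfrac_bio) * rho_d

# Hypothetical reference densities (g/cm^3) at 293.15 K and 333.15 K
bio_ref = (293.15, 0.880, 333.15, 0.851)
diesel_ref = (293.15, 0.835, 333.15, 0.806)
rho_b20 = blend_density(303.15, 0.20, bio_ref, diesel_ref)  # B20 at 303.15 K
```

    Only the two density values per component are needed, which is the single-input advantage the abstract claims.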

  12. Study of ion-gyroscale fluctuations in low-density L-mode plasmas heated by NBI on KSTAR

    NASA Astrophysics Data System (ADS)

    Lee, W.; Ko, S. H.; Leem, J.; Yun, G. S.; Park, H. K.; Wang, W. X.; Budny, R. V.; Kim, K. W.; Luhmann, N. C., Jr.; The KSTAR Team

    2018-04-01

    Broadband density fluctuations with peak frequencies ranging from 150 to 400 kHz were measured using a multichannel microwave imaging reflectometer in the core region of low-density L-mode plasmas heated by neutral beam injection on KSTAR. These fluctuations have been studied by comparing the dominant mode scales estimated from the measurements with those predicted by linear gyrokinetic simulation. The measured poloidal wavenumbers are qualitatively comparable to those of the ‘fastest growing modes’ from simulations, whereas they are larger than those of the ‘transport-dominant modes’ by about a factor of three. The agreement in wavenumber between the measurement and linear simulation (for the fastest growing modes) is probably due to E × B flow shear that is sufficiently weak compared to the maximum linear growth rate. Meanwhile, the transport-dominant modes seem to be related to the fluctuations at lower frequencies (˜80-150 kHz) observed in some of the measurements.

  13. Correlation between quantified breast densities from digital mammography and 18F-FDG PET uptake.

    PubMed

    Lakhani, Paras; Maidment, Andrew D A; Weinstein, Susan P; Kung, Justin W; Alavi, Abass

    2009-01-01

    To correlate breast density quantified from digital mammograms with mean and maximum standardized uptake values (SUVs) from positron emission tomography (PET). This was a prospective study that included 56 women with a history of suspicion of breast cancer (mean age 49.2 +/- 9.3 years), who underwent 18F-fluoro-2-deoxyglucose (FDG)-PET imaging of their breasts as well as digital mammography. A computer thresholding algorithm was applied to the contralateral nonmalignant breasts to quantitatively estimate the breast density on digital mammograms. The breasts were also classified into one of four Breast Imaging Reporting and Data System categories for density. Comparisons between SUV and breast density were made using linear regression and the Student's t-test. Linear regression of mean SUV versus average breast density showed a positive relationship with a Pearson's correlation coefficient of R(2) = 0.83. The quantified breast densities and mean SUVs were significantly greater for mammographically dense than nondense breasts (p < 0.0001 for both). The average quantified densities and mean SUVs of the breasts were significantly greater for premenopausal than postmenopausal patients (p < 0.05). 8/51 (16%) of the patients had maximum SUVs that equaled 1.6 or greater. There is a positive linear correlation between quantified breast density on digital mammography and FDG uptake on PET. Menopausal status affects the metabolic activity of normal breast tissue, resulting in higher SUVs in pre- versus postmenopausal patients.
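
    The reported density-SUV relationship is a standard Pearson correlation. A pure-Python sketch with hypothetical pairs (not the study data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    sxx = sum((a - xbar) ** 2 for a in x)
    syy = sum((b - ybar) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical quantified percent density vs. mean SUV pairs
density = [10.0, 25.0, 40.0, 55.0, 70.0]
mean_suv = [0.45, 0.60, 0.78, 0.95, 1.10]
r = pearson_r(density, mean_suv)
r_squared = r * r   # compare with the reported R^2 of 0.83
```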

  14. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. 
PMID:25203681

  15. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    PubMed

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.

  16. Electronic polarizability of light crude oil from optical and dielectric studies

    NASA Astrophysics Data System (ADS)

    George, A. K.; Singh, R. N.

    2017-07-01

    In the present paper we report the temperature dependence of the density, refractive index and dielectric constant of three samples of crude oil. The API gravity number estimated from the temperature-dependent density studies revealed that the three samples fall in the category of light oil. The measured refractive index and density data are used to evaluate the polarizability of these fluids. The molar refraction and the molar volume are evaluated through the Lorentz-Lorenz equation. The function of the refractive index, FRI, divided by the mass density ρ is a constant approximately equal to one-third and is invariant with temperature for all the samples. The measured values of the dielectric constant decrease linearly with increasing temperature for all the samples. The dielectric constant estimated from the refractive index measurements using the Lorentz-Lorenz equation agrees well with the measured values. The results are promising since all three measured properties complement each other and offer a simple and reliable method for estimating crude oil properties in the absence of sufficient data.
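
    The Lorentz-Lorenz route from refractive index and density to polarizability can be sketched as follows; the sample values are illustrative stand-ins, not the measured crude-oil data:

```python
import math

N_A = 6.02214076e23  # Avogadro constant, 1/mol

def f_ri(n):
    """Lorentz-Lorenz function of the refractive index, (n^2-1)/(n^2+2)."""
    return (n * n - 1.0) / (n * n + 2.0)

def molar_refraction(n, M, rho):
    """R_m = FRI * M / rho (cm^3/mol for M in g/mol and rho in g/cm^3)."""
    return f_ri(n) * M / rho

def polarizability(n, M, rho):
    """Electronic polarizability (cm^3) from the Lorentz-Lorenz relation."""
    return 3.0 * molar_refraction(n, M, rho) / (4.0 * math.pi * N_A)

# Hypothetical light-crude values: n = 1.46, effective M = 220 g/mol, rho = 0.84 g/cm^3
alpha = polarizability(1.46, 220.0, 0.84)
```

    Note that f_ri(n) / rho is the quantity the abstract reports as roughly one-third and temperature invariant.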

  17. Effects of LiDAR point density and landscape context on the retrieval of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, K. K.; Chen, G.; McCarter, J. B.; Meentemeyer, R. K.

    2014-12-01

    Light Detection and Ranging (LiDAR), as an alternative to conventional optical remote sensing, is being increasingly used to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and better data accuracies, which, however, pose challenges to the procurement and processing of LiDAR data for large-area assessments. Reducing point density cuts data acquisition costs and overcomes computational challenges for broad-scale forest management. However, how does this reduction affect the accuracy of biomass estimation in an urban environment containing a great level of anthropogenic disturbance? The main goal of this study is to evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing regions of Charlotte, North Carolina, USA. We used multiple linear regression to establish the statistical relationship between field-measured biomass and predictor variables (PVs) derived from LiDAR point clouds with varying densities. We compared the estimation accuracies between the general Urban Forest models (no discrimination of forest type) and the Forest Type models (evergreen, deciduous, and mixed), which was followed by quantifying the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest models (adjusted R2) was fairly consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of the Forest Type biomass models outperformed the Urban Forest models at two representative point densities (100% and 40%). The Urban Forest biomass model with a development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, signifying the distance impact of development on biomass estimation.
Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest biomass assessment without compromising the accuracy of estimation, which may further be improved using development density.
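
    The adjusted R² used to compare these models corrects plain R² for the number of predictors. A one-liner with hypothetical inputs:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 for n observations and p predictor variables."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# e.g. a biomass model with hypothetical R^2 = 0.85, 60 plots, 6 LiDAR predictors
adj = adjusted_r2(0.85, 60, 6)
```

    Because the penalty grows with p, comparing models built from different numbers of LiDAR-derived predictors on adjusted rather than raw R² is what makes the cross-density comparison fair.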

  18. Spatial and temporal Brook Trout density dynamics: Implications for conservation, management, and monitoring

    USGS Publications Warehouse

    Wagner, Tyler; Deweber, Jefferson T.; Detar, Jason; Kristine, David; Sweka, John A.

    2014-01-01

    Many potential stressors to aquatic environments operate over large spatial scales, prompting the need to assess and monitor both site-specific and regional dynamics of fish populations. We used hierarchical Bayesian models to evaluate the spatial and temporal variability in density and capture probability of age-1 and older Brook Trout Salvelinus fontinalis from three-pass removal data collected at 291 sites over a 37-year time period (1975–2011) in Pennsylvania streams. There was high between-year variability in density, with annual posterior means ranging from 2.1 to 10.2 fish/100 m2; however, there was no significant long-term linear trend. Brook Trout density was positively correlated with elevation and negatively correlated with percent developed land use in the network catchment. Probability of capture did not vary substantially across sites or years but was negatively correlated with mean stream width. Because of the low spatiotemporal variation in capture probability and a strong correlation between first-pass CPUE (catch/min) and three-pass removal density estimates, the use of an abundance index based on first-pass CPUE could represent a cost-effective alternative to conducting multiple-pass removal sampling for some Brook Trout monitoring and assessment objectives. Single-pass indices may be particularly relevant for monitoring objectives that do not require precise site-specific estimates, such as regional monitoring programs that are designed to detect long-term linear trends in density.
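
    Three-pass removal abundance estimation with constant capture probability can be sketched as a brute-force maximum-likelihood search; the catches below are illustrative, not the Pennsylvania data, and the Bayesian hierarchical structure of the study is omitted:

```python
import math

def removal_mle(catches, n_max=500, p_grid=200):
    """Maximum-likelihood abundance N and capture probability p for
    k-pass removal sampling with constant p (brute-force grid search)."""
    k = len(catches)
    total = sum(catches)
    best = (total, 0.5, -float("inf"))
    for n in range(total, n_max + 1):
        for i in range(1, p_grid):
            p = i / p_grid
            q = 1.0 - p
            # Multinomial log-likelihood: pass j removes with prob p*q**j,
            # and N - total fish are never caught (prob q**k each).
            ll = math.lgamma(n + 1) - math.lgamma(n - total + 1)
            ll -= sum(math.lgamma(c + 1) for c in catches)
            ll += sum(c * (math.log(p) + j * math.log(q))
                      for j, c in enumerate(catches))
            ll += (n - total) * k * math.log(q)
            if ll > best[2]:
                best = (n, p, ll)
    return best[0], best[1]

# Hypothetical three-pass catches at one site
n_hat, p_hat = removal_mle([30, 15, 8])
```

    The first-pass CPUE shortcut the abstract suggests simply regresses the first-pass catch against estimates like n_hat across sites, trading per-site precision for survey cost.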

  19. Body fat assessed from body density and estimated from skinfold thickness in normal children and children with cystic fibrosis.

    PubMed

    Johnston, J L; Leong, M S; Checkland, E G; Zuberbuhler, P C; Conger, P R; Quinney, H A

    1988-12-01

    Body density and skinfold thickness at four sites were measured in 140 normal boys, 168 normal girls, and 6 boys and 7 girls with cystic fibrosis, all aged 8-14 y. Prediction equations for the estimation of body fat content from skinfold measurements in the normal boys and girls were derived from linear regression of body density vs the log of the sum of the skinfold thicknesses. The relationship between body density and the log of the sum of the skinfold measurements differed from normal for the boys and girls with cystic fibrosis because of their high body density, even though their large residual volume was corrected for. However, the sum of skinfold measurements in the children with cystic fibrosis did not differ from normal. Thus the body fat percentage of these children with cystic fibrosis was underestimated when calculated from body density, and invalid when calculated from skinfold thickness.
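
    The skinfold-to-density-to-fat chain described above is typically a log-linear regression followed by a density-to-fat conversion such as the Siri equation. The regression coefficients below are hypothetical stand-ins, not the study's derived equations:

```python
import math

def siri_percent_fat(body_density):
    """Siri equation: percent body fat from whole-body density (g/cm^3)."""
    return 495.0 / body_density - 450.0

def density_from_skinfolds(sum_skinfolds_mm, a, b):
    """Linear prediction of body density from log10 of the skinfold sum:
    rho = a - b * log10(S). a and b come from a population regression."""
    return a - b * math.log10(sum_skinfolds_mm)

# Hypothetical regression coefficients of the kind such studies derive
a, b = 1.1533, 0.0643
rho = density_from_skinfolds(40.0, a, b)
fat_pct = siri_percent_fat(rho)
```

    The cystic fibrosis finding is that the (a, b) pair fitted on normal children no longer maps skinfolds to the right density, so fat_pct computed this way becomes invalid for that group.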

  20. The Seismic Tool-Kit (STK): an open source software for seismology and signal processing.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique

    2016-04-01

    We present an open source software project (GNU public license), named STK: Seismic ToolKit, that is dedicated mainly to seismology and signal processing. The STK project, started in 2007, is hosted by SourceForge.net and counts more than 19,500 downloads at the time of writing. The STK project is composed of two main branches. First, a graphical interface dedicated to signal processing in the SAC format (SAC_ASCII and SAC_BIN), where the signal can be plotted, zoomed, filtered, integrated, differentiated, etc. (a large variety of IIR and FIR filters is provided). The spectral density of the signal is estimated via the Fourier transform, with visualization of the power spectral density (PSD) in linear or log scale, and also a time-evolving time-frequency representation (or sonagram). Three-component signals can also be processed to estimate their polarization properties, either for a given window or for evolving windows along the time axis. This polarization analysis is useful for extracting polarized noise and for differentiating P waves, Rayleigh waves, Love waves, etc. Second, a panel of utility programs is provided for working in a terminal mode, with basic programs for computing azimuth and distance in spherical geometry, inter/auto-correlation, spectral density, time-frequency analysis for an entire directory of signals, focal planes and principal component axes, the radiation pattern of P waves, polarization analysis of different waves (including noise), under/over-sampling of signals, cubic-spline smoothing, and linear/nonlinear regression analysis of data sets. A MINimum library of Linear AlGebra (MIN-LINAG) is also provided for the main matrix operations: QR/QL decomposition, Cholesky solution of linear systems, computation of eigenvalues/eigenvectors, QR-solve/Eigen-solve of linear equation systems, etc. STK is developed in C/C++, mainly under Linux OS, and has also been partially implemented under MS-Windows. 
Useful links: http://sourceforge.net/projects/seismic-toolkit/ http://sourceforge.net/p/seismic-toolkit/wiki/browse_pages/
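
    The PSD estimation at the core of STK's signal branch can be illustrated with a naive DFT periodogram (pure Python, O(n²); STK itself, being written in C/C++, presumably uses an FFT):

```python
import cmath
import math

def periodogram(x, fs):
    """Naive O(n^2) DFT periodogram: one-sided PSD estimate of a real signal
    x sampled at fs Hz. Returns (frequencies, PSD values)."""
    n = len(x)
    freqs, psd = [], []
    for k in range(n // 2 + 1):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        psd.append(abs(s) ** 2 / (fs * n))
    return freqs, psd

# An 8 Hz test tone sampled at 64 Hz should peak in the 8 Hz bin
signal = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]
freqs, psd = periodogram(signal, fs=64)
peak_hz = freqs[max(range(len(psd)), key=psd.__getitem__)]
```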

  1. Estimating radiated flux density from wildland fires using the raw output of limited bandpass detectors

    Treesearch

    Robert L. Kremens; Matthew B. Dickinson

    2015-01-01

    We have simulated the radiant emission spectra from wildland fires such as would be observed at a scale encompassing the pre-frontal fuel bed, the flaming front and the zone of post-frontal combustion and cooling. For these simulations, we developed a 'mixed-pixel' model where the fire infrared spectrum is estimated as the linear superposition of spectra of...
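
    The 'mixed-pixel' idea can be sketched as an area-fraction-weighted superposition of Planck radiances; the scene fractions and temperatures below are invented for illustration:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck(wavelength_m, T):
    """Spectral radiance of a blackbody (W m^-2 sr^-1 m^-1)."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return a / (math.exp(H * C / (wavelength_m * KB * T)) - 1.0)

def mixed_pixel(wavelength_m, fractions_temps):
    """Linear superposition of blackbody spectra weighted by area fraction."""
    return sum(f * planck(wavelength_m, T) for f, T in fractions_temps)

# Hypothetical scene: 5% flame at 1100 K, 15% smoldering at 600 K, 80% ambient 300 K
scene = [(0.05, 1100.0), (0.15, 600.0), (0.80, 300.0)]
radiance_4um = mixed_pixel(4e-6, scene)
```

    Real fire spectra also need component emissivities and atmospheric transmission, which the blackbody sketch leaves out.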

  2. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities, taking ballistic coefficient estimation results into account. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers.
The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.
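
    The linear weighted blending used to stitch overlapping 14-hour solutions can be sketched as a linear cross-fade over the shared samples; the arcs below are toy numbers, not CHAMP/GRACE densities:

```python
def blend_overlap(seg_a, seg_b, overlap):
    """Stitch two solution segments whose last/first `overlap` samples
    coincide in time, ramping the weight of segment B linearly across
    the overlap so the combined series has no discontinuity."""
    body_a = seg_a[:-overlap]
    body_b = seg_b[overlap:]
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)   # weight of segment B grows linearly
        blended.append((1 - w) * seg_a[-overlap + i] + w * seg_b[i])
    return body_a + blended + body_b

# Two hypothetical density arcs (arbitrary units) sharing a 4-sample overlap
arc1 = [5.0, 5.1, 5.2, 5.0, 4.9, 4.8]
arc2 = [5.2, 5.0, 4.9, 4.7, 4.6, 4.5]
combined = blend_overlap(arc1, arc2, overlap=4)
```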

  3. Asteroid orbital error analysis: Theory and application

    NASA Technical Reports Server (NTRS)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
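
    The covariance propagation behind those positional uncertainty ellipsoids is the linearized law Σ' = ΦΣΦᵀ. A toy 2×2 sketch with an invented state-transition matrix:

```python
def mat_mul(A, B):
    """Plain matrix product of nested-list matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def propagate_covariance(phi, cov):
    """Linearized error propagation: Sigma' = Phi Sigma Phi^T."""
    return mat_mul(mat_mul(phi, cov), transpose(phi))

# Hypothetical 2x2 example: along-track position accumulates error
# from an uncertain drift rate over 30 time units
phi = [[1.0, 30.0],
       [0.0, 1.0]]
cov0 = [[1e-4, 0.0],
        [0.0, 1e-6]]
cov1 = propagate_covariance(phi, cov0)
```

    The growth of cov1[0][0] relative to cov0[0][0] is the 1-D analogue of the uncertainty ellipsoid stretching along the orbit with time.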

  4. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. 
Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
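
    The standard FCM step of this pipeline can be sketched in one dimension as below; the voxel intensities are hypothetical, and the sketch omits the CLIC bias-field estimation entirely:

```python
def fcm_1d(values, c=2, m=2.0, iters=50):
    """1-D fuzzy c-means clustering (c >= 2): returns cluster centers and
    the membership matrix. m > 1 is the fuzzifier (m = 2 is customary)."""
    lo, hi = min(values), max(values)
    # Spread the initial centers across the intensity range
    centers = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    u = [[0.0] * c for _ in values]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)**(2/(m-1))
        for i, x in enumerate(values):
            d = [abs(x - ck) + 1e-12 for ck in centers]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / dj) ** (2.0 / (m - 1.0)) for dj in d)
        # Center update: fuzzy-membership-weighted means
        for k in range(c):
            den = sum(u[i][k] ** m for i in range(len(values)))
            centers[k] = sum(u[i][k] ** m * x for i, x in enumerate(values)) / den
    return centers, u

# Hypothetical normalized voxel intensities: fat near 0.2, fibroglandular near 0.8
voxels = [0.18, 0.21, 0.19, 0.22, 0.79, 0.81, 0.83, 0.78]
centers, memberships = fcm_1d(voxels)
```

    A residual bias field shifts intensities spatially, which is exactly why plain FCM misclassifies tissue until a method like CLIC flattens the field first.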

  5. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    PubMed

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. FCM clustering was then performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and the FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction.
The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.
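    The FCM step above is standard; as a minimal sketch of fuzzy c-means on synthetic 1-D intensities (two tissue classes, fuzzifier m = 2; the class means and data below are invented for illustration, and no bias-field term is modeled):

```python
import numpy as np

def fcm_1d(x, n_clusters=2, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Basic fuzzy c-means on 1-D intensities (no bias-field correction)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)    # membership-weighted class means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# two synthetic intensity populations (e.g. fatty vs fibroglandular voxels)
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(100, 5, 500), rng.normal(200, 5, 500)])
centers, u = fcm_1d(x)
frac = (u.argmax(axis=0) == centers.argmax()).mean()  # "dense" voxel fraction
```

On this toy data the recovered class means sit near 100 and 200 and roughly half the voxels are assigned to the denser class.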

  6. Optimal estimation for the satellite attitude using star tracker measurements

    NASA Technical Reports Server (NTRS)

    Lo, J. T.-H.

    1986-01-01

    An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization or approximation is necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.

  7. The association of very-low-density lipoprotein with ankle-brachial index in peritoneal dialysis patients with controlled serum low-density lipoprotein cholesterol level

    PubMed Central

    2013-01-01

    Background Peripheral artery disease (PAD) represents atherosclerotic disease and is a risk factor for death in peritoneal dialysis (PD) patients, who tend to show an atherogenic lipid profile. In this study, we investigated the relationship between lipid profile and ankle-brachial index (ABI) as an index of atherosclerosis in PD patients with controlled serum low-density lipoprotein (LDL) cholesterol level. Methods Thirty-five PD patients, whose serum LDL cholesterol level was controlled at less than 120 mg/dl, were enrolled in this cross-sectional study in Japan. The proportions of cholesterol level to total cholesterol level (cholesterol proportion) in 20 lipoprotein fractions and the mean size of lipoprotein particles were measured using an improved method, namely, high-performance gel permeation chromatography. Multivariate linear regression analysis was adjusted for diabetes mellitus and cardiovascular and/or cerebrovascular diseases. Results The mean (standard deviation) age was 61.6 (10.5) years; PD vintage, 38.5 (28.1) months; ABI, 1.07 (0.22). A low ABI (0.9 or lower) was observed in 7 patients (low-ABI group). The low-ABI group showed significantly higher cholesterol proportions in the chylomicron fraction and large very-low-density lipoproteins (VLDLs) (Fractions 3–5) than the high-ABI group (ABI>0.9). Adjusted multivariate linear regression analysis showed that ABI was negatively associated with serum VLDL cholesterol level (parameter estimate=-0.00566, p=0.0074); the cholesterol proportions in large VLDLs (Fraction 4, parameter estimate=-3.82, p=0.038; Fraction 5, parameter estimate=-3.62, p=0.0039) and medium VLDL (Fraction 6, parameter estimate=-3.25, p=0.014); and the size of VLDL particles (parameter estimate=-0.0352, p=0.032). Conclusions This study showed that the characteristics of VLDL particles were associated with ABI among PD patients.
Lowering serum VLDL level may be an effective therapy against atherosclerosis in PD patients after the control of serum LDL cholesterol level. PMID:24093487

  8. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
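    The core ingredient is a kernel density estimator for the (possibly non-Gaussian) source pdf. A minimal 1-D Gaussian-kernel sketch with Silverman's rule-of-thumb bandwidth, on synthetic bimodal data (not MEG measurements):

```python
import numpy as np

def gaussian_kde_1d(samples, grid):
    """Kernel density estimate with Silverman's rule-of-thumb bandwidth."""
    n = samples.size
    h = 1.06 * samples.std(ddof=1) * n ** (-1 / 5)   # Silverman bandwidth
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
# bimodal (clearly non-Gaussian) synthetic source amplitudes
s = np.concatenate([rng.normal(-2, 0.5, 400), rng.normal(2, 0.5, 400)])
grid = np.linspace(-5, 5, 201)
pdf = gaussian_kde_1d(s, grid)
# trapezoid-rule check that the estimate integrates to ~1
area = float(np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(grid)))
```

A second-order (Gaussian) fit would put all mass in one hump; the KDE preserves both modes, which is the point of the non-Gaussian formulation.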

  9. Slope Estimation in Noisy Piecewise Linear Functions

    PubMed Central

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2014-01-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure. PMID:25419020
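    The discretize-and-dynamic-program idea can be sketched as a Viterbi-style pass over quantized slope states. This is a simplified stand-in for MAPSlope, not the authors' implementation: the switch penalty, noise level, and slope grid below are illustrative assumptions.

```python
import numpy as np

def map_slopes(y, slope_grid, sigma=0.5, switch_cost=4.0):
    """Viterbi-style DP: MAP sequence of quantized slopes for piecewise
    linear data with Gaussian noise, using first differences as data."""
    d = np.diff(y)                                  # noisy per-step slopes
    S, T = slope_grid.size, d.size
    nll = (d[None, :] - slope_grid[:, None]) ** 2 / (2 * sigma ** 2)
    cost = nll[:, 0].copy()
    back = np.zeros((S, T), dtype=int)
    idx = np.arange(S)
    for t in range(1, T):
        # staying in the same slope state is free; switching pays a penalty
        trans = cost[None, :] + switch_cost * (idx[:, None] != idx[None, :])
        back[:, t] = trans.argmin(axis=1)
        cost = trans.min(axis=1) + nll[:, t]
    states = np.empty(T, dtype=int)
    states[-1] = cost.argmin()
    for t in range(T - 1, 0, -1):                   # backtrack the MAP path
        states[t - 1] = back[states[t], t]
    return slope_grid[states]

rng = np.random.default_rng(0)
true = np.concatenate([np.full(50, 1.0), np.full(50, -0.5)])  # one breakpoint
y = np.concatenate([[0.0], np.cumsum(true)]) + rng.normal(0, 0.3, 101)
est = map_slopes(y, np.linspace(-2, 2, 41), sigma=0.3)
```

The switch penalty plays the role of the prior on breakpoint occurrence: larger values favor fewer slope changes.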

  10. Slope Estimation in Noisy Piecewise Linear Functions.

    PubMed

    Ingle, Atul; Bucklew, James; Sethares, William; Varghese, Tomy

    2015-03-01

    This paper discusses the development of a slope estimation algorithm called MAPSlope for piecewise linear data that is corrupted by Gaussian noise. The number and locations of slope change points (also known as breakpoints) are assumed to be unknown a priori though it is assumed that the possible range of slope values lies within known bounds. A stochastic hidden Markov model that is general enough to encompass real world sources of piecewise linear data is used to model the transitions between slope values and the problem of slope estimation is addressed using a Bayesian maximum a posteriori approach. The set of possible slope values is discretized, enabling the design of a dynamic programming algorithm for posterior density maximization. Numerical simulations are used to justify choice of a reasonable number of quantization levels and also to analyze mean squared error performance of the proposed algorithm. An alternating maximization algorithm is proposed for estimation of unknown model parameters and a convergence result for the method is provided. Finally, results using data from political science, finance and medical imaging applications are presented to demonstrate the practical utility of this procedure.

  11. Canopy reflectance modelling of semiarid vegetation

    NASA Technical Reports Server (NTRS)

    Franklin, Janet

    1994-01-01

    Three different types of remote sensing algorithms for estimating vegetation amount and other land surface biophysical parameters were tested for semiarid environments. These included statistical linear models, the Li-Strahler geometric-optical canopy model, and linear spectral mixture analysis. The two study areas were the National Science Foundation's Jornada Long Term Ecological Research site near Las Cruces, NM, in the northern Chihuahuan desert, and the HAPEX-Sahel site near Niamey, Niger, in West Africa, comprising semiarid rangeland and subtropical crop land. The statistical approach (simple and multiple regression) resulted in high correlations between SPOT satellite spectral reflectance and shrub and grass cover, although these correlations varied with the spatial scale of aggregation of the measurements. The Li-Strahler model produced estimates of shrub size and density for both study sites with large standard errors. In the Jornada, the estimates were accurate enough to be useful for characterizing structural differences among three shrub strata. In Niger, the range of shrub cover and size in short-fallow shrublands is so low that the necessity of spatially distributed estimation of shrub size and density is questionable. Spectral mixture analysis of multiscale, multitemporal, multispectral radiometer data and imagery for Niger showed a positive relationship between fractions of spectral endmembers and surface parameters of interest including soil cover, vegetation cover, and leaf area index.

  12. Habitat use of woodpeckers in the Big Woods of eastern Arkansas

    USGS Publications Warehouse

    Krementz, David G.; Lehnen, Sarah E.; Luscier, J.D.

    2012-01-01

    The Big Woods of eastern Arkansas contain some of the highest densities of woodpeckers recorded within bottomland hardwood forests of the southeastern United States. A better understanding of habitat use patterns by these woodpeckers is a priority for conservationists seeking to maintain these high densities in the Big Woods and the Lower Mississippi Alluvial Valley as a whole. Hence, we used linear mixed-effects and linear models to estimate the importance of habitat characteristics to woodpecker density in the Big Woods during the breeding seasons of 2006 and 2007 and the winter of 2007. Northern flicker Colaptes auratus density was negatively related to tree density both for moderate (≥25 cm diameter at breast height) and larger trees (>61 cm diameter at breast height). Red-headed woodpeckers Melanerpes erythrocephalus also had a negative relationship with density of large (>61 cm diameter at breast height) trees. Bark disfiguration (an index of tree health) was negatively related to red-bellied woodpecker Melanerpes carolinus and yellow-bellied sapsucker Sphyrapicus varius densities. No measured habitat variables explained pileated woodpecker Dryocopus pileatus density. Overall, the high densities of woodpeckers observed in our study suggest that the current forest management of the Big Woods of Arkansas is meeting the nesting, roosting, and foraging requirements for these birds.

  13. Energy boost in laser wakefield accelerators using sharp density transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Döpp, A.; Guillaume, E.; Thaury, C.

    The energy gain in laser wakefield accelerators is limited by dephasing between the driving laser pulse and the highly relativistic electrons in its wake. Since this phase depends on both the driver and the cavity length, the effects of dephasing can be mitigated with appropriate tailoring of the plasma density along propagation. Preceding studies have discussed the prospects of continuous phase-locking in the linear wakefield regime. However, most experiments are performed in the highly non-linear regime and rely on self-guiding of the laser pulse. Due to the complexity of the driver evolution in this regime, it is much more difficult to achieve phase locking. As an alternative, we study the scenario of rapid rephasing in sharp density transitions, as was recently demonstrated experimentally. Starting from a phenomenological model, we deduce expressions for the electron energy gain in such density profiles. The results are in accordance with particle-in-cell simulations, and we present gain estimations for single and multiple stages of rephasing.

  14. Four-Component Damped Density Functional Response Theory Study of UV/Vis Absorption Spectra and Phosphorescence Parameters of Group 12 Metal-Substituted Porphyrins.

    PubMed

    Fransson, Thomas; Saue, Trond; Norman, Patrick

    2016-05-10

    The influences of group 12 (Zn, Cd, Hg) metal-substitution on the valence spectra and phosphorescence parameters of porphyrins (P) have been investigated in a relativistic setting. In order to obtain valence spectra, this study reports the first application of the damped linear response function, or complex polarization propagator, in the four-component density functional theory framework [as formulated in Villaume et al., J. Chem. Phys. 2010, 133, 064105]. It is shown that the steep increase in the density of states due to the inclusion of spin-orbit coupling yields only minor changes in overall computational costs involved with the solution of the set of linear response equations. Comparing single-frequency to multifrequency spectral calculations, it is noted that the number of iterations in the iterative linear equation solver per frequency grid-point decreases monotonically from 30 to 0.74 as the number of frequency points goes from one to 19. The main heavy-atom effect on the UV/vis-absorption spectra is indirect and attributed to the change of point group symmetry due to metal-substitution, and it is noted that substitutions using heavier atoms yield small red-shifts of the intense Soret-band. Concerning phosphorescence parameters, the adoption of a four-component relativistic setting enables the calculation of such properties at linear order of response theory, and any higher-order response functions do not need to be considered; a real, conventional form of linear response theory has been used for the calculation of these parameters. For the substituted porphyrins, electronic coupling between the lowest triplet states is strong and results in theoretical estimates of lifetimes that are sensitive to the wave function and electron density parametrization.
With this in mind, we report our best estimates of the phosphorescence lifetimes to be 460, 13.8, 11.2, and 0.00155 s for H2P, ZnP, CdP, and HgP, respectively, with the corresponding transition energies being equal to 1.46, 1.50, 1.38, and 0.89 eV.

  15. Bose-Einstein condensation of the classical axion field in cosmology?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Sacha; Elmer, Martin, E-mail: s.davidson@ipnl.in2p3.fr, E-mail: m.elmer@ipnl.in2p3.fr

    The axion is a motivated cold dark matter candidate, which it would be interesting to distinguish from weakly interacting massive particles. Sikivie has suggested that axions could behave differently during non-linear galaxy evolution, if they form a Bose-Einstein condensate, and argues that ''gravitational thermalisation'' drives them to a Bose-Einstein condensate during the radiation dominated era. Using classical equations of motion during linear structure formation, we explore whether the gravitational interactions of axions can generate enough entropy. At linear order in G_N, we interpret that the principal activities of gravity are to expand the Universe and grow density fluctuations. To quantify the rate of entropy creation we use the anisotropic stress to estimate a short dissipation scale for axions which does not confirm previous estimates of their gravitational thermalisation rate.

  16. Nonlinear Thermal Instability in Compressible Viscous Flows Without Heat Conductivity

    NASA Astrophysics Data System (ADS)

    Jiang, Fei

    2018-04-01

    We investigate the thermal instability of a smooth equilibrium state, in which the density function satisfies Schwarzschild's (instability) condition, of a compressible viscous flow without heat conductivity in the presence of a uniform gravitational field in a three-dimensional bounded domain. We show that the equilibrium state is linearly unstable by a modified variational method. Then, based on the constructed linearly unstable solutions and a local well-posedness result of classical solutions to the original nonlinear problem, we further construct the initial data of linearly unstable solutions to be the one of the original nonlinear problem, and establish an appropriate energy estimate of Gronwall-type. With the help of the established energy estimate, we finally show that the equilibrium state is nonlinearly unstable in the sense of Hadamard by a careful bootstrap instability argument.

  17. Onset of density-driven instabilities in fractured aquifers

    NASA Astrophysics Data System (ADS)

    Jafari Raad, Seyed Mostafa; Hassanzadeh, Hassan

    2018-04-01

    Linear stability analysis is conducted to study the onset of density-driven convection involved in solubility trapping of CO2 in fractured aquifers. The effect of the physical properties of a fracture network on the stability of a diffusive boundary layer in saturated fractured porous media is investigated using the dual porosity concept. Linear stability analysis results show that both fracture interporosity flow and fracture storativity play an important role in the stability behavior of the system. It is shown that a diffusive boundary layer under the gravity field in fractured porous media with lower fracture storativity and/or higher fracture interporosity flow coefficient is more stable. We present scaling relations for the onset of convective instability in fractured aquifers with single and variable matrix block size distributions. These findings improve our understanding of density-driven flow in fractured aquifers and are important in the estimation of potential storage capacity, risk assessment, and storage site characterization and screening.

  18. Schistosomiasis Breeding Environment Situation Analysis in Dongting Lake Area

    NASA Astrophysics Data System (ADS)

    Li, Chuanrong; Jia, Yuanyuan; Ma, Lingling; Liu, Zhaoyan; Qian, Yonggang

    2013-01-01

    Monitoring the environmental characteristics, such as vegetation and soil moisture, that shape the spatial/temporal distribution of Oncomelania hupensis (O. hupensis) is of vital importance to schistosomiasis prevention and control. In this study, the relationship between environmental factors derived from remotely sensed data and O. hupensis density was first analyzed by a multiple linear regression model. Secondly, spatial analysis of the regression residual was investigated by the semi-variogram method. Thirdly, the spatial analysis of the regression residual and the multiple linear regression model were both employed to estimate the spatial variation of O. hupensis density. Finally, the approach was used to monitor and predict the spatial and temporal variations of O. hupensis in the Dongting Lake region, China. The areas of potential O. hupensis habitats were predicted, and the influence of the Three Gorges Dam (TGB) project on O. hupensis density was analyzed.
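    The semi-variogram step can be sketched with the standard empirical estimator, gamma(h) = 0.5 E[(z_i - z_j)^2] over point pairs separated by lag h. The 1-D coordinates and residuals below are synthetic stand-ins, not snail-survey data:

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs with |d_ij - h| < tol."""
    dx = np.abs(coords[:, None] - coords[None, :])
    dz2 = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (np.abs(dx - h) < tol) & (dx > 0)
        gamma.append(0.5 * dz2[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 100, 300))
# smooth spatial signal plus nugget noise: nearby residuals are similar
z = np.sin(x / 10.0) + rng.normal(0, 0.1, x.size)
gamma = empirical_semivariogram(x, z, lags=np.array([2.0, 10.0, 30.0]), tol=1.0)
```

Spatially structured residuals show semivariance rising with lag toward a sill; a flat variogram would indicate no residual spatial dependence left after the regression.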

  19. Modelling detectability of kiore (Rattus exulans) on Aguiguan, Mariana Islands, to inform possible eradication and monitoring efforts

    USGS Publications Warehouse

    Adams, A.A.Y.; Stanford, J.W.; Wiewel, A.S.; Rodda, G.H.

    2011-01-01

    Estimating the detection probability of introduced organisms during the pre-monitoring phase of an eradication effort can be extremely helpful in informing eradication and post-eradication monitoring efforts, but this step is rarely taken. We used data collected during 11 nights of mark-recapture sampling on Aguiguan, Mariana Islands, to estimate introduced kiore (Rattus exulans Peale) density and detection probability, and evaluated factors affecting detectability to help inform possible eradication efforts. Modelling of 62 captures of 48 individuals resulted in a model-averaged density estimate of 55 kiore/ha. Kiore detection probability was best explained by a model allowing neophobia to diminish linearly (i.e. capture probability increased linearly) until occasion 7, with additive effects of sex and cumulative rainfall over the prior 48 hours. Detection probability increased with increasing rainfall and females were up to three times more likely than males to be trapped. In this paper, we illustrate the type of information that can be obtained by modelling mark-recapture data collected during pre-eradication monitoring and discuss the potential of using these data to inform eradication and post-eradication monitoring efforts. © New Zealand Ecological Society.

  20. Iterative initial condition reconstruction

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias

    2017-07-01

    Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc-1. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc-1, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.

  1. Conditional random slope: A new approach for estimating individual child growth velocity in epidemiological research.

    PubMed

    Leung, Michael; Bassani, Diego G; Racine-Poon, Amy; Goldenberg, Anna; Ali, Syed Asad; Kang, Gagandeep; Premkumar, Prasanna S; Roth, Daniel E

    2017-09-10

    Conditioning child growth measures on baseline accounts for regression to the mean (RTM). Here, we present the "conditional random slope" (CRS) model, based on a linear mixed-effects model that incorporates a baseline-time interaction term and can accommodate multiple data points for a child while also directly accounting for RTM. In two birth cohorts, we applied five approaches to estimate child growth velocities from 0 to 12 months to assess the effect of increasing data density (number of measures per child) on the magnitude of RTM of unconditional estimates, and the correlation and concordance between the CRS and four alternative metrics. Further, we demonstrated the differential effect of the choice of velocity metric on the magnitude of the association between infant growth and stunting at 2 years. RTM was minimally attenuated by increasing data density for unconditional growth modeling approaches. CRS and classical conditional models gave nearly identical estimates with two measures per child. Compared to the CRS estimates, unconditional metrics had moderate correlation (r = 0.65-0.91), but poor agreement in the classification of infants with relatively slow growth (kappa = 0.38-0.78). Estimates of the velocity-stunting association were the same for CRS and classical conditional models but differed substantially between conditional versus unconditional metrics. The CRS can leverage the flexibility of linear mixed models while addressing RTM in longitudinal analyses. © 2017 The Authors. American Journal of Human Biology published by Wiley Periodicals, Inc.
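    The fixed-effects part of the CRS idea, a baseline × time interaction, can be sketched with ordinary least squares on synthetic growth data. This is a simplification: the full CRS also carries random child-level intercepts and slopes, which need a dedicated mixed-model library, and all parameter values here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_child, n_obs = 200, 4
baseline = rng.normal(0, 1, n_child)               # standardized size at birth
t = np.tile(np.linspace(0, 12, n_obs), n_child)    # months, 4 visits per child
b = np.repeat(baseline, n_obs)
# built-in regression to the mean: children starting high grow more slowly
slope = 0.5 - 0.2 * baseline
y = b + np.repeat(slope, n_obs) * t + rng.normal(0, 0.3, n_child * n_obs)

# fixed-effects skeleton of the CRS: y ~ baseline + time + baseline:time
X = np.column_stack([np.ones_like(t), b, t, b * t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates how baseline modifies velocity (true value -0.2 here)
```

A negative interaction coefficient is exactly the RTM signature the conditional model is designed to absorb, so a child's velocity estimate is judged relative to its baseline rather than against the raw average.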

  2. Observation of Rayleigh-Taylor-instability evolution in a plasma with magnetic and viscous effects

    DOE PAGES

    Adams, Colin S.; Moser, Auna L.; Hsu, Scott C.

    2015-11-06

    We present time-resolved observations of Rayleigh-Taylor-instability (RTI) evolution at the interface of an unmagnetized plasma jet colliding with a stagnated, magnetized plasma. The observed instability growth time (~10 μs) is consistent with the estimated linear RTI growth rate calculated using experimentally inferred values of density (~10^14 cm^-3) and deceleration (~10^9 m/s^2). The observed mode wavelength (≳1 cm) nearly doubles within a linear growth time. Furthermore, theoretical estimates of magnetic and viscous stabilization and idealized magnetohydrodynamic simulations including a physical viscosity model both suggest that the observed instability evolution is subject to magnetic and/or viscous effects.
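    The quoted numbers can be cross-checked against the classical linear RTI growth rate, gamma = sqrt(A k g). This back-of-envelope sketch assumes an Atwood number of order unity (an assumption, not a value from the abstract) and uses the ~1 cm wavelength and ~10^9 m/s^2 deceleration quoted above:

```python
import math

g = 1e9              # deceleration, m/s^2 (from the abstract)
wavelength = 0.01    # observed mode wavelength ~1 cm, in meters
A = 1.0              # Atwood number, assumed ~1 for a jet against stagnated plasma

k = 2 * math.pi / wavelength        # wavenumber, 1/m
gamma = math.sqrt(A * k * g)        # classical linear RTI growth rate, 1/s
tau = 1.0 / gamma                   # e-folding time, s
```

The resulting e-folding time is of order a microsecond, i.e. within an order of magnitude of the observed ~10 μs growth time, consistent with the abstract's claim once magnetic and viscous corrections are allowed for.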

  3. Two is better than one: joint statistics of density and velocity in concentric spheres as a cosmological probe

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Codis, S.; Hahn, O.; Pichon, C.; Bernardeau, F.

    2017-08-01

    The analytical formalism to obtain the probability distribution functions (PDFs) of spherically averaged cosmic densities and velocity divergences in the mildly non-linear regime is presented. A large-deviation principle is applied to those cosmic fields assuming their most likely dynamics in spheres is set by the spherical collapse model. We validate our analytical results using state-of-the-art dark matter simulations with a phase-space resolved velocity field, finding a 2 per cent level agreement for a wide range of velocity divergences and densities in the mildly non-linear regime (~10 Mpc h-1 at redshift zero), usually inaccessible to perturbation theory. From the joint PDF of densities and velocity divergences measured in two concentric spheres, we extract with the same accuracy velocity profiles and the conditional velocity PDF subject to a given over/underdensity that are of interest to understand the non-linear evolution of velocity flows. Both PDFs are used to build a simple but accurate maximum likelihood estimator for the redshift evolution of the variance of both the density and velocity divergence fields, which have smaller relative errors than their sample variances when non-linearities appear. Given the dependence of the velocity divergence on the growth rate, there is a significant gain in using the full knowledge of both PDFs to derive constraints on the equation of state of dark energy. Thanks to the insensitivity of the velocity divergence to bias, its PDF can be used to obtain unbiased constraints on the growth of structures (σ8, f) or it can be combined with the galaxy density PDF to extract bias parameters.

  4. Cassini RSS occultation observations of density waves in Saturn's rings

    NASA Astrophysics Data System (ADS)

    McGhee, C. A.; French, R. G.; Marouf, E. A.; Rappaport, N. J.; Schinder, P. J.; Anabtawi, A.; Asmar, S.; Barbinis, E.; Fleischman, D.; Goltz, G.; Johnston, D.; Rochblatt, D.

    2005-08-01

    On May 3, 2005, the first of a series of eight nearly diametric occultations by Saturn's rings and atmosphere took place, observed by the Cassini Radio Science (RSS) team. Simultaneous high SNR measurements at the Deep Space Network (DSN) at S, X, and Ka bands (λ = 13, 3.6, and 0.9 cm) have provided a remarkably detailed look at the radial structure and particle scattering behavior of the rings. By virtue of the relatively large ring opening angle (B = -23.6°), the slant path optical depth of the rings was much lower than during the Voyager epoch (B = 5.9°), making it possible to detect many density waves and other ring features in the Cassini RSS data that were lost in the noise in the Voyager RSS experiment. Ultimately, diffraction correction of the ring optical depth profiles will yield radial resolution as small as tens of meters for the highest SNR data. At Ka band, the Fresnel scale is only 1--1.5 km, and thus even without diffraction correction, the ring profiles show a stunning array of density waves. The A ring is replete with dozens of Pandora and Prometheus inner Lindblad resonance features, and the Janus 2:1 density wave in the B ring is revealed with exceptional clarity for the first time at radio wavelengths. Weaker waves are abundant as well, and multiple occultation chords sample a variety of wave phases. We estimate the surface mass density of the rings from linear density wave models of the weaker waves. For stronger waves, non-linear models are required, providing more accurate estimates of the wave dispersion relation, the ring surface mass density, and the angular momentum exchange between the rings and satellite. We thank the DSN staff for their superb support of these complex observations.

  5. Geometric characterization and simulation of planar layered elastomeric fibrous biomaterials

    DOE PAGES

    Carleton, James B.; D’Amore, Antonio; Feaver, Kristen R.; ...

    2014-10-13

Many important biomaterials are composed of multiple layers of networked fibers. While there is growing interest in modeling and simulation of the mechanical response of these biomaterials, a theoretical foundation for such simulations has yet to be firmly established. Moreover, correctly identifying and matching key geometric features is a critically important first step for performing reliable mechanical simulations. This paper addresses these issues in two ways. First, using methods of geometric probability, we develop theoretical estimates for the mean linear and areal fiber intersection densities for 2-D fibrous networks. These densities are expressed in terms of the fiber density and the orientation distribution function, both of which are relatively easy-to-measure properties. Second, we develop a random walk algorithm for geometric simulation of 2-D fibrous networks which can accurately reproduce the prescribed fiber density and orientation distribution function. Furthermore, the linear and areal fiber intersection densities obtained with the algorithm are in agreement with the theoretical estimates. Both theoretical and computational results are compared with those obtained by post-processing of scanning electron microscope images of actual scaffolds. These comparisons reveal difficulties inherent to resolving fine details of multilayered fibrous networks. Finally, the methods presented herein provide a rational means to define and generate key geometric features from experimentally measured or prescribed scaffold structural data.

  6. Hydrogen bonding between nitriles and hydrogen halides and the topological properties of molecular charge distributions

    NASA Astrophysics Data System (ADS)

    Boyd, Russell J.; Choi, Sai Cheng

    1986-08-01

The topological properties of the charge density of the hydrogen-bonded complexes between nitriles and hydrogen chloride correlate linearly with theoretical estimates of the hydrogen-bond energy. At the 6-31G** level, the hydrogen-bond energies range from a low of 10 kJ/mol in NCCN—HCl to a high of 38 kJ/mol in LiCN—HCl. A linear relationship between the charge density at the hydrogen-bond critical point and the NH internuclear distance of the RCN—HCl complexes indicates that the generalization of the bond-length-bond-order relationship of CC bonds due to Bader, Tang, Tal and Biegler-König can be extended to intermolecular hydrogen bonding.

  7. Density scaling on n = 1 error field penetration in ohmically heated discharges in EAST

    NASA Astrophysics Data System (ADS)

    Wang, Hui-Hui; Sun, You-Wen; Shi, Tong-Hui; Zang, Qing; Liu, Yue-Qiang; Yang, Xu; Gu, Shuai; He, Kai-Yang; Gu, Xiang; Qian, Jin-Ping; Shen, Biao; Luo, Zheng-Ping; Chu, Nan; Jia, Man-Ni; Sheng, Zhi-Cai; Liu, Hai-Qing; Gong, Xian-Zu; Wan, Bao-Nian; Contributors, EAST

    2018-05-01

Density scaling of error field penetration in EAST is investigated with different n = 1 magnetic perturbation coil configurations in ohmically heated discharges. The density scalings of the error field penetration thresholds under two magnetic perturbation spectra are b_r ∝ n_e^0.5 and b_r ∝ n_e^0.6, where b_r is the error field and n_e is the line-averaged electron density. One difficulty in understanding the density scaling is that key parameters other than density that determine the field penetration process may also change when the plasma density changes; they should therefore be determined from experiments. The theoretical estimates (b_r ∝ n_e^0.54 in the lower density region and b_r ∝ n_e^0.40 in the higher density region), using the density dependence of the viscosity diffusion time, electron temperature and mode frequency measured in the experiments, are consistent with the observed scaling. One of the key points in reproducing the observed scaling in EAST is that the viscosity diffusion time estimated from the energy confinement time is almost constant. This means that the plasma confinement lies in the saturated ohmic confinement regime rather than in the linear Neo-Alcator regime, which caused the weak density dependence found in previous theoretical studies.
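A power-law scaling exponent of the kind reported above can be recovered from threshold measurements by ordinary linear regression in log-log space. A minimal sketch with synthetic, noiseless data (all names and numbers are ours, not EAST measurements):

```python
import numpy as np

# Recover alpha in b_r ∝ n_e^alpha by fitting a straight line to log-log data.
def fit_power_law(n_e, b_r):
    """Return (alpha, prefactor) for b_r = C * n_e**alpha."""
    alpha, log_c = np.polyfit(np.log(n_e), np.log(b_r), 1)
    return alpha, np.exp(log_c)

# Synthetic "measured" penetration thresholds generated with alpha = 0.5
n_e = np.linspace(1.0, 5.0, 20)     # line-averaged density (arbitrary units)
b_r = 2.0 * n_e**0.5                # error-field threshold (arbitrary units)

alpha, c = fit_power_law(n_e, b_r)
print(round(alpha, 3), round(c, 3))  # → 0.5 2.0
```

With real thresholds the fitted exponent carries the sampling noise of the data, which is why the abstract stresses measuring the other density-dependent parameters independently.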

  8. Statistical guides to estimating the number of undiscovered mineral deposits: an example with porphyry copper deposits

    USGS Publications Warehouse

    Singer, Donald A.; Menzie, W.D.; Cheng, Qiuming; Bonham-Carter, G. F.

    2005-01-01

    Estimating numbers of undiscovered mineral deposits is a fundamental part of assessing mineral resources. Some statistical tools can act as guides to low variance, unbiased estimates of the number of deposits. The primary guide is that the estimates must be consistent with the grade and tonnage models. Another statistical guide is the deposit density (i.e., the number of deposits per unit area of permissive rock in well-explored control areas). Preliminary estimates and confidence limits of the number of undiscovered deposits in a tract of given area may be calculated using linear regression and refined using frequency distributions with appropriate parameters. A Poisson distribution leads to estimates having lower relative variances than the regression estimates and implies a random distribution of deposits. Coefficients of variation are used to compare uncertainties of negative binomial, Poisson, or MARK3 empirical distributions that have the same expected number of deposits as the deposit density. Statistical guides presented here allow simple yet robust estimation of the number of undiscovered deposits in permissive terranes. 
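The abstract's comparison of uncertainties via coefficients of variation can be sketched directly, since for a fixed expected number of deposits the Poisson and negative binomial CVs follow from their variance formulas. The expected count and dispersion below are invented for illustration:

```python
import math

# Coefficient of variation (CV = sd/mean) for two candidate count distributions
# constrained to the same expected number of undiscovered deposits.
def cv_poisson(mean):
    # Poisson: variance equals the mean
    return math.sqrt(mean) / mean

def cv_neg_binomial(mean, k):
    # Negative binomial: variance = mean + mean**2 / k (k = dispersion)
    return math.sqrt(mean + mean**2 / k) / mean

expected_deposits = 9.0   # hypothetical tract estimate from deposit density
print(round(cv_poisson(expected_deposits), 3))            # → 0.333
print(round(cv_neg_binomial(expected_deposits, 4.0), 3))  # → 0.601
```

The negative binomial always has the larger CV at the same mean, which is the sense in which the Poisson assumption yields lower relative variance, at the cost of assuming spatially random deposits.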

  9. Model-based estimation of breast percent density in raw and processed full-field digital mammography images from image-acquisition physics and patient-image characteristics

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina

    2012-03-01

Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and post-processed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed formats. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist-assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
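The two agreement metrics used above are standard and easy to compute from scratch. A toy illustration with invented model/reader values (not the study's data): Pearson's r for continuous PD% estimates and Cohen's kappa for categorical density agreement.

```python
from collections import Counter

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def cohens_kappa(r1, r2):
    """Chance-corrected categorical agreement between two raters."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n**2            # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical continuous PD% estimates (model vs radiologist):
model = [10.0, 22.0, 35.0, 18.0, 50.0]
reader = [12.0, 20.0, 33.0, 21.0, 47.0]
print(round(pearson_r(model, reader), 3))      # → 0.993

# Hypothetical categorical density assignments:
cats_model = ["a", "b", "b", "c", "d", "a", "b", "c"]
cats_reader = ["a", "b", "c", "c", "d", "a", "b", "b"]
print(round(cohens_kappa(cats_model, cats_reader), 3))  # → 0.652
```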

  10. Transport coefficients of hard-sphere mixtures. II. Diameter ratio 0. 4 and mass ratio 0. 03 at low density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erpenbeck, J.J.

    1992-02-15

The transport coefficients of shear viscosity, thermal conductivity, thermal diffusion, and mutual diffusion are estimated for a binary, equimolar mixture of hard spheres having a diameter ratio of 0.4 and a mass ratio of 0.03 at volumes of 5V_0, 10V_0, and 20V_0 (where V_0 = (1/2)√2 N Σ_a x_a σ_a^3, the x_a are mole fractions, the σ_a are diameters, and N is the number of particles) through Monte Carlo molecular-dynamics calculations using the Green-Kubo formulas. Calculations are reported for as few as 108 and as many as 4000 particles, but not for each value of the volume. Both finite-system and long-time-tail corrections are applied to obtain estimates of the transport coefficients in the thermodynamic limit; corrections of both types are found to be small. The results are compared with the predictions of the revised Enskog theory and the linear density corrections to that theory are reported. The mean free time is also computed as a function of density and the linear and quadratic corrections to the Boltzmann theory are estimated. The mean free time is also compared with the expression from the Mansoori-Carnahan-Starling-Leland equation of state.
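The Green-Kubo route mentioned above obtains each transport coefficient as the time integral of an equilibrium autocorrelation function. An illustrative sketch with a model exponential ACF whose exact integral is known (not the paper's hard-sphere data):

```python
import numpy as np

def green_kubo(acf, dt):
    """Trapezoidal time integral of a sampled autocorrelation function."""
    return float(dt * (0.5 * (acf[0] + acf[-1]) + acf[1:-1].sum()))

t = np.linspace(0.0, 50.0, 5001)
tau = 2.0
acf = np.exp(-t / tau)          # model ACF; its exact time integral is tau

D = green_kubo(acf, dt=t[1] - t[0])
print(round(D, 3))              # → 2.0
```

In practice the ACF comes from molecular dynamics and decays with statistical noise and long-time tails, which is why the paper applies finite-system and long-time-tail corrections before taking the thermodynamic limit.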

  11. Joint constraints on galaxy bias and σ_8 through the N-pdf of the galaxy number density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio

We present a full description of the N-probability density function (N-pdf) of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the commonly adopted assumption that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ_8). It also provides the proper framework to perform model selection between two competing hypotheses. The parameter estimation capabilities of the N-pdf are proved by SDSS-like simulations (both ideal log-normal simulations and mocks obtained from the Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M_r ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄_8 = 0.862 ± 0.080, for galaxy number density fluctuations in cells of size 30 h^−1 Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.

  12. Lightning energetics: Estimates of energy dissipation in channels, channel radii, and channel-heating risetimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borovsky, J.E.

    1998-05-01

In this report, several lightning-channel parameters are calculated with the aid of an electrodynamic model of lightning. The electrodynamic model describes dart leaders and return strokes as electromagnetic waves that are guided along conducting lightning channels. According to the model, electrostatic energy is delivered to the channel by a leader, where it is stored around the outside of the channel; subsequently, the return stroke dissipates this locally stored energy. In this report this lightning-energy-flow scenario is developed further. Then the energy dissipated per unit length in lightning channels is calculated, where this quantity is now related to the linear charge density on the channel, not to the cloud-to-ground electrostatic potential difference. Energy conservation is then used to calculate the radii of lightning channels: their initial radii at the onset of return strokes and their final radii after the channels have pressure expanded. Finally, the risetimes for channel heating during return strokes are calculated by defining an energy-storage radius around the channel and by estimating the radial velocity of energy flow toward the channel during a return stroke. In three appendices, values for the linear charge densities on lightning channels are calculated, estimates of the total length of branch channels are obtained, and values for the cloud-to-ground electrostatic potential difference are estimated. © 1998 American Geophysical Union

  13. Implicit filtered P_N for high-energy density thermal radiation transport using discontinuous Galerkin finite elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laboure, Vincent M., E-mail: vincent.laboure@tamu.edu; McClarren, Ryan G., E-mail: rgm@tamu.edu; Hauck, Cory D., E-mail: hauckc@ornl.gov

    2016-09-15

In this work, we provide a fully-implicit implementation of the time-dependent, filtered spherical harmonics (FP_N) equations for non-linear, thermal radiative transfer. We investigate local filtering strategies and analyze the effect of the filter on the conditioning of the system, showing in particular that the filter improves the convergence properties of the iterative solver. We also investigate numerically the rigorous error estimates derived in the linear setting, to determine whether they hold also for the non-linear case. Finally, we simulate a standard test problem on an unstructured mesh and make comparisons with implicit Monte Carlo (IMC) calculations.

  14. Estimation of Mesospheric Densities at Low Latitudes Using the Kunming Meteor Radar Together With SABER Temperatures

    NASA Astrophysics Data System (ADS)

    Yi, Wen; Xue, Xianghui; Reid, Iain M.; Younger, Joel P.; Chen, Jinsong; Chen, Tingdi; Li, Na

    2018-04-01

Neutral mesospheric densities at a low latitude have been derived during April 2011 to December 2014 using data from the Kunming meteor radar in China (25.6°N, 103.8°E). The daily mean density at 90 km was estimated using the ambipolar diffusion coefficients from the meteor radar and temperatures from the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument. The seasonal variations of the meteor radar-derived density are consistent with the density from the Mass Spectrometer and Incoherent Scatter (MSIS) model and show a dominant annual variation, with a maximum during winter and a minimum during summer. A simple linear model was used to separate the effects of atmospheric density and the meteor velocity on the meteor radar peak detection height. We find that a 1 km/s difference in the vertical meteor velocity yields a change of approximately 0.42 km in peak height. The strong correlation between the meteor radar density and the velocity-corrected peak height indicates that the meteor radar density estimates accurately reflect changes in neutral atmospheric density and that meteor peak detection heights, when adjusted for meteoroid velocity, can serve as a convenient tool for measuring density variations around the mesopause. A comparison of the ambipolar diffusion coefficient and peak height observed simultaneously by two co-located meteor radars indicates that the relative errors of the daily mean ambipolar diffusion coefficient and peak height should be less than 5% and 6%, respectively, and that the absolute error of the peak height is less than 0.2 km.
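The velocity correction described above amounts to subtracting the fitted velocity term from the measured peak height. A hedged sketch using the coefficient quoted in the abstract (0.42 km per km/s); the reference velocity and sample heights are our own illustrative values:

```python
SLOPE_KM_PER_KMS = 0.42   # peak-height sensitivity quoted in the abstract
V_REF = 35.0              # hypothetical reference vertical velocity, km/s

def velocity_corrected_height(peak_height_km, velocity_kms):
    """Remove the meteoroid-velocity contribution from a measured peak height,
    so residual height variations track neutral atmospheric density."""
    return peak_height_km - SLOPE_KM_PER_KMS * (velocity_kms - V_REF)

print(round(velocity_corrected_height(90.0, 35.0), 2))  # → 90.0
print(round(velocity_corrected_height(91.0, 37.0), 2))  # → 90.16
```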

  15. Bose Condensation at He-4 Interfaces

    NASA Technical Reports Server (NTRS)

    Draeger, E. W.; Ceperley, D. M.

    2003-01-01

Path Integral Monte Carlo was used to calculate the Bose-Einstein condensate fraction at the surface of a helium film at T = 0.77 K, as a function of density. Moving from the center of the slab to the surface, the condensate fraction was found to initially increase with decreasing density to a maximum value of 0.9, before decreasing. Long wavelength density correlations were observed in the static structure factor at the surface of the slab. A surface dispersion relation was calculated from imaginary-time density-density correlations. Similar calculations of the superfluid density throughout He-4 droplets doped with linear impurities (HCN)_n are presented. After deriving a local estimator for the superfluid density distribution, we find a decreased superfluid response in the first solvation layer. This effective normal fluid exhibits temperature dependence similar to that of a two-dimensional helium system.

  16. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  17. Functional differentiability in time-dependent quantum mechanics.

    PubMed

    Penz, Markus; Ruggenthaler, Michael

    2015-03-28

    In this work, we investigate the functional differentiability of the time-dependent many-body wave function and of derived quantities with respect to time-dependent potentials. For properly chosen Banach spaces of potentials and wave functions, Fréchet differentiability is proven. From this follows an estimate for the difference of two solutions to the time-dependent Schrödinger equation that evolve under the influence of different potentials. Such results can be applied directly to the one-particle density and to bounded operators, and present a rigorous formulation of non-equilibrium linear-response theory where the usual Lehmann representation of the linear-response kernel is not valid. Further, the Fréchet differentiability of the wave function provides a new route towards proving basic properties of time-dependent density-functional theory.

  18. On the Correlation Between Biomass and the P-Band Polarisation Phase Difference, and Its Potential for Biomass and Tree Number Density Estimation

    NASA Astrophysics Data System (ADS)

    Soja, Maciej J.; Blomberg, Erik; Ulander, Lars M. H.

    2015-04-01

In this paper, a significant correlation between the HH/VV phase difference (polarisation phase difference, PPD) and the above-ground biomass (AGB) is observed for incidence angles above 30° in airborne P-band SAR data acquired over two boreal test sites in Sweden. A geometric model is used to explain the dependence of the AGB on tree height, stem radius, and tree number density, whereas a cylinder-over-ground model is used to explain the dependence of the PPD on the same three forest parameters. The models show that forest anisotropy needs to be accounted for at P-band in order to obtain a linear relationship between the PPD and the AGB. An approach to the estimation of tree number density is proposed, based on a comparison between the modelled and observed PPDs.

  19. A cost-efficient method to assess carbon stocks in tropical peat soil

    NASA Astrophysics Data System (ADS)

    Warren, M. W.; Kauffman, J. B.; Murdiyarso, D.; Anshari, G.; Hergoualc'h, K.; Kurnianto, S.; Purbopuspito, J.; Gusmayanti, E.; Afifudin, M.; Rahajoe, J.; Alhamd, L.; Limin, S.; Iswandi, A.

    2012-11-01

Estimation of belowground carbon stocks in tropical wetland forests requires funding for laboratory analyses and suitable facilities, which are often lacking in developing nations where most tropical wetlands are found. It is therefore beneficial to develop simple analytical tools to assist belowground carbon estimation where financial and technical limitations are common. Here we use published and original data to describe soil carbon density (kg C m-3; Cd) as a function of bulk density (g cm-3; Bd), which can be used to rapidly estimate belowground carbon storage using Bd measurements only. Predicted carbon densities and stocks are compared with those obtained from direct carbon analysis for ten peat swamp forest stands in three national parks of Indonesia. Analysis of soil carbon density and bulk density from the literature indicated a strong linear relationship (Cd = Bd × 495.14 + 5.41, R2 = 0.93, n = 151) for soils with organic C content > 40%. As organic C content decreases, the relationship between Cd and Bd becomes less predictable as soil texture becomes an important determinant of Cd. The equation predicted belowground C stocks to within 0.92% to 9.57% of observed values. Average bulk density of collected peat samples was 0.127 g cm-3, which is in the upper range of previous reports for Southeast Asian peatlands. When original data were included, the revised equation Cd = Bd × 468.76 + 5.82, with R2 = 0.95 and n = 712, was slightly below the lower 95% confidence interval of the original equation, and tended to decrease Cd estimates. We recommend this last equation for a rapid estimation of soil C stocks for well-developed peat soils where C content > 40%.
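The recommended regression translates directly into a rapid field estimate. A sketch applying the equation reported in the abstract (Cd = Bd × 468.76 + 5.82, valid for peat with organic C > 40%); the peat depth below is an illustrative value, not from the study:

```python
def carbon_density(bulk_density_g_cm3):
    """Soil carbon density Cd (kg C m^-3) from bulk density Bd (g cm^-3),
    using the revised regression for well-developed peat (organic C > 40%)."""
    return 468.76 * bulk_density_g_cm3 + 5.82

def carbon_stock_per_area(bulk_density_g_cm3, depth_m):
    """Carbon stock per unit area (kg C m^-2) for a uniform peat layer."""
    return carbon_density(bulk_density_g_cm3) * depth_m

# Mean bulk density quoted in the abstract, with a hypothetical 3 m peat depth:
print(round(carbon_density(0.127), 1))               # → 65.4
print(round(carbon_stock_per_area(0.127, 3.0), 1))   # → 196.1
```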

  20. Estimating stand structure using discrete-return lidar: an example from low density, fire prone ponderosa pine forests

    USGS Publications Warehouse

    Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.

    2005-01-01

The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information-theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure. We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important, such as fire prone forests.
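An information-theoretic model comparison of the kind described above typically ranks candidate regressions by AIC. A hedged sketch using the standard least-squares form of the criterion; the residual sums of squares and plot count below are made up, not the study's values:

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit:
    n*ln(RSS/n) + 2k, with k fitted parameters and n observations."""
    return n * math.log(rss / n) + 2 * k

n_plots = 40
# Hypothetical residual sums of squares for two candidate lidar-metric models:
aic_two_metrics = aic(rss=18.0, n=n_plots, k=3)   # intercept + 2 metrics
aic_five_metrics = aic(rss=16.5, n=n_plots, k=6)  # intercept + 5 metrics

best = min(("two-metric", aic_two_metrics),
           ("five-metric", aic_five_metrics),
           key=lambda t: t[1])
print(best[0])   # → two-metric
```

Here the extra metrics reduce RSS, but not enough to offset the 2k penalty, so the more parsimonious model is selected.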

  1. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    DOE PAGES

    Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...

    2015-12-10

We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
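The linearizing transformation can be sketched for a generic Padé-type calibration curve. Assuming a form R = a·L/(1 + b·L) (our notation and parameter values, not the paper's), error-free data satisfy L/R = 1/a + (b/a)·L, which ordinary linear regression recovers exactly:

```python
import numpy as np

# Hypothetical calibration-curve parameters (illustrative only):
a_true, b_true = 50.0, 0.8
L = np.linspace(0.5, 5.0, 10)            # 235U linear density (arb. units)
R = a_true * L / (1.0 + b_true * L)      # noiseless coincidence rates

# Linearized fit: L/R is linear in L with slope b/a and intercept 1/a.
slope, intercept = np.polyfit(L, L / R, 1)
a_hat = 1.0 / intercept
b_hat = slope * a_hat
print(round(a_hat, 3), round(b_hat, 3))  # → 50.0 0.8
```

With noisy measured rates the transform also transforms the error structure, which is the paper's point: in that setting the untransformed nonlinear fit is preferable.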

  2. Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin

    2016-10-01

An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram and weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for soft-decision decoding of low-density parity-check (LDPC) codes. The method adapts well to the three main kinds of IM/DD optical channel, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel. The performance penalty of the channel estimation is negligible.
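The histogram-based LLR idea can be illustrated on a toy Gaussian channel (our own choice, not the paper's chi-square or Webb-Gaussian models): estimate the LLR from normalized histograms of the received samples under each bit, then fit a line over a well-populated region, reducing the abstract's weighted least-squares tail fit to a plain linear fit for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
y0 = rng.normal(0.0, 1.0, 200_000)      # received samples given bit 0
y1 = rng.normal(2.0, 1.0, 200_000)      # received samples given bit 1

bins = np.linspace(-4.0, 6.0, 101)
p0, _ = np.histogram(y0, bins, density=True)
p1, _ = np.histogram(y1, bins, density=True)
centers = 0.5 * (bins[:-1] + bins[1:])

ok = (p0 > 0) & (p1 > 0)                # bins where both histograms have mass
llr = np.log(p1[ok] / p0[ok])
c = centers[ok]

# Linear fit over an interior window; the fitted line can then extrapolate
# the LLR into the sparsely populated tails.
win = (c > 0.0) & (c < 2.0)
slope, intercept = np.polyfit(c[win], llr[win], 1)
print(round(slope, 1))   # the exact LLR slope for this toy channel is 2.0
```

For equal-variance Gaussians the true LLR is linear in the sample, so the fitted slope checks the estimator; for the non-Gaussian IM/DD channels the same machinery applies with a fit restricted to the tail regions.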

  3. Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals

    USGS Publications Warehouse

    Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew

    2011-01-01

Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km2 (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km from their home-range center. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.

  4. Stable Spheromaks with Profile Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, T K; Jayakumar, R

    A spheromak equilibrium with zero edge current is shown to be stable to both ideal MHD and tearing modes that normally produce Taylor relaxation in gun-injected spheromaks. This stable equilibrium differs from the stable Taylor state in that the current density j falls to zero at the wall. Estimates indicate that this current profile could be sustained by non-inductive current drive at acceptable power levels. Stability is determined using the NIMROD code for linear stability analysis. Non-linear NIMROD calculations with non-inductive current drive could point the way to improved fusion reactors.

  5. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
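The population-size estimation described above can be sketched for the simplest member of the family, a single zero-truncated Poisson (no mixing); the capture counts below are invented. The truncated-Poisson mean is λ/(1 − e^(−λ)), which a fixed-point iteration inverts, after which the Horvitz-Thompson estimator scales the observed count by the probability of being seen at all:

```python
import math

def fit_truncated_poisson(counts, iters=200):
    """Moment-based fit: solve mean = lam / (1 - exp(-lam)) for lam,
    i.e. lam = mean * (1 - exp(-lam)), by fixed-point iteration."""
    m = sum(counts) / len(counts)
    lam = m
    for _ in range(iters):
        lam = m * (1.0 - math.exp(-lam))
    return lam

def horvitz_thompson_n(counts):
    """Population size estimate N = n / P(count > 0) under the fitted model."""
    lam = fit_truncated_poisson(counts)
    p_seen = 1.0 - math.exp(-lam)     # probability a unit is counted at all
    return len(counts) / p_seen

# Invented capture counts for 60 observed individuals (zeros are unobservable):
counts = [1] * 40 + [2] * 15 + [3] * 5
n_hat = horvitz_thompson_n(counts)
print(len(counts), "<", round(n_hat, 1))   # estimate exceeds the observed 60
```

The article's point is that the same machinery extends to mixtures of truncated Poisson densities, where the NPMLE of the mixing distribution yields a unique population-size estimate.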

  6. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  7. Birds and insects as radar targets - A review

    NASA Technical Reports Server (NTRS)

    Vaughn, C. R.

    1985-01-01

    A review of radar cross-section measurements of birds and insects is presented. A brief discussion of some possible theoretical models is also given and comparisons made with the measurements. The comparisons suggest that most targets are, at present, better modeled by a prolate spheroid having a length-to-width ratio between 3 and 10 than by the often used equivalent weight water sphere. In addition, many targets observed with linear horizontal polarization have maximum cross sections much better estimated by a resonant half-wave dipole than by a water sphere. Also considered are birds and insects in the aggregate as a local radar 'clutter' source. Order-of-magnitude estimates are given for many reasonable target number densities. These estimates are then used to predict X-band volume reflectivities. Other topics that are of interest to the radar engineer are discussed, including the doppler bandwidth due to the internal motions of a single bird, the radar cross-section probability densities of single birds and insects, the variability of the functional form of the probability density functions, and the Fourier spectra of single birds and insects.

  8. Estimates of evapotranspiration in alkaline scrub and meadow communities of Owens Valley, California, using the Bowen-ratio, eddy-correlation, and Penman-combination methods

    USGS Publications Warehouse

    Duell, L. F. W.

    1988-01-01

    In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared to other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy budget measurements. Penman-combination potential ET estimates were determined to be unusable because they overestimated actual ET. Modification in the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix. Simple linear regression analysis indicates that ET estimates are correlated with air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 300 mm at a low-density scrub site to 1,100 mm at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied. (Author's abstract)

  9. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias.
In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended.
In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
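
    The grid-based estimator D-hat = N-hat/A-hat compared above can be sketched as follows; the grid size, abundance, and MMDM values are invented, and adding a full-MMDM boundary strip is one convention for the effective area among several in the literature:

```python
def grid_density(n_hat, grid_side_m, mmdm_m):
    """Density estimate for a square trapping grid: D-hat = N-hat / A-hat,
    where the effective area A-hat adds a boundary strip of width MMDM
    (the full mean maximum distance moved) around the grid."""
    side = grid_side_m + 2.0 * mmdm_m    # effective side length (m)
    area_ha = side * side / 10_000.0     # m^2 -> hectares
    return n_hat / area_ha               # animals per hectare

# hypothetical example: 145 m grid, 20 animals estimated, MMDM of 25 m
d_hat = grid_density(20, 145.0, 25.0)
```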

  10. Parenchymal Texture Analysis in Digital Breast Tomosynthesis for Breast Cancer Risk Estimation: A Preliminary Study

    PubMed Central

    Kontos, Despina; Bakic, Predrag R.; Carton, Ann-Katherine; Troxel, Andrea B.; Conant, Emily F.; Maidment, Andrew D.A.

    2009-01-01

    Rationale and Objectives Studies have demonstrated a relationship between mammographic parenchymal texture and breast cancer risk. Although promising, texture analysis in mammograms is limited by tissue superimposition. Digital breast tomosynthesis (DBT) is a novel tomographic x-ray breast imaging modality that alleviates the effect of tissue superimposition, offering superior parenchymal texture visualization compared to mammography. Our study investigates the potential advantages of DBT parenchymal texture analysis for breast cancer risk estimation. Materials and Methods DBT and digital mammography (DM) images of 39 women were analyzed. Texture features, shown in studies with mammograms to correlate with cancer risk, were computed from the retroareolar breast region. We compared the relative performance of DBT and DM texture features in correlating with two measures of breast cancer risk: (i) the Gail and Claus risk estimates, and (ii) mammographic breast density. Linear regression was performed to model the association between texture features and increasing levels of risk. Results No significant correlation was detected between parenchymal texture and the Gail and Claus risk estimates. Significant correlations were observed between texture features and breast density. Overall, the DBT texture features demonstrated stronger correlations with breast percent density (PD) than DM (p ≤ 0.05). When dividing our study population into groups of increasing breast PD, the DBT texture features appeared to be more discriminative, having regression lines with overall lower p-values, steeper slopes, and higher R² estimates. Conclusion Although preliminary, our results suggest that DBT parenchymal texture analysis could provide more accurate characterization of breast density patterns, which could ultimately improve breast cancer risk estimation. PMID:19201357

  11. Characterization of the Earwig, Doru lineare, as a Predator of Larvae of the Fall Armyworm, Spodoptera frugiperda: A Functional Response Study

    PubMed Central

    Sueldo, Mabel Romero; Bruzzone, Octavio A.; Virla, Eduardo G.

    2010-01-01

    Spodoptera frugiperda Smith (Lepidoptera: Noctuidae) is considered the most important pest of maize in almost all of tropical America. In Argentina, the earwig Doru lineare Eschscholtz (Dermaptera: Forficulidae) has been observed preying on S. frugiperda egg masses in corn crops, but no data about its potential role as a biocontrol agent of this pest have been provided. The predation efficiency of D. lineare on newly emerged S. frugiperda larvae was evaluated through a laboratory functional response study. D. lineare showed a type II functional response to S. frugiperda larval density, and disc equation estimates of searching efficiency and handling time were a = 0.374 and t = 182.9 s, respectively. Earwig satiation occurred at 39.4 S. frugiperda larvae. PMID:20575739
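
    The type II response reported above is Holling's disc equation; a minimal sketch using the published a and t values, with the total exposure time T an assumption, since the abstract does not state the trial duration or the units of a:

```python
def holling_type2(N, a, h, T):
    """Holling disc equation: attacks = a*N*T / (1 + a*h*N).

    N: prey density, a: searching efficiency (per unit time),
    h: handling time, T: total time available.
    The curve saturates at T/h as N grows large.
    """
    return a * N * T / (1.0 + a * h * N)

# published estimates for D. lineare on S. frugiperda larvae
a, h = 0.374, 182.9          # handling time in seconds
T = 24 * 3600.0              # assumed 24 h exposure, in seconds
attacks = [holling_type2(N, a, h, T) for N in (5, 10, 20, 40, 80)]
```

The decelerating rise toward the T/h asymptote is what distinguishes a type II response from the linear type I form.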

  12. Improving CTIPe neutral density response and recovery during geomagnetic storms

    NASA Astrophysics Data System (ADS)

    Fedrizzi, M.; Fuller-Rowell, T. J.; Codrescu, M.; Mlynczak, M. G.; Marsh, D. R.

    2013-12-01

    The temperature of the Earth's thermosphere can be substantially increased during geomagnetic storms, mainly due to high-latitude Joule heating induced by magnetospheric convection and auroral particle precipitation. Thermospheric heating increases atmospheric density and the drag on low-Earth orbiting satellites. The main cooling mechanism controlling the recovery of neutral temperature and density following geomagnetic activity is infrared emission from nitric oxide (NO) at 5.3 micrometers. NO is produced by both solar and auroral activity, the first due to solar EUV and X-rays, the second due to dissociation of N2 by particle precipitation, and has a typical lifetime of 12 to 24 hours in the mid and lower thermosphere. NO cooling in the thermosphere peaks between 150 and 200 km altitude. In this study, a global, three-dimensional, time-dependent, non-linear coupled model of the thermosphere, ionosphere, plasmasphere, and electrodynamics (CTIPe) is used to simulate the response and recovery timescales of the upper atmosphere following geomagnetic activity. CTIPe uses time-dependent estimates of NO obtained from the Marsh et al. [2004] empirical model based on Student Nitric Oxide Explorer (SNOE) satellite data rather than solving for minor species photochemistry self-consistently. This empirical model is based solely on SNOE observations, when Kp rarely exceeded 5. During conditions between Kp 5 and 9, a linear extrapolation has been used. In order to improve the accuracy of the extrapolation algorithm, CTIPe model estimates of global NO cooling have been compared with the NASA TIMED/SABER satellite measurements of radiative power at 5.3 micrometers. The comparisons have enabled improvement in the timescale for neutral density response and recovery during geomagnetic storms. CTIPe neutral density response and recovery rates are verified by comparison with CHAMP satellite observations.

  13. A new gridded on-road CO2 emissions inventory for the United States, 1980-2011

    NASA Astrophysics Data System (ADS)

    Gately, C.; Hutyra, L.; Sue Wing, I.

    2013-12-01

    On-road transportation is responsible for 28% of all U.S. fossil fuel CO2 emissions. However, mapping vehicle emissions at regional scales is challenging due to data limitations. Existing emission inventories have used spatial proxies such as population and road density to downscale national or state level data, which may introduce errors where the proxy variables and actual emissions are weakly correlated. We have developed a national on-road emissions inventory product based on roadway-level traffic data obtained from the Highway Performance Monitoring System. We produce annual estimates of on-road CO2 emissions at a 1km spatial resolution for the contiguous United States for the years 1980 through 2011. For the year 2011 we also produce an hourly emissions product at the 1km scale using hourly traffic volumes from hundreds of automated traffic counters across the country. National on-road emissions rose at roughly 2% per year from 1980 to 2006, with emissions peaking at 1.71 Tg CO2 in 2007. However, while national emissions have declined 6% since the peak, we observe considerable regional variation in emissions trends post-2007. While many states show stable or declining on-road emissions, several states and metropolitan areas in the Midwest, mountain west and south had emissions increases of 3-10% from 2008 to 2011. Our emissions estimates are consistent with state-reported totals of gasoline and diesel fuel consumption. This is in contrast to on-road CO2 emissions estimated by the Emissions Database for Global Atmospheric Research (EDGAR), which we show to be inconsistent in matching on-road emissions to published fuel consumption at the scale of U.S. states, due to the non-linear relationships between emissions and EDGAR's chosen spatial proxies at these scales.
Since our emissions estimates were generated independent of population density and other demographic data, we were able to conduct a panel regression analysis to estimate the relationship between these variables and on-road CO2 at various spatial scales. In the case of Massachusetts we find a non-linear relationship between emissions and population density indicating that increasing density resulted in increased emissions when density is less than 2000 persons-km-2. These results highlight the value of using an emissions inventory with high spatial and temporal resolution. At coarser spatial scales, much of the variation in population density and on-road emissions between towns is lost due to aggregation. The high spatial resolution and broad temporal scope of our CO2 estimates provides a basis for analyses to support emissions monitoring, verification and mitigation policies at regional, state and local scale.

  14. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata

    PubMed Central

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-01-01

    In this paper, in order to describe complex network systems, we firstly propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing third ring road. Practical application to a large-scale road network will be implemented with a decentralized modeling approach and distributed observer design in future research. PMID:28353664
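
    For a single mode of a piecewise affine model, the switched state observer reduces to a standard Luenberger observer. The sketch below uses a toy two-cell system with a hand-picked gain rather than the paper's Lyapunov-based gain computation, and the matrices are invented for illustration:

```python
import numpy as np

# toy 2-cell density dynamics x[k+1] = A x[k] + B u[k], measurement y = C x
A = np.array([[0.8, 0.1],
              [0.2, 0.7]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])       # only the first cell's density is measured

L = np.array([[0.5], [0.3]])     # hand-picked observer gain

# the error dynamics e[k+1] = (A - L C) e[k] must be stable
assert max(abs(np.linalg.eigvals(A - L @ C))) < 1.0

x = np.array([[1.0], [2.0]])     # true state (unknown to the observer)
x_hat = np.zeros((2, 1))         # observer state, deliberately wrong at k = 0
u = np.array([[0.1]])            # constant inflow
for _ in range(200):
    y = C @ x                                         # measure
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)   # correct the estimate
    x = A @ x + B @ u                                 # true system evolves
error = float(np.linalg.norm(x - x_hat))
```

Because the spectral radius of A - LC is below one, the estimation error decays geometrically regardless of the initial mismatch; the switched observer in the paper does this per mode with gains certified by a common Lyapunov function.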

  15. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.

    PubMed

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-03-29

    In this paper, in order to describe complex network systems, we firstly propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we apply the PWALS model and the corresponding switched state observer to traffic flow over the Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of the Beijing third ring road. Practical application to a large-scale road network will be implemented with a decentralized modeling approach and distributed observer design in future research.

  16. Lidar-Based Estimates of Above-Ground Biomass in the Continental US and Mexico Using Ground, Airborne, and Satellite Observations

    NASA Technical Reports Server (NTRS)

    Nelson, Ross; Margolis, Hank; Montesano, Paul; Sun, Guoqing; Cook, Bruce; Corp, Larry; Andersen, Hans-Erik; DeJong, Ben; Pellat, Fernando Paz; Fickel, Thaddeus

    2016-01-01

    Existing national forest inventory plots, an airborne lidar scanning (ALS) system, and a space profiling lidar system (ICESat-GLAS) are used to generate circa 2005 estimates of total aboveground dry biomass (AGB) in forest strata, by state, in the continental United States (CONUS) and Mexico. The airborne lidar is used to link ground observations of AGB to space lidar measurements. Two sets of models are generated, the first relating ground estimates of AGB to airborne laser scanning (ALS) measurements and the second set relating ALS estimates of AGB (generated using the first model set) to GLAS measurements. GLAS, then, is used as a sampling tool within a hybrid estimation framework to generate stratum-, state-, and national-level AGB estimates. A two-phase variance estimator is employed to quantify GLAS sampling variability and, additively, ALS-GLAS model variability in this current, three-phase (ground-ALS-space lidar) study. The model variance component characterizes the variability of the regression coefficients used to predict ALS-based estimates of biomass as a function of GLAS measurements. Three different types of predictive models are considered in CONUS to determine which produced biomass totals closest to ground-based national forest inventory estimates - (1) linear (LIN), (2) linear-no-intercept (LNI), and (3) log-linear. For CONUS at the national level, the GLAS LNI model estimate (23.95 +/- 0.45 Gt AGB) agreed most closely with the US national forest inventory ground estimate, 24.17 +/- 0.06 Gt, i.e., within 1%. The national biomass total based on linear ground-ALS and ALS-GLAS models (25.87 +/- 0.49 Gt) overestimated the national ground-based estimate by 7.5%. The comparable log-linear model result (63.29 +/- 1.36 Gt) overestimated ground results by 261%. All three national biomass GLAS estimates, LIN, LNI, and log-linear, are based on 241,718 pulses collected on 230 orbits.
The US national forest inventory (ground) estimates are based on 119,414 ground plots. At the US state level, the average absolute value of the deviation of LNI GLAS estimates from the comparable ground estimate of total biomass was 18.8% (range: Oregon, -40.8% to North Dakota, 128.6%). Log-linear models produced gross overestimates in the continental US, i.e., >2.6x, and the use of this model to predict regional biomass using GLAS data in temperate, western hemisphere forests is not appropriate. The best model form, LNI, is used to produce biomass estimates in Mexico. The average biomass density in Mexican forests is 53.10 +/- 0.88 t/ha, and the total biomass for the country, given a total forest area of 688,096 sq km, is 3.65 +/- 0.06 Gt. In Mexico, our GLAS biomass total underestimated a 2005 FAO estimate (4.152 Gt) by 12% and overestimated a 2007/8 radar study's figure (3.06 Gt) by 19%.
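
    The linear-no-intercept (LNI) form that performed best above is ordinary least squares with the fit forced through the origin; a sketch on invented data standing in for the ALS-GLAS pairs:

```python
import numpy as np

def fit_lni(x, y):
    """Least-squares slope for y = b*x with no intercept:
    b = sum(x*y) / sum(x*x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.dot(x, y) / np.dot(x, x))

# synthetic "ALS biomass vs GLAS measurement" pairs with true slope 2.5
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, 50)
y = 2.5 * x + rng.normal(0.0, 0.1, 50)
b = fit_lni(x, y)
```

Dropping the intercept forces zero predicted biomass for a zero lidar return, which is one reason the LNI form can behave better than the full linear model near the low end of the measurement range.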

  17. Multi-species genetic connectivity in a terrestrial habitat network.

    PubMed

    Marrotte, Robby R; Bowman, Jeff; Brown, Michael G C; Cordes, Chad; Morris, Kimberley Y; Prentice, Melanie B; Wilson, Paul J

    2017-01-01

    Habitat fragmentation reduces genetic connectivity for multiple species, yet conservation efforts tend to rely heavily on single-species connectivity estimates to inform land-use planning. Such conservation activities may benefit from multi-species connectivity estimates, which provide a simple and practical means to mitigate the effects of habitat fragmentation for a larger number of species. To test the validity of a multi-species connectivity model, we used neutral microsatellite genetic datasets of Canada lynx (Lynx canadensis), American marten (Martes americana), fisher (Pekania pennanti), and southern flying squirrel (Glaucomys volans) to evaluate multi-species genetic connectivity across Ontario, Canada. We used linear models to compare node-based estimates of genetic connectivity for each species to point-based estimates of landscape connectivity (current density) derived from circuit theory. To our knowledge, we are the first to evaluate current density as a measure of genetic connectivity. Our results depended on landscape context: habitat amount was more important than current density in explaining multi-species genetic connectivity in the northern part of our study area, where habitat was abundant and fragmentation was low. In the south, however, where fragmentation was prevalent, genetic connectivity was correlated with current density. Contrary to our expectations, however, locations with a high probability of movement as reflected by high current density were negatively associated with gene flow. Subsequent analyses of circuit theory outputs showed that high current density was also associated with high effective resistance, underscoring that the presence of pinch points is not necessarily indicative of gene flow. Overall, our study appears to provide support for the hypothesis that landscape pattern is important when habitat amount is low.
We also conclude that while current density is proportional to the probability of movement per unit area, this does not imply increased gene flow, since high current density tends to be a result of neighbouring pixels with high cost of movement (e.g., low habitat amount). In other words, pinch points with high current density appear to constrict gene flow.

  18. On the minimum quantum requirement of photosynthesis.

    PubMed

    Zeinalov, Yuzeir

    2009-01-01

    An analysis of the shape of photosynthetic light curves is presented and the existence of the initial non-linear part is shown as a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or the effect of dark respiration. The effect of nonlinearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves or the values of the minima of the quantum requirement curves cannot be used for estimation of the exact value of the maximum quantum efficiency and the minimum quantum requirement. The estimation of the maximum quantum efficiency or the minimum quantum requirement should be performed only after extrapolation of the linear part at higher light intensities of the quantum requirement curves to "0" light intensity.
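
    The recommended procedure, fitting the linear part of the quantum-requirement curve at higher light intensities and extrapolating back to zero intensity, can be sketched as follows; the data points below are invented for illustration:

```python
import numpy as np

# quantum requirement vs light intensity; the low-intensity points are
# distorted by the non-linearity discussed above, while the higher-intensity
# part is assumed linear
intensity = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
quantum_req = np.array([14.0, 11.0, 9.4, 9.8, 10.2, 10.6])

# fit only the (assumed) linear part at higher intensities
mask = intensity >= 40.0
slope, icpt = np.polyfit(intensity[mask], quantum_req[mask], 1)

# extrapolated minimum quantum requirement at "0" light intensity
min_quantum_req = icpt
```

Using the minimum of the raw curve (9.4 here) instead of the extrapolated intercept would overestimate the true minimum quantum requirement, which is the point the abstract makes.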

  19. On the estimation of the current density in space plasmas: Multi- versus single-point techniques

    NASA Astrophysics Data System (ADS)

    Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco

    2017-06-01

    Thanks to multi-spacecraft missions, it has recently been possible to directly estimate the current density in space plasmas, by using magnetic field time series from four satellites flying in a quasi perfect tetrahedron configuration. The technique developed, commonly called the “curlometer”, permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separation. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high resolution measurements with inter-spacecraft separation up to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in “typical” solar wind conditions, while the latter to sub-proton scale. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft with respect to multi-spacecraft missions in the current density evaluation.
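
    Under the linear-variation assumption, the curlometer amounts to fitting B(r) = B0 + Gr to the four measurements and reading the curl off the gradient matrix G. A synthetic sketch (positions and field values are invented; a strictly linear field is recovered exactly):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (SI)

def curlometer(positions, B):
    """Estimate J = curl(B)/mu0 from four-point measurements,
    assuming B(r) = B0 + G r (linear spatial variation)."""
    A = np.hstack([np.ones((4, 1)), positions])    # 4x4 design matrix [1, x, y, z]
    coef, *_ = np.linalg.lstsq(A, B, rcond=None)   # row 0: B0, rows 1-3: gradient
    G = coef[1:].T                                 # G[i, j] = dB_i / dx_j
    curl = np.array([G[2, 1] - G[1, 2],
                     G[0, 2] - G[2, 0],
                     G[1, 0] - G[0, 1]])
    return curl / MU0

# invented tetrahedron with 10 km separations (metres)
pos = np.array([[0.0, 0.0, 0.0],
                [1e4, 0.0, 0.0],
                [0.0, 1e4, 0.0],
                [0.0, 0.0, 1e4]])
# field with a constant B0 plus dBy/dx = 2e-12 T/m, i.e. curl_z = 2e-12 T/m
G_true = np.array([[0.0, 0.0, 0.0],
                   [2e-12, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
B = (G_true @ pos.T).T + np.array([1e-9, 0.0, 0.0])
J = curlometer(pos, B)   # A/m^2
```

When B varies nonlinearly across the tetrahedron, as in the turbulent fields studied above, the fitted G is only an average gradient and the estimate degrades, which is the underestimation discussed in the abstract.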

  20. Monte Carlo simulation of hard spheres near random closest packing using spherical boundary conditions

    NASA Astrophysics Data System (ADS)

    Tobochnik, Jan; Chapin, Phillip M.

    1988-05-01

    Monte Carlo simulations were performed for hard disks on the surface of an ordinary sphere and hard spheres on the surface of a four-dimensional hypersphere. Starting from the low density fluid the density was increased to obtain metastable amorphous states at densities higher than previously achieved. Above the freezing density the inverse pressure decreases linearly with density, reaching zero at packing fractions equal to 68% for hard spheres and 84% for hard disks. Using these new estimates for random closest packing and coefficients from the virial series we obtain an equation of state which fits all the data up to random closest packing. Usually, the radial distribution function showed the typical split second peak characteristic of amorphous solids and glasses. High density systems which lacked this split second peak and showed other sharp peaks were interpreted as signaling the onset of crystal nucleation.
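
    The random-closest-packing estimate follows from the reported linearity of inverse pressure in density: fit 1/P against packing fraction and extrapolate to 1/P = 0. A sketch on synthetic data with eta_rcp = 0.68 built in (the pressure values are invented, not the simulation data):

```python
import numpy as np

# synthetic high-density branch obeying 1/P = c * (eta_rcp - eta)
eta_rcp_true, c = 0.68, 0.5
eta = np.array([0.56, 0.58, 0.60, 0.62, 0.64])
inv_p = c * (eta_rcp_true - eta)

# fit 1/P vs eta and extrapolate the line to 1/P = 0
slope, icpt = np.polyfit(eta, inv_p, 1)
eta_rcp_est = -icpt / slope
```

The same extrapolation applied to the hard-disk data would give the 84% figure quoted above.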

  1. Direct Importance Estimation with Gaussian Mixture Models

    NASA Astrophysics Data System (ADS)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
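
    A minimal linear-model KLIEP in one dimension might look like the following (fixed-width Gaussian basis functions and plain projected gradient ascent; GM-KLIEP would additionally update means and covariances via an EM-style procedure). The sample sizes, kernel settings, and optimizer are illustrative, not the authors' implementation:

```python
import numpy as np

def kliep(x_nu, x_de, centers, width=0.5, lr=0.01, n_iter=500):
    """Fit w(x) = sum_l alpha_l * k(x, c_l) >= 0 by maximizing the mean
    log-importance over numerator samples, subject to the KLIEP constraint
    mean over denominator samples of w(x) = 1."""
    kern = lambda x: np.exp(-(x[:, None] - centers[None, :])**2
                            / (2.0 * width**2))
    Knu, Kde = kern(x_nu), kern(x_de)
    alpha = np.ones(len(centers))
    for _ in range(n_iter):
        w_nu = Knu @ alpha
        alpha += lr * Knu.T @ (1.0 / w_nu) / len(x_nu)  # grad of mean log w
        alpha = np.maximum(alpha, 0.0)                  # non-negativity
        alpha /= (Kde @ alpha).mean()                   # normalization constraint
    return lambda x: kern(np.asarray(x, float)) @ alpha

rng = np.random.default_rng(1)
x_nu = rng.normal(0.0, 1.0, 300)    # numerator-density samples
x_de = rng.normal(0.5, 1.2, 300)    # denominator-density samples
w = kliep(x_nu, x_de, centers=np.linspace(-3.0, 3.0, 10))
```

The constraint keeps the fitted ratio a valid importance function: reweighting the denominator samples by w leaves their total mass unchanged.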

  2. Strain effect on the adsorption, diffusion, and molecular dissociation of hydrogen on Mg (0001) surface

    NASA Astrophysics Data System (ADS)

    Lei, Huaping; Wang, Caizhuang; Yao, Yongxin; Wang, Yangang; Hupalo, Myron; McDougall, Dan; Tringides, Michael; Ho, Kaiming

    2013-12-01

    The adsorption, diffusion, and molecular dissociation of hydrogen on the biaxially strained Mg (0001) surface have been systematically investigated by first-principles calculations based on density functional theory. When the strain changes from the compressive to the tensile state, the adsorption energy of the H atom linearly increases while its diffusion barrier linearly decreases. The dissociation barrier of the H2 molecule linearly reduces in the tensile strain region. Through chemical bonding analysis including the charge density difference, the projected density of states, and the Mulliken population, the mechanism of the strain effect on the adsorption of the H atom and the dissociation of the H2 molecule has been elucidated by an s-p charge transfer model. With the reduction of the orbital overlap between the surface Mg atoms upon lattice expansion, charge transfers from p to s states of the Mg atoms, which enhances the hybridization of H s and Mg s orbitals. Therefore, the bonding interaction of H with the Mg surface is strengthened and the atomic diffusion and molecular dissociation barriers of hydrogen decrease accordingly. Our work will be helpful for understanding and estimating the influence of lattice deformation on the performance of Mg-containing hydrogen storage materials.

  3. Reduced density gradient as a novel approach for estimating QSAR descriptors, and its application to 1,4-dihydropyridine derivatives with potential antihypertensive effects.

    PubMed

    Jardínez, Christiaan; Vela, Alberto; Cruz-Borbolla, Julián; Alvarez-Mendez, Rodrigo J; Alvarado-Rodríguez, José G

    2016-12-01

    The relationship between chemical structure and biological activity (log IC50) of 40 derivatives of 1,4-dihydropyridines (DHPs) was studied using density functional theory (DFT) and multiple linear regression analysis. With the aim of improving the quantitative structure-activity relationship (QSAR) model, the reduced density gradient s(r) of the optimized equilibrium geometries was used as a descriptor to include weak non-covalent interactions. The QSAR model highlights the correlation of log IC50 with the highest occupied molecular orbital energy (EHOMO), molecular volume (V), partition coefficient (log P), non-covalent interactions NCI(H4-G), and the dual descriptor [Δf(r)]. The model yielded values of R2 = 79.57 and Q2 = 69.67 and was validated with four internal validations, DK = 0.076, DQ = -0.006, RP = 0.056, and RN = 0.000, and the external validation Q2boot = 64.26. The resulting QSAR model can be used to estimate biological activity with high reliability in new compounds based on the DHP series. Graphical abstract: the good correlation of log IC50 with NCI(H4-G) estimated by the reduced density gradient approach for the DHP derivatives.
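The regression step can be sketched in its simplest form: a one-descriptor ordinary least squares fit via the normal equations (multiple linear regression generalizes this to a matrix solve). The data below are synthetic points lying exactly on a line, not values from the study.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = b0 + b1*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return b0, b1

# Hypothetical descriptor (e.g. log P) vs. activity values on y = 2 + 0.5x.
b0, b1 = fit_linear([0.0, 1.0, 2.0, 3.0], [2.0, 2.5, 3.0, 3.5])
```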

  4. Recent work on material interface reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosso, S.J.; Swartz, B.K.

    1997-12-31

    For the last 15 years, many Eulerian codes have relied on a series of piecewise linear interface reconstruction algorithms developed by David Youngs. In a typical Youngs' method, the material interfaces are reconstructed from nearby cell values of the volume fractions of each material. The interfaces are represented locally by linear segments in two dimensions and by pieces of planes in three dimensions. The first step in such a reconstruction is to approximate a local interface normal. In Youngs' 3D method, a local gradient of a cell-volume-fraction function is estimated and taken to be the local interface normal. A linear interface is then moved perpendicular to the now-known normal until the mass behind it matches the material volume fraction for the cell in question. But for distorted or nonorthogonal meshes, the gradient normal estimate does not accurately match that of linear material interfaces. Moreover, curved material interfaces are also poorly represented. The authors present some recent work on the computation of more accurate interface normals, without necessarily increasing stencil size. Their estimate of the normal is made using an iterative process that, given mass fractions for nearby cells of known but arbitrarily variable density, converges in 3 or 4 passes in practice (and quadratically, like Newton's method, in principle). The method reproduces a linear interface in both orthogonal and nonorthogonal meshes. The local linear approximation is generally 2nd-order accurate, with a 1st-order accurate normal for curved interfaces in both two- and three-dimensional polyhedral meshes. Recent work demonstrating the interface reconstruction for curved surfaces will be discussed.
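The gradient step of a Youngs-type reconstruction can be sketched on a small 2-D grid. This is a minimal illustration of the classic gradient-normal estimate on an orthogonal unit-cell mesh, not the authors' improved iterative method.

```python
def youngs_normal(vf, i, j):
    """Estimate the interface normal in cell (i, j) as the negative gradient
    of the volume-fraction field, via central differences on unit cells."""
    gx = (vf[i + 1][j] - vf[i - 1][j]) / 2.0
    gy = (vf[i][j + 1] - vf[i][j - 1]) / 2.0
    norm = (gx * gx + gy * gy) ** 0.5
    # The normal points from the material into the empty region.
    return (-gx / norm, -gy / norm)

# Volume fractions for a planar interface: material fills the low-i half.
vf = [[1.0, 1.0, 1.0],
      [0.5, 0.5, 0.5],
      [0.0, 0.0, 0.0]]
n = youngs_normal(vf, 1, 1)   # expected normal: (+1, 0)
```

On this orthogonal mesh the gradient recovers the planar interface normal exactly; the abstract's point is precisely that this breaks down on distorted meshes, motivating the iterative refinement.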

  5. A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars

    NASA Astrophysics Data System (ADS)

    Zou, Hong; Ye, Yu Guang; Wang, Jin Song; Nielsen, Erling; Cui, Jun; Wang, Xiao Dong

    2016-04-01

    A method to estimate the neutral atmospheric density near the ionospheric main peak of Mars is introduced in this study. Neutral densities at 130 km can be derived from the ionospheric and atmospheric measurements of the Radio Science experiment on board Mars Global Surveyor (MGS). The derived neutral densities cover a large longitude range in northern high latitudes from summer to late autumn over 3 Martian years, filling a gap in previous observations of the upper atmosphere of Mars. Simulations with the Laboratoire de Météorologie Dynamique Mars global circulation model can be corrected with a simple linear equation to fit the neutral densities derived from the first MGS/RS (Radio Science) data set (EDS1). The corrected simulations, using the same correction parameters as for EDS1, match the neutral densities derived from two other MGS/RS data sets (EDS2 and EDS3) very well. The neutral density derived from EDS3 shows a dust storm effect, consistent with the Mars Express (MEX) Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars measurement. The neutral densities derived from the MGS/RS measurements can be used to validate Martian atmospheric models, and the method presented here can be applied to other radio occultation measurements, such as those of the Radio Science experiment on board MEX.

  6. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, L.; Gu, H.

    2017-12-01

    The conventional method of time-lapse AVO (amplitude versus offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though these approximations are concise and convenient to use, they have limitations: they apply only when the contrast in elastic parameters between the upper and lower media is small and the incidence angle is small, and the inversion for density is not stable. We therefore develop a time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor expansion to linearize the inverse problem. Through joint AVO inversion of seismic data from the baseline and monitor surveys, we obtain P-wave velocity, S-wave velocity, and density in the baseline survey and their time-lapse changes simultaneously; we can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion has broader applicability: it requires fewer assumptions and estimates more parameters simultaneously. Meanwhile, with the generalized linear method, the inversion is easy to implement and computationally cheap. We use the Marmousi model to generate synthetic seismic records and analyze the influence of random noise. Without noise, all estimates are accurate. As noise increases, the P-wave velocity change and oil saturation change remain stable and are least affected, while the S-wave velocity change is most affected by noise. Finally, we apply the method to field time-lapse seismic data, and the results demonstrate the availability and feasibility of our method in practice.
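The exact Zoeppritz equations are lengthy; as a much-simplified stand-in, the normal-incidence PP reflection coefficient shows how velocity and density contrasts enter the reflectivity that such an inversion fits. All values below are hypothetical.

```python
def pp_reflection_normal_incidence(vp1, rho1, vp2, rho2):
    """Normal-incidence PP reflection coefficient from acoustic impedances.
    (A simplification: the exact Zoeppritz equations add angle-dependent
    and S-wave terms.)"""
    z1, z2 = vp1 * rho1, vp2 * rho2
    return (z2 - z1) / (z2 + z1)

# Hypothetical baseline vs. monitor values: a production-induced velocity
# drop in the lower layer reduces the reflection coefficient.
r_base = pp_reflection_normal_incidence(3000.0, 2.3, 3300.0, 2.4)
r_mon = pp_reflection_normal_incidence(3000.0, 2.3, 3200.0, 2.4)
```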

  7. The importance of vegetation density for tourists' wildlife viewing experience and satisfaction in African savannah ecosystems.

    PubMed

    Arbieu, Ugo; Grünewald, Claudia; Schleuning, Matthias; Böhning-Gaese, Katrin

    2017-01-01

    Southern African protected areas (PAs) harbour a great diversity of animals, which represents a large potential for wildlife tourism. In this region, global change is expected to result in vegetation changes, such as bush encroachment and increases in vegetation density. However, little is known about the influence of vegetation structure on wildlife tourists' viewing experience and satisfaction. In this study, we collected data on vegetation structure and perceived mammal densities along 196 road transects (each 5 km long) and conducted a social survey with 651 questionnaires across four PAs in three Southern African countries. Our objectives were 1) to assess visitors' attitudes towards vegetation, 2) to test the influence of perceived mammal density and vegetation structure on the ease of spotting animals, and 3) to test their influence on visitors' satisfaction during their visit to PAs. Using a Boosted Regression Tree procedure, we found mostly negative non-linear relationships between vegetation density and wildlife tourists' experience, and positive relationships between perceived mammal densities and wildlife tourists' experience. In particular, wildlife tourists disliked road transects with high estimates of vegetation density. Similarly, the ease of spotting animals dropped at thresholds of high vegetation density and at perceived mammal densities lower than 46 individuals per road transect. Finally, tourists' satisfaction declined linearly with vegetation density and dropped at mammal densities smaller than 26 individuals per transect. Our results suggest that vegetation density has important impacts on tourists' wildlife viewing experience and satisfaction. Hence, the management of PAs in savannah landscapes should consider how tourists perceive these landscapes and their mammal diversity in order to maintain and develop sustainable wildlife tourism.

  9. Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

    PubMed

    Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A

    2014-04-01

    Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. 
However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
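The link between relocation counts and detection probability can be sketched with a logistic GLM. The coefficients below are hypothetical placeholders, not the fitted values from the study; the sketch only shows the model form being evaluated.

```python
import math

def detection_probability(n_relocations, b0=-2.0, b1=0.15):
    """Logistic-link GLM sketch: P(camera detection) as a function of the
    number of nearby telemetry relocations. Coefficients are hypothetical."""
    eta = b0 + b1 * n_relocations       # linear predictor
    return 1.0 / (1.0 + math.exp(-eta))  # inverse logit link

p_low = detection_probability(0)    # few relocations near the camera
p_high = detection_probability(30)  # many relocations near the camera
```

Under this model, more telemetry relocations near a camera imply a higher detection probability, which is the consistency the study tests.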

  10. The contribution of the swimbladder to buoyancy in the adult zebrafish (Danio rerio): a morphometric analysis.

    PubMed

    Robertson, George N; Lindsey, Benjamin W; Dumbarton, Tristan C; Croll, Roger P; Smith, Frank M

    2008-06-01

    Many teleost fishes use a swimbladder, a gas-filled organ in the coelomic cavity, to reduce body density toward neutral buoyancy, thus minimizing the locomotory cost of maintaining a constant depth in the water column. However, for most swimbladder-bearing teleosts, the contribution of this organ to the attainment of neutral buoyancy has not been quantified. Here, we examined the quantitative contribution of the swimbladder to buoyancy and three-dimensional stability in a small cyprinid, the zebrafish (Danio rerio). In aquaria during daylight hours, adult animals were observed at mean depths from 10.1 ± 6.0 to 14.2 ± 5.6 cm below the surface. Fish mass and whole-body volume were linearly correlated (r2 = 0.96) over a wide range of body size (0.16-0.73 g); mean whole-body density was 1.01 ± 0.09 g cm-3. Stereological estimations of swimbladder volume from linear dimensions of lateral X-ray images and direct measurements of gas volumes recovered by puncture from the same swimbladders showed that results from these two methods were highly correlated (r2 = 0.85). The geometric regularity of the swimbladder thus permitted its volume to be accurately estimated from a single lateral image. Mean body density in the absence of the swimbladder was 1.05 ± 0.04 g cm-3. The swimbladder occupied 5.1 ± 1.4% of total body volume, thus reducing whole-body density significantly. The location of the centers of mass and buoyancy along the rostro-caudal and dorso-ventral axes overlapped near the ductus communicans, a constriction between the anterior and posterior swimbladder chambers. Our work demonstrates that the swimbladder of the adult zebrafish contributes significantly to buoyancy and attitude stability. Furthermore, we describe and verify a stereological method for estimating swimbladder volume that will aid future studies of the functions of this organ. © 2008 Wiley-Liss, Inc.
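The buoyancy arithmetic can be sketched directly. The numbers below are illustrative values chosen to be consistent with the reported means (a ~0.50 g fish, 5.1% swimbladder volume), not measurements from the study.

```python
def body_density(mass_g, volume_cm3):
    """Bulk density in g/cm^3."""
    return mass_g / volume_cm3

# Hypothetical fish: 0.50 g, whole-body volume 0.495 cm^3,
# of which 5.1% is swimbladder gas (gas mass assumed negligible).
whole = body_density(0.50, 0.495)                      # with swimbladder
no_bladder = body_density(0.50, 0.495 * (1 - 0.051))   # gas volume removed
```

Removing the gas volume raises body density above that of water, illustrating why the swimbladder's ~5% volume fraction matters for approaching neutral buoyancy.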

  11. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting-volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to localize sources accurately, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using commonly adopted head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage were investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes; the most accurate localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
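A common linear inverse weight is the regularized minimum-norm estimate, x = L^T (L L^T + lambda*I)^{-1} y for leadfield L and measurement y. The toy below uses a single sensor and two sources so the matrix inverse reduces to a scalar; the leadfield values are hypothetical.

```python
def minimum_norm_estimate(L, y, lam=0.1):
    """Minimum-norm inverse for a single-sensor leadfield L (list of source
    gains) and a scalar measurement y: a toy version of the linear inverse
    weights discussed above."""
    gram = sum(g * g for g in L) + lam          # L L^T + lambda, scalar here
    return [g * y / gram for g in L]

# Two sources, one sensor: the deeper source (smaller gain 0.5) receives a
# smaller share of the measured signal, illustrating the depth bias.
x = minimum_norm_estimate([1.0, 0.5], 2.0, lam=0.25)
```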

  12. Monte Carlo simulations of dipolar and quadrupolar linear Kihara fluids. A test of thermodynamic perturbation theory

    NASA Astrophysics Data System (ADS)

    Garzon, B.

    Several Monte Carlo simulations of dipolar and quadrupolar linear Kihara fluids in the canonical ensemble have been performed. Pressure and internal energy were determined directly from the simulations, and the Helmholtz free energy via thermodynamic integration. Simulations were carried out for fluids of fixed elongation at two different densities, and at several values of temperature and of dipolar or quadrupolar moment for each density. The results are compared with the perturbation theory developed by Boublik for this type of fluid; good agreement between simulated and theoretical values was obtained, especially for quadrupolar fluids. The simulations are also used to obtain the liquid structure, giving the first few coefficients of the expansion of the pair correlation functions in terms of spherical harmonics. Estimates of the ratio of the triple-point temperature to the critical temperature are given for some dipolar and quadrupolar linear fluids. The stability range of the liquid phase of these substances is briefly discussed, together with an analysis of the opposing roles of the dipole moment and the molecular elongation in this stability.

  13. High-resolution proxies for wood density variations in Terminalia superba

    PubMed Central

    De Ridder, Maaike; Van den Bulcke, Jan; Vansteenkiste, Dries; Van Loo, Denis; Dierick, Manuel; Masschaele, Bert; De Witte, Yoni; Mannes, David; Lehmann, Eberhard; Beeckman, Hans; Van Hoorebeke, Luc; Van Acker, Joris

    2011-01-01

    Background and Aims Density is a crucial variable in forest and wood science and is evaluated by a multitude of methods. Direct gravimetric methods are mostly destructive and time-consuming; therefore, faster and semi- to non-destructive indirect methods have been developed. Methods Profiles of wood density variations with a resolution of approx. 50 µm were derived from one-dimensional resistance drillings, two-dimensional neutron scans, and three-dimensional neutron and X-ray scans. All methods were applied to Terminalia superba Engl. & Diels, an African pioneer species which sometimes exhibits a brown heart (limba noir). Key Results The use of X-ray tomography combined with a reference material permitted direct estimates of wood density. These X-ray-derived densities overestimated gravimetrically determined densities non-significantly and showed high correlation (linear regression, R2 = 0.995). When comparing X-ray densities with the attenuation coefficients of neutron scans and the amplitude of drilling resistance, a significant linear relation was found with the neutron attenuation coefficient (R2 = 0.986) yet a weak relation with drilling resistance (R2 = 0.243). When density patterns are compared, all three methods are capable of revealing the same trends; differences are mainly due to the orientation of tree rings and the different characteristics of the indirect methods. Conclusions High-resolution X-ray computed tomography is a promising technique for research on wood cores and will be explored further on other temperate and tropical species. Further study of limba noir is necessary to reveal the causes of density variations and to determine how resistance drillings can be further refined. PMID:21131386
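The reported correlations are coefficients of determination for a least-squares line. A minimal sketch of that computation, with hypothetical density pairs rather than the study's data:

```python
def r_squared(xs, ys):
    """Coefficient of determination of the least-squares line through (xs, ys):
    R^2 = cov(x, y)^2 / (var(x) * var(y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Hypothetical X-ray vs. gravimetric densities (kg m^-3): near-linear pairs
# give an R^2 close to 1, as in the calibration described above.
r2 = r_squared([400, 500, 600, 700], [410, 505, 612, 698])
```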

  14. Estimates of evapotranspiration in alkaline scrub and meadow communities of Owens Valley, California, using the Bowen-ratio, eddy-correlation, and Penman-combination methods

    USGS Publications Warehouse

    Duell, Lowell F. W.

    1990-01-01

    In Owens Valley, evapotranspiration (ET) is one of the largest components of outflow in the hydrologic budget and the least understood. ET estimates for December 1983 through October 1985 were made for seven representative locations selected on the basis of geohydrology and the characteristics of phreatophytic alkaline scrub and meadow communities. The Bowen-ratio, eddy-correlation, and Penman-combination methods were used to estimate ET. The results of the analyses appear satisfactory when compared with other estimates of ET. Results by the eddy-correlation method are for a direct and a residual latent-heat flux that is based on sensible-heat flux and energy-budget measurements. Penman-combination potential-ET estimates were determined to be unusable because they overestimated actual ET. Modification of the psychrometer constant of this method to account for differences between heat-diffusion resistance and vapor-diffusion resistance permitted actual ET to be estimated. The methods described in this report may be used for studies in similar semiarid and arid rangeland areas in the Western United States. Meteorological data for three field sites are included in the appendix of this report. Simple linear regression analysis indicates that ET estimates are correlated to air temperature, vapor-density deficit, and net radiation. Estimates of annual ET range from 301 millimeters at a low-density scrub site to 1,137 millimeters at a high-density meadow site. The monthly percentage of annual ET was determined to be similar for all sites studied.
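The Bowen-ratio method partitions available energy between sensible and latent heat: LE = (Rn - G) / (1 + beta), where beta = H/LE is the Bowen ratio. A minimal sketch with hypothetical midday fluxes, not values from the Owens Valley sites:

```python
def latent_heat_flux(rn, g, bowen_ratio):
    """Bowen-ratio energy balance: latent heat flux LE = (Rn - G)/(1 + beta),
    with net radiation Rn and soil heat flux G in W m^-2."""
    return (rn - g) / (1.0 + bowen_ratio)

# Hypothetical fluxes: Rn = 500, G = 50 W m^-2. A low Bowen ratio (wet
# meadow) sends most available energy into evapotranspiration; a high
# ratio (dry scrub) sends most of it into sensible heat.
le_wet = latent_heat_flux(500.0, 50.0, 0.5)
le_dry = latent_heat_flux(500.0, 50.0, 4.0)
```

The contrast mirrors the reported range from low-density scrub (low annual ET) to high-density meadow (high annual ET).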

  15. Capturing contextual effects in spectro-temporal receptive fields.

    PubMed

    Westö, Johan; May, Patrick J C

    2016-09-01

    Spectro-temporal receptive fields (STRFs) are thought to provide descriptive images of the computations performed by neurons along the auditory pathway. However, their validity can be questioned because they rely on a set of assumptions that are probably not fulfilled by real neurons exhibiting contextual effects, that is, nonlinear interactions in the time or frequency dimension that cannot be described with a linear filter. We used a novel approach to investigate how a variety of contextual effects, due to facilitating nonlinear interactions and synaptic depression, affect different STRF models, and if these effects can be captured with a context field (CF). Contextual effects were incorporated in simulated networks of spiking neurons, allowing one to define the true STRFs of the neurons. This, in turn, made it possible to evaluate the performance of each STRF model by comparing the estimations with the true STRFs. We found that currently used STRF models are particularly poor at estimating inhibitory regions. Specifically, contextual effects make estimated STRFs dependent on stimulus density in a contrasting fashion: inhibitory regions are underestimated at lower densities while artificial inhibitory regions emerge at higher densities. The CF was found to provide a solution to this dilemma, but only when it is used together with a generalized linear model. Our results therefore highlight the limitations of the traditional STRF approach and provide useful recipes for how different STRF models and stimuli can be used to arrive at reliable quantifications of neural computations in the presence of contextual effects. The results therefore push the purpose of STRF analysis from simply finding an optimal stimulus toward describing context-dependent computations of neurons along the auditory pathway. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Classification of longitudinal data through a semiparametric mixed-effects model based on lasso-type estimators.

    PubMed

    Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian

    2015-06-01

    We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule, where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMMs), in which finite-dimensional (fixed effects and variance components) and infinite-dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed iteratively, alternating parametric and nonparametric procedures. However, if one can assume that the random effects and the unknown function interact linearly, more efficient estimation methods can be used. Our contribution is a unified estimation procedure based on a penalized EM-type algorithm; the Expectation and Maximization steps are explicit. In the latter step, the unknown function is estimated nonparametrically using a lasso-type procedure. A simulation study and an application to real data are presented. © 2015, The International Biometric Society.
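Lasso-type procedures are built on soft thresholding, the proximal operator of the l1 penalty, which shrinks coefficients toward zero and sets small ones exactly to zero. A minimal sketch of that elementary step (the penalty level 1.0 is arbitrary):

```python
def soft_threshold(z, lam):
    """Proximal operator of the l1 penalty: shrink z toward zero by lam,
    setting it exactly to zero inside [-lam, lam]."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Applied coordinate-wise, small coefficients drop out entirely,
# which is what makes lasso-type estimates sparse.
updates = [soft_threshold(z, 1.0) for z in [3.0, 0.4, -0.2, -2.5]]
```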

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durrer, Ruth; Tansella, Vittorio, E-mail: ruth.durrer@unige.ch, E-mail: vittorio.tansella@unige.ch

    We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, with the Bardeen potentials replaced by line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations that are induced by scalar perturbations at second order, and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.

  18. Meta-regression analysis of the effect of trans fatty acids on low-density lipoprotein cholesterol.

    PubMed

    Allen, Bruce C; Vincent, Melissa J; Liska, DeAnn; Haber, Lynne T

    2016-12-01

    We conducted a meta-regression of controlled clinical trial data to investigate quantitatively the relationship between dietary intake of industrial trans fatty acids (iTFA) and increases in low-density lipoprotein cholesterol (LDL-C). Previous regression analyses included insufficient data to determine the nature of the dose response in the low-dose region and nonetheless assumed a linear relationship between iTFA intake and LDL-C levels. This work extends the previous work by 1) including additional studies examining low-dose intake (identified using an evidence-mapping procedure); 2) investigating a range of curve shapes, including both linear and nonlinear models; and 3) using Bayesian meta-regression to combine results across trials. We found that, contrary to previous assumptions, the linear model does not acceptably fit the data, while the nonlinear, S-shaped Hill model fits the data well. Based on a conservative estimate of the degree of intra-individual variability in LDL-C (0.1 mmol/L) as an estimate of a change in LDL-C that is not adverse, a change in iTFA intake of 2.2% of energy intake (%en) (corresponding to a total iTFA intake of 2.2-2.9 %en) does not cause adverse effects on LDL-C. The iTFA intake associated with this change in LDL-C is substantially higher than the average iTFA intake (0.5 %en). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
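The S-shaped Hill model has the form E(x) = E0 + Emax * x^n / (k^n + x^n): the response is nearly flat at low dose, rises steeply near the half-maximal point k, and saturates at E0 + Emax. The parameters below are hypothetical, chosen purely to illustrate the shape, not the fitted values from the meta-regression.

```python
def hill(x, e0, emax, k, n):
    """S-shaped Hill dose-response curve: effect at dose x."""
    return e0 + emax * x ** n / (k ** n + x ** n)

# Hypothetical parameters: baseline 0, maximum change 0.6, half-maximal
# dose k = 4, steepness n = 2. The low-dose response is nearly flat,
# which is the behavior a straight line cannot capture.
low = hill(1.0, 0.0, 0.6, 4.0, 2.0)
mid = hill(4.0, 0.0, 0.6, 4.0, 2.0)   # half-maximal effect at x = k
```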

  19. Cosmic velocity-gravity relation in redshift space

    NASA Astrophysics Data System (ADS)

    Colombi, Stéphane; Chodorowski, Michał J.; Teyssier, Romain

    2007-02-01

    We propose a simple way to estimate the parameter β ≈ Ω0.6/b from 3D galaxy surveys, where Ω is the non-relativistic matter-density parameter of the Universe and b is the bias between the galaxy distribution and the total matter distribution. Our method consists of measuring the relation between the cosmological velocity and gravity fields, and thus requires peculiar velocity measurements. The relation is measured directly in redshift space, so there is no need to reconstruct the density field in real space. In linear theory, the radial components of the gravity and velocity fields in redshift space are expected to be tightly correlated, with a slope that, in the distant-observer approximation, is a simple function of β. We test this relation extensively using controlled numerical experiments based on a cosmological N-body simulation. To perform the measurements, we propose a new and rather simple adaptive interpolation scheme to estimate the velocity and gravity fields on a grid. One of the most striking results is that non-linear effects, including `fingers of God', mainly affect the tails of the joint probability distribution function (PDF) of the velocity and gravity fields: the 1-1.5 σ region around the maximum of the PDF is dominated by the linear-theory regime, both in real and in redshift space. This is understood explicitly by using the spherical collapse model as a proxy for non-linear dynamics. Applications of the method to real galaxy catalogues are discussed, including a preliminary investigation on homogeneous (volume-limited) `galaxy' samples extracted from the simulation with simple prescriptions based on halo and substructure identification, to quantify the effects of the bias between the galaxy distribution and the total matter distribution, as well as the effects of shot noise.
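The definition of the distortion parameter is simple enough to compute directly. The Ω and b values below are illustrative inputs, not results from the paper:

```python
def beta(omega_m, bias):
    """Linear-theory redshift-space distortion parameter:
    beta = Omega^0.6 / b."""
    return omega_m ** 0.6 / bias

# Illustrative values: Omega = 0.3 with unbiased tracers (b = 1).
b_unbiased = beta(0.3, 1.0)   # about 0.49
```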

  20. Linear increases in carbon nanotube density through multiple transfer technique.

    PubMed

    Shulaker, Max M; Wei, Hai; Patil, Nishant; Provine, J; Chen, Hong-Yu; Wong, H-S P; Mitra, Subhasish

    2011-05-11

    We present a technique to increase carbon nanotube (CNT) density beyond the as-grown CNT density. We perform multiple transfers, whereby we transfer CNTs from several growth wafers onto the same target surface, thereby linearly increasing CNT density on the target substrate. This process, called transfer of nanotubes through multiple sacrificial layers, is highly scalable, and we demonstrate linear CNT density scaling up to 5 transfers. We also demonstrate that this linear CNT density increase results in an ideal linear increase in drain-source currents of carbon nanotube field effect transistors (CNFETs). Experimental results demonstrate that CNT density can be improved from 2 to 8 CNTs/μm, accompanied by an increase in drain-source CNFET current from 4.3 to 17.4 μA/μm.
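The linear scaling itself is elementary: under this model, the reported improvement from 2 to 8 CNTs/μm corresponds to four transfers of an as-grown density of 2 CNTs/μm.

```python
def cnt_density_after_transfers(per_growth_density, n_transfers):
    """Each transfer deposits one growth wafer's CNTs onto the same target,
    so density scales linearly with the number of transfers."""
    return per_growth_density * n_transfers

# As-grown density 2 CNTs/um; the reported 8 CNTs/um implies 4 transfers
# under this linear model (the paper demonstrates scaling up to 5).
density = cnt_density_after_transfers(2.0, 4)
```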

  1. Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busti, V.C.; Clarkson, C.; Holanda, R.F.L., E-mail: vinicius.busti@uct.ac.za, E-mail: holanda@uepb.edu.br, E-mail: chris.clarkson@uct.ac.za

    2013-11-01

    Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) requires knowledge of the power spectrum of matter on AU scales, which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbation theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have on parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find in each case that this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.

  2. Estimation of Dry Fracture Weakness, Porosity, and Fluid Modulus Using Observable Seismic Reflection Data in a Gas-Bearing Reservoir

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2017-05-01

    Fracture detection and fluid identification are important tasks in fractured-reservoir characterization. Our goal is to demonstrate a direct approach that utilizes azimuthal seismic data to estimate fluid bulk modulus, porosity, and dry fracture weaknesses, which decreases the uncertainty of fluid identification. Combining Gassmann's (Vier. der Natur. Gesellschaft Zürich 96:1-23, 1951) equations and the linear-slip model, we first establish new simplified expressions of stiffness parameters for a gas-bearing saturated fractured rock with low porosity and small fracture density, and then derive a novel PP-wave reflection coefficient in terms of dry background rock properties (P-wave and S-wave moduli, and density), fracture properties (dry fracture weaknesses), porosity, and fluid properties (fluid bulk modulus). A Bayesian Markov chain Monte Carlo nonlinear inversion method is proposed to estimate fluid bulk modulus, porosity, and fracture weaknesses directly from azimuthal seismic data. The inversion method yields reasonable estimates on synthetic data containing moderate noise, and stable results on real data.

  3. A simple approach to nonlinear estimation of physical systems

    USGS Publications Warehouse

    Christakos, G.

    1988-01-01

    Recursive algorithms for estimating the states of nonlinear physical systems are developed. This requires some key hypotheses regarding the structure of the underlying processes. Members of this class of random processes have several desirable properties for the nonlinear estimation of random signals. An assumption is made about the form of the estimator, which may then accommodate a wide range of applications. Under this assumption, the estimation algorithm is mathematically suboptimal but effective and computationally attractive. It compares favorably to Taylor series-type filters, nonlinear filters which approximate the probability density by Edgeworth or Gram-Charlier series, as well as to conventional statistical linearization-type estimators. To link theory with practice, some numerical results for a simulated system are presented, in which the responses from the proposed and the extended Kalman algorithms are compared.
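    The extended Kalman filter used as the comparison baseline can be sketched in a few lines; the scalar dynamics, measurement model, and noise levels below are illustrative stand-ins, not the simulated system from the paper.

```python
import numpy as np

# Minimal scalar extended Kalman filter (EKF).  The model is a common
# nonlinear benchmark chosen purely for illustration.
rng = np.random.default_rng(0)

f = lambda x: 0.5 * x + 25.0 * x / (1.0 + x**2)          # state transition
h = lambda x: x**2 / 20.0                                # measurement model
df = lambda x: 0.5 + 25.0 * (1.0 - x**2) / (1.0 + x**2)**2  # df/dx
dh = lambda x: x / 10.0                                  # dh/dx
Q, R = 1.0, 1.0                                          # noise variances

# simulate the true system and its noisy measurements
T = 50
x_true, y = np.zeros(T), np.zeros(T)
x = 0.1
for k in range(T):
    x = f(x) + rng.normal(0.0, np.sqrt(Q))
    x_true[k] = x
    y[k] = h(x) + rng.normal(0.0, np.sqrt(R))

# EKF recursion: linearize f and h around the current estimate
x_hat, P = 0.0, 1.0
est = np.zeros(T)
for k in range(T):
    F = df(x_hat)                       # predict
    x_pred = f(x_hat)
    P_pred = F * P * F + Q
    H = dh(x_pred)                      # update
    S = H * P_pred * H + R
    K = P_pred * H / S
    x_hat = x_pred + K * (y[k] - h(x_pred))
    P = (1.0 - K * H) * P_pred
    est[k] = x_hat
```

Since `S > H²·P_pred`, the gain satisfies `K·H < 1` and the posterior variance `P` stays positive throughout the recursion.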

  4. Stochastic Model of Seasonal Runoff Forecasts

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, Roman; Watada, Leslie M.

    1986-03-01

    Each year the National Weather Service and the Soil Conservation Service issue a monthly sequence of five (or six) categorical forecasts of the seasonal snowmelt runoff volume. To describe uncertainties in these forecasts for the purposes of optimal decision making, a stochastic model is formulated. It is a discrete-time, finite, continuous-space, nonstationary Markov process. Posterior densities of the actual runoff conditional upon a forecast, and transition densities of forecasts are obtained from a Bayesian information processor. Parametric densities are derived for the process with a normal prior density of the runoff and a linear model of the forecast error. The structure of the model and the estimation procedure are motivated by analyses of forecast records from five stations in the Snake River basin, from the period 1971-1983. The advantages of supplementing the current forecasting scheme with a Bayesian analysis are discussed.
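    The normal-prior, linear-error-model setup described above admits a closed-form posterior. A minimal numerical sketch follows, with made-up runoff and forecast numbers (not Snake River values): runoff W ~ N(m, s²), forecast F | W ~ N(aW + b, v²), so the posterior of W given F is again normal.

```python
# Conjugate normal-normal Bayesian update for the posterior density of
# runoff given a forecast.  All numbers below are hypothetical.
m, s2 = 100.0, 400.0        # prior mean and variance of seasonal runoff
a, b, v2 = 0.9, 5.0, 100.0  # forecast slope, intercept, error variance
F = 95.0                    # observed categorical forecast value

# posterior precision is the sum of prior and likelihood precisions
post_var = 1.0 / (1.0 / s2 + a * a / v2)
post_mean = post_var * (m / s2 + a * (F - b) / v2)
```

The posterior variance is always smaller than the prior variance, which is how the forecast sharpens the runoff density used for decision making.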

  5. Sampling procedures for inventory of commercial volume tree species in Amazon Forest.

    PubMed

    Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R

    2017-01-01

    The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. The present study aims to evaluate conventional sampling procedures and to introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that density, spatial distribution, and zero-plots affect the consistency of the estimators, and that adaptive cluster sampling yields more accurate volumetric estimates. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or greater than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling, and adaptive cluster sampling, and the accuracy of the volumetric estimates and the presence of zero-plots were evaluated. The sampling procedures were affected by the low density of trees and the large number of zero-plots, whereas adaptive cluster sampling concentrated the sampling effort in plots containing trees and thus gathered more representative samples for estimating the commercial volume.

  6. Influences of spatial and temporal variation on fish-habitat relationships defined by regression quantiles

    USGS Publications Warehouse

    Dunham, J.B.; Cade, B.S.; Terrell, J.W.

    2002-01-01

    We used regression quantiles to model potentially limiting relationships between the standing crop of cutthroat trout Oncorhynchus clarki and measures of stream channel morphology. Regression quantile models indicated that variation in fish density was inversely related to the width:depth ratio of streams but not to stream width or depth alone. The temporal and spatial stability of model predictions was examined across years and streams, respectively. Variation in fish density with width:depth ratio (10th-90th regression quantiles) modeled for streams sampled in 1993-1997 predicted the variation observed in 1998-1999, indicating similar habitat relationships across years. Both linear and nonlinear models described the limiting relationships well, the latter performing slightly better. Although estimated relationships were transferable in time, results were strongly dependent on the influence of spatial variation in fish density among streams. Density changes with width:depth ratio in a single stream were responsible for the significant (P < 0.10) negative slopes estimated for the higher quantiles (>80th). This suggests that stream-scale factors other than width:depth ratio play a more direct role in determining population density. Much of the variation in densities of cutthroat trout among streams was attributed to the occurrence of nonnative brook trout Salvelinus fontinalis (a possible competitor) or connectivity to migratory habitats. Regression quantiles can be useful for estimating the effects of limiting factors when ecological responses are highly variable, but our results indicate that spatiotemporal variability in the data should be explicitly considered. In this study, data from individual streams and stream-specific characteristics (e.g., the occurrence of nonnative species and habitat connectivity) strongly affected our interpretation of the relationship between width:depth ratio and fish density.
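    An upper regression quantile of the kind used above can be fit by minimizing the Koenker-Bassett pinball loss. The sketch below uses synthetic density and width:depth data with a built-in ceiling, and plain subgradient descent rather than the authors' estimation machinery.

```python
import numpy as np

# Synthetic limiting relationship: fish density is bounded above by a
# ceiling that decreases with the width:depth ratio (illustrative only).
rng = np.random.default_rng(1)
wd = rng.uniform(5.0, 50.0, 300)           # width:depth ratio
density = rng.uniform(0.0, 60.0 - wd)      # density limited from above

def fit_quantile(x, y, tau, lr=0.05, steps=30000):
    """Fit y ~ b0 + b1*x at quantile tau by subgradient descent on the
    pinball loss; x is centered internally for better conditioning."""
    xc = x - x.mean()
    b0, b1 = np.quantile(y, tau), 0.0
    for _ in range(steps):
        r = y - (b0 + b1 * xc)
        g = np.where(r > 0, -tau, 1.0 - tau)   # d(loss)/d(prediction)
        b0 -= lr * g.mean()
        b1 -= lr * (g * xc).mean() / xc.var()
    return b0 - b1 * x.mean(), b1              # back to original coordinates

b0, b1 = fit_quantile(wd, density, tau=0.9)
```

Because the simulated ceiling falls with width:depth ratio, the fitted 90th-quantile slope `b1` comes out clearly negative, mirroring the limiting relationship in the abstract.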

  7. Residential particulate matter and distance to roadways in relation to mammographic density: results from the Nurses' Health Studies.

    PubMed

    DuPre, Natalie C; Hart, Jaime E; Bertrand, Kimberly A; Kraft, Peter; Laden, Francine; Tamimi, Rulla M

    2017-11-23

    High mammographic density is a strong, well-established breast cancer risk factor. Three studies conducted in various smaller geographic settings reported inconsistent findings between air pollution and mammographic density. We assessed whether particulate matter (PM) exposures (PM2.5, PM2.5-10, and PM10) and distance to roadways were associated with mammographic density among women residing across the United States. The Nurses' Health Studies are prospective cohorts for whom a subset has screening mammograms from the 1990s (interquartile range 1990-1999). PM was estimated using spatio-temporal models linked to residential addresses. Among 3258 women (average age at mammogram 52.7 years), we performed multivariable linear regression to assess associations between square-root-transformed mammographic density and PM within 1 and 3 years before the mammogram. For linear regression estimates of PM in relation to untransformed mammographic density outcomes, bootstrapped robust standard errors were used to calculate 95% confidence intervals (CIs). Analyses were stratified by menopausal status and region of residence. Recent PM and distance to roadways were not associated with mammographic density in premenopausal women (PM2.5 within 3 years before mammogram β = 0.05, 95% CI -0.16, 0.27; PM2.5-10 β = 0.00, 95% CI -0.15, 0.16; PM10 β = 0.02, 95% CI -0.10, 0.13) or in postmenopausal women (PM2.5 within 3 years before mammogram β = -0.05, 95% CI -0.27, 0.17; PM2.5-10 β = -0.01, 95% CI -0.16, 0.14; PM10 β = -0.02, 95% CI -0.13, 0.09). Largely null associations were observed within regions. Suggestive associations were observed among postmenopausal women in the Northeast (n = 745), where a 10-μg/m³ increase in PM2.5 within 3 years before the mammogram was associated with 3.4 percentage points higher percent mammographic density (95% CI -0.5, 7.3). These findings do not support an influence of recent PM or roadway exposure on mammographic density, although we cannot rule out a role of PM during earlier exposure windows or possible associations among northeastern postmenopausal women.
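    The bootstrapped confidence interval on a regression slope used above can be sketched as follows, with synthetic exposure and density values and, mirroring the study's null finding, no true association built in.

```python
import numpy as np

# Nonparametric bootstrap of a linear-regression slope (hypothetical
# stand-in for PM2.5 vs. mammographic density; no true effect simulated).
rng = np.random.default_rng(2)
n = 500
pm = rng.normal(12.0, 3.0, n)                    # exposure
dens = 30.0 + rng.normal(0.0, 10.0, n)           # outcome, independent of pm

def slope(x, y):
    return np.polyfit(x, y, 1)[0]                # coefficient on x

boots = np.array([
    slope(pm[i], dens[i])
    for i in (rng.integers(0, n, n) for _ in range(1000))
])
lo_ci, hi_ci = np.percentile(boots, [2.5, 97.5])  # percentile 95% CI
```

Resampling rows (rather than assuming homoskedastic errors) is what makes the interval robust; with no true effect, the bootstrap distribution of the slope centers near zero.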

  8. Mutual information estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view onto sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but it fails when the functional relationship in a bivariate setting is non-linear. We therefore propose an algorithm to estimate lagged auto- and cross-mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian-distributed weights in the estimation of the (joint) probabilities. To test our method we simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We then performed a sensitivity analysis for the estimation of the actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series, and contrasted our results with the performance of a signal reconstruction scheme. Finally, we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross-)mutual information results are compared across the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. These encouraging results could possibly be improved further by using non-histogram mutual information estimators, such as k-nearest-neighbour or kernel-density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
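    The standard-binning histogram estimator that the authors extend can be sketched as below. The coupled series are synthetic and regularly sampled for simplicity, and the Gaussian weighting that is the paper's contribution is omitted.

```python
import numpy as np

# Plain histogram estimator of mutual information (in nats) from a joint
# 2-D histogram; the linearly coupled pair (x, y) and the independent
# control z are synthetic.
rng = np.random.default_rng(3)
x = rng.normal(size=5000)
y = 0.8 * x + 0.6 * rng.normal(size=5000)   # coupled partner of x
z = rng.normal(size=5000)                   # independent of x

def mutual_info(a, b, bins=16):
    """I(A;B) = sum p(a,b) * log( p(a,b) / (p(a) p(b)) )."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of A
    py = pxy.sum(axis=0, keepdims=True)     # marginal of B
    nz = pxy > 0                            # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

mi_coupled = mutual_info(x, y)
mi_indep = mutual_info(x, z)
```

For jointly Gaussian pairs the true value is −½·ln(1 − ρ²) ≈ 0.51 nats at ρ = 0.8; the independent pair should give a value near zero, up to the estimator's well-known positive finite-sample bias.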

  9. Porphyry copper deposit density

    USGS Publications Warehouse

    Singer, Donald A.; Berger, Vladimir; Menzie, W. David; Berger, Byron R.

    2005-01-01

    Estimating numbers of undiscovered mineral deposits has been a source of unease among economic geologists, yet it is a fundamental task in considering future supplies of resources. Estimates can be based on frequencies of deposits per unit of permissive area in control areas around the world, in the same way that grade and tonnage frequencies are models of the sizes and qualities of undiscovered deposits. To prevent biased estimates it is critical that, for a particular deposit type, these deposit density models be internally consistent with descriptive and grade and tonnage models of the same type. In this analysis only deposits and prospects that are likely to be included in future grade and tonnage models are employed, and deposits that have mineralization or alteration separated by less than an arbitrary but consistent distance (2 km for porphyry copper deposits) are combined into one deposit. Only the 286 deposits and prospects with more than half of the deposit not covered by postmineral rocks, sediments, or ice were counted. Nineteen control areas were selected and outlined along borders of hosting magmatic arc terranes based on three main features: (1) extensive exploration for porphyry copper deposits, (2) definable geologic settings of the porphyry copper deposits in island and continental volcanic-arc subduction-boundary zones, and (3) diversity of epochs of porphyry copper deposit formation. Porphyry copper deposit densities vary from 2 to 128 deposits per 100,000 km² of exposed permissive rock, and the density histogram is skewed to high values. Ninety percent of the control areas have densities of four or more deposits, 50 percent have densities of 15 or more deposits, and 10 percent have densities of 35 or more deposits per 100,000 km². Deposit density is not related to age or depth of emplacement. Porphyry copper deposit density is inversely related to the exposed area of permissive rock. The linear regression line and confidence limits constructed with the 19 control areas can be used to estimate the number of undiscovered deposits, given the size of a permissive area. In an example of the use of the equations, we estimate a 90 percent chance of at least four, a 50 percent chance of at least 11, and a 10 percent chance of at least 34 undiscovered porphyry copper deposits in the exposed parts of the Andean belt of Antarctica, which has no known deposits in a permissive area of about 76,000 km². The measures of deposit densities presented here allow rather simple yet robust estimation of the number of undiscovered porphyry copper deposits in exposed or covered permissive terranes.
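    The density-area regression idea can be sketched as a log-log fit over control areas. The 19 control-area values below are synthetic placeholders, not Singer et al.'s data, so the resulting prediction is illustrative only.

```python
import numpy as np

# Fit log10(deposit density) against log10(exposed permissive area) over
# 19 synthetic "control areas", then predict the deposit count for a new
# permissive tract.  Density is in deposits per 100,000 km^2.
rng = np.random.default_rng(4)
log_area = rng.uniform(4.5, 6.0, 19)                         # log10(km^2)
log_dens = 4.8 - 0.75 * log_area + rng.normal(0.0, 0.2, 19)  # inverse relation

slope_b, intercept_b = np.polyfit(log_area, log_dens, 1)

# expected deposit count for a hypothetical 76,000 km^2 permissive tract
area = 76_000.0
n_pred = 10 ** (intercept_b + slope_b * np.log10(area)) * area / 1e5
```

The fitted slope is negative, reproducing the inverse density-area relation the abstract reports; percentile bands around the regression line would give the 90/50/10 percent style estimates quoted above.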

  10. Estimation of Compaction Parameters Based on Soil Classification

    NASA Astrophysics Data System (ADS)

    Lubis, A. S.; Muis, Z. A.; Hastuty, I. P.; Siregar, I. M.

    2018-02-01

    Factors that must be considered in soil compaction works are the type of soil material, field control, maintenance, and the availability of funds. These issues raise the question of how to estimate the density of the soil with an implementation system that is proper, fast, and economical. This study aims to estimate the compaction parameters, i.e. the maximum dry unit weight (γdmax) and the optimum water content (wopt), based on soil classification. Each of the 30 samples was subjected to index property and compaction tests. The laboratory test results were used to estimate the compaction parameter values by linear regression and by the Goswami model. The soil types were A-4, A-6, and A-7 according to AASHTO, and SC, SC-SM, and CL according to USCS. By linear regression, the estimated maximum dry unit weight is γdmax* = 1.862 - 0.005·FINES - 0.003·LL and the estimated optimum water content is wopt* = -0.607 + 0.362·FINES + 0.161·LL. By the Goswami model (with equation Y = m·log G + k), the maximum dry unit weight estimate uses m = -0.376 and k = 2.482, and the optimum water content estimate uses m = 21.265 and k = -32.421. For both equations a 95% confidence interval was obtained.
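    The reported regression equations can be evaluated directly (reading the abstract's decimal commas as decimal points). The FINES and LL inputs below are hypothetical, and G in the Goswami model is not defined in the abstract, so a placeholder value is used.

```python
import math

# Evaluating the reported compaction-parameter regressions for a
# hypothetical soil sample (FINES = fines content %, LL = liquid limit %).
FINES, LL = 60.0, 45.0

gamma_dmax = 1.862 - 0.005 * FINES - 0.003 * LL   # t/m^3, linear regression
w_opt = -0.607 + 0.362 * FINES + 0.161 * LL       # %, linear regression

# Goswami model Y = m*log10(G) + k with the reported coefficients for
# gamma_dmax; G is undefined in the abstract, so 2.0 is a placeholder.
G = 2.0
gamma_dmax_goswami = -0.376 * math.log10(G) + 2.482
```

For these inputs the linear-regression equations give γdmax* ≈ 1.427 t/m³ and wopt* ≈ 28.4%, which are plausible magnitudes for fine-grained soils.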

  11. Mapping forest canopy fuels in Yellowstone National Park using lidar and hyperspectral data

    NASA Astrophysics Data System (ADS)

    Halligan, Kerry Quinn

    The severity and size of wildland fires in the forested western U.S. have increased in recent years despite improvements in fire suppression efficiency. This, along with the increased density of homes in the wildland-urban interface, has resulted in high costs for fire management and increased risks to human health, safety and property. Crown fires, in comparison to surface fires, pose an especially high risk due to their intensity and high rate of spread. Crown fire models require a range of quantitative fuel parameters which can be difficult and costly to obtain, but advances in lidar and hyperspectral sensor technologies hold promise for delivering these inputs. Further research is needed, however, to assess the strengths and limitations of these technologies and the most appropriate analysis methodologies for estimating crown fuel parameters from these data. This dissertation focuses on retrieving critical crown fuel parameters, including canopy height, canopy bulk density and proportion of dead canopy fuel, from airborne lidar and hyperspectral data. Remote sensing data were used in conjunction with detailed field data on forest parameters and surface reflectance measurements. A new method was developed for retrieving Digital Surface Models (DSM) and Digital Canopy Models (DCM) from first-return lidar data. Validation data on individual tree heights demonstrated the high accuracy (r² = 0.95) of the DCMs developed via this new algorithm. Lidar-derived DCMs were used to estimate critical crown fire parameters, including available canopy fuel, canopy height and canopy bulk density, with linear regression model r² values ranging from 0.75 to 0.85. Hyperspectral data were used in conjunction with Spectral Mixture Analysis (SMA) to assess fuel quality in the form of live versus dead canopy proportions. The severity and stage of insect-caused forest mortality were estimated using the fractional abundances of green vegetation, non-photosynthetic vegetation and shade obtained from SMA. The proportion of insect attack was estimated with a linear model producing r² = 0.6 using SMA and bark endmembers from image and reference libraries. The fraction of red attack, with a possible link to increased crown fire risk, was estimated with r² = 0.45.

  12. Gyrokinetic modeling of impurity peaking in JET H-mode plasmas

    NASA Astrophysics Data System (ADS)

    Manas, P.; Camenen, Y.; Benkadda, S.; Weisen, H.; Angioni, C.; Casson, F. J.; Giroud, C.; Gelfusa, M.; Maslov, M.

    2017-06-01

    Quantitative comparisons are presented between gyrokinetic simulations and experimental values of the carbon impurity peaking factor in a database of JET H-modes during the carbon wall era. These plasmas feature strong NBI heating and hence high values of toroidal rotation and of its gradient. Furthermore, the carbon profiles present particularly interesting shapes for fusion devices, i.e., hollow in the core and peaked near the edge. Dependencies of the experimental carbon peaking factor (R/L_nC) on plasma parameters are investigated via multilinear regressions. A marked correlation between R/L_nC and the normalised toroidal rotation gradient is observed in the core, which suggests an important role of the rotation in establishing hollow carbon profiles. The carbon peaking factor is then computed with the gyrokinetic code GKW, using a quasi-linear approach, supported by a few non-linear simulations. The comparison of the quasi-linear predictions to the experimental values at mid-radius reveals two main regimes. At low normalised collisionality, ν*, and T_e/T_i < 1, the gyrokinetic simulations quantitatively recover experimental carbon density profiles, provided that rotodiffusion is taken into account. In contrast, at higher ν* and T_e/T_i > 1, the very hollow experimental carbon density profiles are never predicted by the simulations and the carbon density peaking is systematically overestimated. This points to a possible missing ingredient in this regime.

  13. Linear and Non-Linear Dielectric Response of Periodic Systems from Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Umari, Paolo

    2006-03-01

    We present a novel approach that allows one to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wavefunction, allowing for accurate diffusion quantum Monte Carlo calculations in which the polarization's fixed point is estimated from the average over an iterative sequence. The polarization is sampled through forward walking. This approach has been validated for the polarizability of an isolated hydrogen atom, and then applied to a periodic system. We calculate the linear susceptibility and second-order hyper-susceptibility of molecular-hydrogen chains with different bond-length alternations, and assess the quality of nodal surfaces derived from density-functional theory or from Hartree-Fock. The results are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations. [P. Umari, A. J. Williamson, G. Galli, and N. Marzari, Phys. Rev. Lett. 95, 207602 (2005).]

  14. A quantum extended Kalman filter

    NASA Astrophysics Data System (ADS)

    Emzir, Muhammad F.; Woolley, Matthew J.; Petersen, Ian R.

    2017-06-01

    In quantum physics, a stochastic master equation (SME) estimates the state (density operator) of a quantum system in the Schrödinger picture based on a record of measurements made on the system. In the Heisenberg picture, the SME is a quantum filter. For a linear quantum system subject to linear measurements and Gaussian noise, the dynamics may be described by quantum stochastic differential equations (QSDEs), also known as quantum Langevin equations, and the quantum filter reduces to a so-called quantum Kalman filter. In this article, we introduce a quantum extended Kalman filter (quantum EKF), which applies a commutative approximation and a time-varying linearization to systems of nonlinear QSDEs. We will show that there are conditions under which a filter similar to a classical EKF can be implemented for quantum systems. The boundedness of estimation errors and the filtering problem with ‘state-dependent’ covariances for process and measurement noises are also discussed. We demonstrate the effectiveness of the quantum EKF by applying it to systems that involve multiple modes, nonlinear Hamiltonians, and simultaneous jump-diffusive measurements.

  15. Bayesian linearized amplitude-versus-frequency inversion for quality factor and its application

    NASA Astrophysics Data System (ADS)

    Yang, Xinchao; Teng, Long; Li, Jingnan; Cheng, Jiubing

    2018-06-01

    We propose a straightforward attenuation inversion method that utilizes the amplitude-versus-frequency (AVF) characteristics of seismic data. A new linearized approximation of the angle- and frequency-dependent reflectivity in viscoelastic media is derived. We then use the presented equation to implement a Bayesian linear AVF inversion. The inversion result includes not only P-wave and S-wave velocities and densities, but also P-wave and S-wave quality factors. Synthetic tests show that the AVF inversion surpasses AVA inversion for quality factor estimation; however, a higher signal-to-noise ratio (SNR) is necessary for the AVF inversion. To show its feasibility, we apply both the new Bayesian AVF inversion and conventional AVA inversion to tight-gas reservoir data from the Sichuan Basin, China. Considering the SNR of the field data, a combination of AVF inversion for attenuation parameters and AVA inversion for elastic parameters is recommended. The result reveals that attenuation estimates can serve as a useful complement to the AVA inversion results for the detection of tight-gas reservoirs.

  16. A question of separation: disentangling tracer bias and gravitational non-linearity with counts-in-cells statistics

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Feix, M.; Codis, S.; Pichon, C.; Bernardeau, F.; L'Huillier, B.; Kim, J.; Hong, S. E.; Laigle, C.; Park, C.; Shin, J.; Pogosyan, D.

    2018-02-01

    Starting from a very accurate model for density-in-cells statistics of dark matter based on large deviation theory, a bias model for the tracer density in spheres is formulated. It adopts a mean bias relation based on a quadratic bias model to relate the log-densities of dark matter to those of mass-weighted dark haloes in real and redshift space. The validity of the parametrized bias model is established using a parametrization-independent extraction of the bias function. This average bias model is then combined with the dark matter PDF, neglecting any scatter around it: it nevertheless yields an excellent model for densities-in-cells statistics of mass tracers that is parametrized in terms of the underlying dark matter variance and three bias parameters. The procedure is validated on measurements of both the one- and two-point statistics of subhalo densities in the state-of-the-art Horizon Run 4 simulation showing excellent agreement for measured dark matter variance and bias parameters. Finally, it is demonstrated that this formalism allows for a joint estimation of the non-linear dark matter variance and the bias parameters using solely the statistics of subhaloes. Having verified that galaxy counts in hydrodynamical simulations sampled on a scale of 10 Mpc h-1 closely resemble those of subhaloes, this work provides important steps towards making theoretical predictions for density-in-cells statistics applicable to upcoming galaxy surveys like Euclid or WFIRST.

  17. Neural network approach to quantum-chemistry data: accurate prediction of density functional theory energies.

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2009-08-21

    An artificial neural network (ANN) approach has been applied to estimate the density functional theory (DFT) energy with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for ANN training, cross-validation, and testing with the BLYP, B3LYP, and BMK density functionals. Hartree-Fock results are reported for comparison. Furthermore, constitutional molecular descriptors (CD) and quantum-chemical molecular descriptors (QD) were used to build the calibration model. Neural network structure optimization, leading to four to five hidden neurons, was also carried out. Using several low-level energy values was found to greatly reduce the prediction error. The expected error (mean absolute deviation) of the ANN approximation to DFT energies was 0.6 ± 0.2 kcal mol⁻¹. In addition, a comparison of the different density functionals and basis sets, and a comparison with multiple linear regression results, are also provided. The CDs were found to overcome limitations of the QDs. Finally, effective ANN models for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation were developed, and benchmark results are provided.
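    The core idea, a small network mapping a cheap lower-level energy plus a descriptor to the higher-level target energy, can be sketched with a toy numpy MLP. The data, the single descriptor, the units, and the four-hidden-unit architecture (cf. the paper's four to five neurons) are all synthetic stand-ins.

```python
import numpy as np

# Toy one-hidden-layer regression network trained by batch gradient
# descent on synthetic (low-level energy, descriptor) -> high-level
# energy data; purely illustrative, not the paper's model.
rng = np.random.default_rng(6)
n = 200
e_low = rng.uniform(-1.0, 1.0, n)             # scaled low-level energy
desc = rng.uniform(-1.0, 1.0, n)              # one molecular descriptor
e_high = e_low + 0.3 * np.tanh(2.0 * desc) + 0.05 * rng.normal(size=n)

X = np.column_stack([e_low, desc])
W1 = rng.normal(0.0, 0.5, (2, 4)); b1 = np.zeros(4)   # 4 tanh hidden units
W2 = rng.normal(0.0, 0.5, 4); b2 = 0.0
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - e_high) ** 2)        # mean squared error at init

for _ in range(2000):
    H, pred = forward(X)
    err = pred - e_high                       # dLoss/dpred (up to a constant)
    gW2 = H.T @ err / n; gb2 = err.mean()
    dH = np.outer(err, W2) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / n; gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(X)
loss1 = np.mean((pred1 - e_high) ** 2)        # mean squared error after training
```

Even this tiny network quickly learns that the high-level energy tracks the low-level one, which is why using low-level energies as inputs reduces the prediction error so sharply.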

  18. On Algorithms for Generating Computationally Simple Piecewise Linear Classifiers

    DTIC Science & Technology

    1989-05-01

    suffers. - Waveform classification, e.g. speech recognition, seismic analysis (i.e. discrimination between earthquakes and nuclear explosions), target... assuming Gaussian distributions (B-G); d) Bayes classifier with probability densities estimated with the k-NN method (B-kNN); e) The nearest neighbour... range of classifiers are chosen, including a fast, easily computable and often used classifier (B-G), and reliable but complex classifiers (B-kNN and NNR)

  19. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process

    PubMed Central

    Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.

    2016-01-01

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on a geomorphological analysis of bathymetric data from the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; and main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures. PMID:27265878
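    In one dimension, kernel density estimation via a linear diffusion process amounts to evolving a histogram of event locations under the heat equation, which is equivalent to Gaussian smoothing with bandwidth √(2t). The toy sketch below uses synthetic vent coordinates, not the Lanzarote data.

```python
import numpy as np

# Diffusion-based KDE: smooth a histogram by explicit heat-equation steps.
rng = np.random.default_rng(5)
vents = np.concatenate([rng.normal(3.0, 0.4, 80), rng.normal(7.0, 0.6, 40)])

u, edges = np.histogram(vents, bins=200, range=(0.0, 10.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
dx = edges[1] - edges[0]

dt = 0.4 * dx * dx          # explicit Euler step, stable for dt <= dx^2/2
steps = int(0.05 / dt)      # total time t = 0.05 -> bandwidth sqrt(2t) ~ 0.32
for _ in range(steps):
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    lap[0] = u[1] - u[0]    # reflecting (no-flux) boundaries conserve mass
    lap[-1] = u[-2] - u[-1]
    u = u + dt * lap / (dx * dx)

density_est = u
```

Because the update is a convex combination of neighbouring bins, the estimate stays non-negative and integrates to one, and stopping the diffusion earlier or later acts exactly like choosing a smaller or larger kernel bandwidth.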

  20. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process.

    PubMed

    Galindo, I; Romero, M C; Sánchez, N; Morales, J M

    2016-06-06

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on a geomorphological analysis of bathymetric data from the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; and main faults have been taken into account only where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.

  1. Quantitative volcanic susceptibility analysis of Lanzarote and Chinijo Islands based on kernel density estimation via a linear diffusion process

    NASA Astrophysics Data System (ADS)

    Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.

    2016-06-01

    Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded into deep parts of Miocene-Pliocene volcanic edifices; and main faults have been taken into account only in those cases where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.
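
The core technique in this record, kernel density estimation via a diffusion process, rests on the fact that Gaussian KDE is equivalent to running the heat equation on the data for a time set by the bandwidth. Below is a minimal 1D sketch in pure Python, illustrative only: the paper's susceptibility analysis is 2D, and the grid, bandwidth and sample values here are invented.

```python
def diffusion_kde(samples, grid_min, grid_max, n_bins=200, bandwidth=0.3):
    """Estimate a 1D density by diffusing a histogram of the samples.

    Running the heat equation u_t = 0.5 * u_xx for time t = bandwidth**2
    on a delta-spike initial condition reproduces a Gaussian KDE with that
    bandwidth (up to boundary effects). Returns (density values, bin width).
    """
    dx = (grid_max - grid_min) / n_bins
    u = [0.0] * n_bins
    for s in samples:  # initial condition: normalized histogram of the data
        i = min(n_bins - 1, max(0, int((s - grid_min) / dx)))
        u[i] += 1.0 / (len(samples) * dx)
    dt = 0.4 * dx * dx  # stable explicit step (needs dt <= dx^2 for nu = 0.5)
    n_steps = max(1, int(bandwidth ** 2 / dt))
    for _ in range(n_steps):
        new = u[:]
        for i in range(1, n_bins - 1):  # diffuse in the interior
            new[i] = u[i] + 0.5 * dt / dx ** 2 * (u[i - 1] - 2 * u[i] + u[i + 1])
        # reflecting boundaries conserve total probability mass
        new[0] = u[0] + 0.5 * dt / dx ** 2 * (u[1] - u[0])
        new[-1] = u[-1] + 0.5 * dt / dx ** 2 * (u[-2] - u[-1])
        u = new
    return u, dx
```

The reflecting boundary treatment is one practical advantage of the diffusion formulation over naive Gaussian KDE near domain edges (here, coastlines and flank boundaries).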

  2. Nonlinear Attitude Filtering Methods

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Crassidis, John L.; Cheng, Yang

    2005-01-01

    This paper provides a survey of modern nonlinear filtering methods for attitude estimation. Early applications relied mostly on the extended Kalman filter for attitude estimation. Since these applications, several new approaches have been developed that have proven to be superior to the extended Kalman filter. Several of these approaches maintain the basic structure of the extended Kalman filter, but employ various modifications in order to provide better convergence or improve other performance characteristics. Examples of such approaches include: filter QUEST, extended QUEST, the super-iterated extended Kalman filter, the interlaced extended Kalman filter, and the second-order Kalman filter. Filters that propagate and update a discrete set of sigma points rather than using linearized equations for the mean and covariance are also reviewed. A two-step approach is discussed with a first-step state that linearizes the measurement model and an iterative second step to recover the desired attitude states. These approaches are all based on the Gaussian assumption that the probability density function is adequately specified by its mean and covariance. Other approaches that do not require this assumption are reviewed, including particle filters and a Bayesian filter based on a non-Gaussian, finite-parameter probability density function on SO(3). Finally, the predictive filter, nonlinear observers and adaptive approaches are shown. The strengths and weaknesses of the various approaches are discussed.
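
The sigma-point filters surveyed above replace analytic linearization with deterministic sampling. A minimal sketch of the underlying unscented transform for a scalar state (illustrative only; real attitude filters operate on SO(3) with matrix square roots, and the kappa/weight choice below is just one common convention):

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a scalar Gaussian (mean, var) through a nonlinearity f
    using three sigma points (1D unscented transform sketch)."""
    spread = math.sqrt((1.0 + kappa) * var)        # n = 1 state dimension
    pts = [mean, mean + spread, mean - spread]
    weights = [kappa / (1.0 + kappa),              # center-point weight
               1.0 / (2.0 * (1.0 + kappa)),
               1.0 / (2.0 * (1.0 + kappa))]
    ys = [f(p) for p in pts]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var
```

For a linear map the transform is exact, and in this scalar case it also matches the true mean of a quadratic map, which is exactly where linearized (extended Kalman) propagation starts to err.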

  3. Variational and robust density fitting of four-center two-electron integrals in local metrics

    NASA Astrophysics Data System (ADS)

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjærgaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Høst, Stinne; Salek, Paweł

    2008-09-01

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.

  4. Variational and robust density fitting of four-center two-electron integrals in local metrics.

    PubMed

    Reine, Simen; Tellgren, Erik; Krapp, Andreas; Kjaergaard, Thomas; Helgaker, Trygve; Jansik, Branislav; Host, Stinne; Salek, Paweł

    2008-09-14

    Density fitting is an important method for speeding up quantum-chemical calculations. Linear-scaling developments in Hartree-Fock and density-functional theories have highlighted the need for linear-scaling density-fitting schemes. In this paper, we present a robust variational density-fitting scheme that allows for solving the fitting equations in local metrics instead of the traditional Coulomb metric, as required for linear scaling. Results of fitting four-center two-electron integrals in the overlap and the attenuated Gaussian damped Coulomb metric are presented, and we conclude that density fitting can be performed in local metrics at little loss of chemical accuracy. We further propose to use this theory in linear-scaling density-fitting developments.

  5. Hunting high and low: disentangling primordial and late-time non-Gaussianity with cosmic densities in spheres

    NASA Astrophysics Data System (ADS)

    Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.

    2018-03-01

    Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.

  6. Contributions of Cu-rich clusters, dislocation loops and nanovoids to the irradiation-induced hardening of Cu-bearing low-Ni reactor pressure vessel steels

    NASA Astrophysics Data System (ADS)

    Bergner, F.; Gillemot, F.; Hernández-Mayoral, M.; Serrano, M.; Török, G.; Ulbricht, A.; Altstadt, E.

    2015-06-01

    Dislocation loops, nanovoids and Cu-rich clusters (CRPs) are known to represent obstacles for dislocation glide in neutron-irradiated reactor pressure vessel (RPV) steels, but a consistent experimental determination of the respective obstacle strengths is still missing. A set of Cu-bearing low-Ni RPV steels and model alloys was characterized by means of SANS and TEM in order to specify mean size and number density of loops, nanovoids and CRPs. The obstacle strengths of these families were estimated by solving an over-determined set of linear equations. We have found that nanovoids are stronger than loops and loops are stronger than CRPs. Nevertheless, CRPs contribute most to irradiation hardening because of their high number density. Nanovoids were only observed for neutron fluences beyond typical end-of-life conditions of RPVs. The estimates of the obstacle strength are critically compared with reported literature data.
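
Estimating obstacle strengths from an over-determined set of linear equations, as described above, is a small least-squares problem. A self-contained sketch via the normal equations (the system in the test is invented for illustration, not the paper's hardening data):

```python
def least_squares(A, b):
    """Solve an over-determined linear system A x ~= b in the least-squares
    sense via the normal equations (A^T A) x = A^T b, using Gaussian
    elimination with partial pivoting. Adequate for small, well-conditioned
    systems like a 3-unknown obstacle-strength fit."""
    n = len(A[0])
    # Build the normal equations.
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    v = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    # Back substitution.
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (v[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```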

  7. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.
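
The reconstruction above hinges on Lagrange interpolation at well-chosen sampling points. A 1D sketch (the Padua points are a 2D construction; the Chebyshev nodes used here are the usual 1D analogue of near-optimal sampling):

```python
import math

def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def chebyshev_nodes(n, a=-1.0, b=1.0):
    """Chebyshev nodes on [a, b]: near-optimal 1D interpolation points that
    avoid the Runge phenomenon of equispaced grids."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]
```

With n nodes the interpolant reproduces any polynomial of degree below n exactly, which is why measurement error propagates only linearly into the reconstructed coefficients.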

  8. Estimation of whole body fat from appendicular soft tissue from peripheral quantitative computed tomography in adolescent girls

    PubMed Central

    Lee, Vinson R.; Blew, Rob M.; Farr, Josh N.; Tomas, Rita; Lohman, Timothy G.; Going, Scott B.

    2013-01-01

    Objective: Assess the utility of peripheral quantitative computed tomography (pQCT) for estimating whole body fat in adolescent girls. Research Methods and Procedures: Our sample included 458 girls (aged 10.7 ± 1.1 y, mean BMI = 18.5 ± 3.3 kg/m2) who had DXA scans for whole body percent fat (DXA %Fat). Soft tissue analysis of pQCT scans provided thigh and calf subcutaneous percent fat and thigh and calf muscle density (muscle fat content surrogates). Anthropometric variables included weight, height and BMI. Indices of maturity included age and maturity offset. The total sample was split into validation (VS; n = 304) and cross-validation (CS; n = 154) samples. Linear regression was used to develop prediction equations for estimating DXA %Fat from anthropometric variables and pQCT-derived soft tissue components in VS, and the best prediction equation was applied to CS. Results: Thigh and calf SFA %Fat were positively correlated with DXA %Fat (r = 0.84 to 0.85; p < 0.001) and thigh and calf muscle densities were inversely related to DXA %Fat (r = −0.30 to −0.44; p < 0.001). The best equation for estimating %Fat included thigh and calf SFA %Fat and thigh and calf muscle density (adj. R2 = 0.90; SEE = 2.7%). Bland-Altman analysis in CS showed accurate estimates of percent fat (adj. R2 = 0.89; SEE = 2.7%) with no bias. Discussion: Peripheral QCT-derived indices of adiposity can be used to accurately estimate whole body percent fat in adolescent girls. PMID:25147482

  9. New robust statistical procedures for the polytomous logistic regression models.

    PubMed

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article are further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  10. Quasi-linear regime of gravitational instability: Implication to density-velocity relation

    NASA Technical Reports Server (NTRS)

    Shandarin, Sergei F.

    1993-01-01

    The well known linear relation between density and peculiar velocity distributions is a powerful tool for studying the large-scale structure in the Universe. Potentially it can test the gravitational instability theory and measure Omega. At present it is used in both ways: the velocity is reconstructed, provided the density is given, and vice versa. Reconstructing the density from the velocity field usually makes use of the Zel'dovich approximation. However, the standard linear approximation in Eulerian space is used when the velocity is reconstructed from the density distribution. I show that the linearized Zel'dovich approximation, in other words the linear approximation in the Lagrangian space, is more accurate for reconstructing velocity. In principle, a simple iteration technique can recover both the density and velocity distributions in Lagrangian space, but its practical application may need an additional study.
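
For reference, the linear relation this abstract builds on can be written as follows (a sketch in standard notation, with $a$ the scale factor, $H$ the Hubble parameter, $f(\Omega) \approx \Omega^{0.6}$ the linear growth rate, and $\delta$ the density contrast):

```latex
\nabla \cdot \mathbf{v} = -\, a H f(\Omega)\, \delta ,
\qquad
\mathbf{v}(\mathbf{x}) = \frac{a H f(\Omega)}{4\pi}
\int \delta(\mathbf{x}')\,
\frac{\mathbf{x}' - \mathbf{x}}{\lvert \mathbf{x}' - \mathbf{x} \rvert^{3}}\,
\mathrm{d}^{3}x'
```

Measuring either field thus determines the other, and the dependence on $f(\Omega)$ is what makes the comparison a probe of $\Omega$.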

  11. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.

  12. Quantifying Melt Ponds in the Beaufort MIZ using Linear Support Vector Machines from High Resolution Panchromatic Images

    NASA Astrophysics Data System (ADS)

    Ortiz, M.; Graber, H. C.; Wilkinson, J.; Nyman, L. M.; Lund, B.

    2017-12-01

    Much work has been done on determining changes in summer ice albedo and morphological properties of melt ponds such as depth, shape and distribution using in-situ measurements and satellite-based sensors. Although these studies have contributed much pioneering work in this area, they still lack sufficient spatial and temporal coverage. We present a prototype algorithm using Linear Support Vector Machines (LSVMs) designed to quantify the evolution of melt pond fraction from a recently government-declassified high-resolution panchromatic optical dataset. The study area of interest lies within the Beaufort marginal ice zone (MIZ), where several in-situ instruments were deployed by the British Antarctic Survey jointly with the MIZ Program, from April-September, 2014. The LSVM uses four-dimensional feature data from the intensity image itself and from various textures calculated from a modified first-order histogram technique using probability density of occurrences. We explore both the temporal evolution of melt ponds and spatial statistics such as pond fraction, pond area, and pond number density, to name a few. We also introduce a linear regression model that can potentially be used to estimate average pond area by ingesting several melt pond statistics and shape parameters.
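
A linear SVM of the kind described above can be trained by stochastic subgradient descent on the regularized hinge loss. The following is a Pegasos-style sketch with invented 2D toy data, not the four-dimensional texture features used in the study:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Train a linear SVM by stochastic subgradient descent on the
    L2-regularized hinge loss. X: list of feature lists, y: labels in
    {-1, +1}. Returns (weights, bias)."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:  # point inside margin: hinge subgradient step
                w = [wj - lr * (lam * wj - y[i] * xj) for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:           # outside margin: only regularization shrinks w
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    """Classify a feature vector by the sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```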

  13. Fractional Gaussian model in global optimization

    NASA Astrophysics Data System (ADS)

    Dimri, V. P.; Srivastava, R. P.

    2009-12-01

    The Earth system is inherently non-linear, and it can be characterized well only if we incorporate non-linearity in the formulation and solution of the problem. The general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion by linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method has been demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, which uses the mean, variance and Hurst coefficient of the model space, to draw the initial model. Further, this initial model is used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.
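
Drawing samples from a power-law distribution rather than a Gaussian can be sketched with inverse-transform sampling, paired with the standard maximum-likelihood estimator of the exponent. This is a hypothetical 1D illustration; the paper's model space additionally involves the mean, variance and Hurst coefficient:

```python
import math
import random

def sample_power_law(alpha, x_min, n, seed=0):
    """Draw n samples from the power-law (Pareto) density
    p(x) ∝ x**(-alpha) for x >= x_min, via inverse-transform sampling."""
    rng = random.Random(seed)
    # Inverting the CDF F(x) = 1 - (x / x_min)**(1 - alpha) gives:
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def fit_alpha(samples, x_min):
    """Maximum-likelihood estimate of the power-law exponent:
    alpha_hat = 1 + n / sum(ln(x / x_min))."""
    n = len(samples)
    return 1.0 + n / sum(math.log(x / x_min) for x in samples)
```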

  14. Application of a constant hole volume Sanchez-Lacombe equation of state to mixtures relevant to polymeric foaming.

    PubMed

    von Konigslow, Kier; Park, Chul B; Thompson, Russell B

    2018-06-06

    A variant of the Sanchez-Lacombe equation of state is applied to several polymers, blowing agents, and saturated mixtures of interest to the polymer foaming industry. These are low-density polyethylene-carbon dioxide and polylactide-carbon dioxide saturated mixtures as well as polystyrene-carbon dioxide-dimethyl ether and polystyrene-carbon dioxide-nitrogen ternary saturated mixtures. Good agreement is achieved between theoretically predicted and experimentally determined solubilities, both for binary and ternary mixtures. Acceptable agreement with swelling ratios is found with no free parameters. Up-to-date pure-component Sanchez-Lacombe characteristic parameters are provided for carbon dioxide, dimethyl ether, low-density polyethylene, nitrogen, polylactide, linear and branched polypropylene, and polystyrene. The pure-fluid low-density polyethylene and nitrogen parameters are less accurate while still providing acceptable quantitative estimates, and mixture estimates are correspondingly less accurate where the pure components are not as well represented. The Sanchez-Lacombe equation of state is found to correctly predict the anomalous reversal of solubility temperature dependence for low-critical-point fluids through the observation of this behaviour in polystyrene-nitrogen mixtures.

  15. Evaluation of Techniques Used to Estimate Cortical Feature Maps

    PubMed Central

    Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2011-01-01

    Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
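
The finding that averaging improves map estimates even more than denser sampling can be checked with a toy simulation (an assumed 1D sinusoidal "feature map" with Gaussian response noise, not the auditory-cortex data):

```python
import math
import random

def map_rmse(n_sites=25, n_repeats=10, noise=0.5, seed=0):
    """Sample a smooth 1D 'feature map' f(x) = sin(2*pi*x) at n_sites
    locations with Gaussian response noise, and compare the error of
    single-trial estimates against n_repeats-trial averages at the same
    sites. Returns (rmse_single, rmse_averaged)."""
    rng = random.Random(seed)
    xs = [i / (n_sites - 1) for i in range(n_sites)]
    true = [math.sin(2 * math.pi * x) for x in xs]
    single, averaged = [], []
    for t in true:
        trials = [t + rng.gauss(0.0, noise) for _ in range(n_repeats)]
        single.append(trials[0])                    # one noisy measurement
        averaged.append(sum(trials) / n_repeats)    # averaged measurements
    rmse = lambda est: math.sqrt(
        sum((e - t) ** 2 for e, t in zip(est, true)) / n_sites)
    return rmse(single), rmse(averaged)
```

Averaging n_repeats trials cuts the per-site error by roughly sqrt(n_repeats), which is why undersampling *with* averaging flattens a map (false "no map") while undersampling *without* averaging scrambles it.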

  16. Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations

    NASA Astrophysics Data System (ADS)

    Zhi, Longxiao; Gu, Hanming

    2018-03-01

    The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximate expressions are concise and convenient to use, they have certain limitations: they are valid only when the contrast in elastic parameters between the upper and lower medium is small and the incident angle is small, and the inversion for density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the reflection coefficient of the PP wave, and in the construction of the objective function for inversion we use a Taylor series expansion to linearize the inversion problem. Through joint AVO inversion of seismic data from the baseline survey and the monitor survey, we can obtain the P-wave velocity, S-wave velocity and density in the baseline survey and their time-lapse changes simultaneously. We can also estimate the oil saturation change from the inversion results. Compared with time-lapse difference inversion, the joint inversion does not require restrictive assumptions and can estimate more parameters simultaneously, so it has better applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented and its computational cost is small. We use a theoretical model to generate synthetic seismic records to test the method and analyze the influence of random noise. The results demonstrate the effectiveness and noise resistance of our method. We also apply the inversion to actual field data and demonstrate the feasibility of our method in a real setting.
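
The generalized linear method described above, i.e. iterated Taylor linearization of an exact forward model, reduces in the scalar case to a Newton-type iteration. A sketch under that simplification (the cubic forward model is invented, standing in for the exact Zoeppritz reflection coefficient):

```python
def linearized_invert(forward, dforward, d_obs, m0, n_iter=20):
    """Invert d_obs = forward(m) for a scalar model parameter m by iterated
    Taylor linearization: at each step, solve the linearized equation
    forward(m) + dforward(m) * dm = d_obs for the model update dm."""
    m = m0
    for _ in range(n_iter):
        residual = d_obs - forward(m)   # data misfit at current model
        m += residual / dforward(m)     # linearized update
    return m
```

In the multi-parameter case the scalar derivative becomes a Jacobian and each update is a linear least-squares solve, but the structure of the iteration is the same.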

  17. A generalized partially linear mean-covariance regression model for longitudinal proportional data, with applications to the analysis of quality of life data from cancer clinical trials.

    PubMed

    Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng

    2017-05-30

    Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimation equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values and can capture dynamic changes of time or other interested variables on both mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Free energy of adhesion of lipid bilayers on silica surfaces

    NASA Astrophysics Data System (ADS)

    Schneemilch, M.; Quirke, N.

    2018-05-01

    The free energy of adhesion per unit area (hereafter referred to as the adhesion strength) of lipid arrays on surfaces is a key parameter that determines the nature of the interaction between materials and biological systems. Here we report classical molecular simulations of water and 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) lipid bilayers at model silica surfaces with a range of silanol densities and structures. We employ a novel technique that enables us to estimate the adhesion strength of supported lipid bilayers in the presence of water. We find that silanols on the silica surface form hydrogen bonds with water molecules and that the water immersion enthalpy for all surfaces varies linearly with the surface density of these hydrogen bonds. The adhesion strength of lipid bilayers is a linear function of the surface density of hydrogen bonds formed between silanols and the lipid molecules on crystalline surfaces. Approximately 20% of isolated silanols form such bonds but more than 99% of mutually interacting geminal silanols do not engage in hydrogen bonding with water. On amorphous silica, the bilayer displays much stronger adhesion than expected from the crystalline surface data. We discuss the implications of these results for nanoparticle toxicity.

  19. Simultaneous use of camera and probe diagnostics to unambiguously identify and study the dynamics of multiple underlying instabilities during the route to plasma turbulence.

    PubMed

    Thakur, S C; Brandt, C; Light, A; Cui, L; Gosselin, J J; Tynan, G R

    2014-11-01

    We use multiple-tip Langmuir probes and fast imaging to unambiguously identify and study the dynamics of underlying instabilities during the controlled route to fully developed plasma turbulence in a linear magnetized helicon plasma device. Langmuir probes measure radial profiles of electron temperature, plasma density and potential, from which we compute linear growth rates of instabilities, the cross-phase between density and potential fluctuations, Reynolds stress, particle flux, vorticity, time-delay-estimated velocity, etc. Fast imaging complements the 1D probe measurements by providing temporally and spatially resolved 2D details of plasma structures associated with the instabilities. We find that three radially separated plasma instabilities exist simultaneously. Density-gradient-driven resistive drift waves propagating in the electron diamagnetic drift direction separate the plasma into an edge region dominated by strong, velocity-shear-driven Kelvin-Helmholtz instabilities and a central core region which shows coherent Rayleigh-Taylor modes propagating in the ion diamagnetic drift direction. The simultaneous, complementary use of both probes and camera was crucial to identify the instabilities and understand the details of the very rich plasma dynamics.
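
The "time-delay estimated velocity" mentioned in this abstract comes from cross-correlating signals from spatially separated probe tips. A minimal sketch with synthetic noise signals (not plasma data):

```python
import random

def time_delay(sig_a, sig_b, max_lag):
    """Estimate the delay (in samples) of sig_b relative to sig_a as the lag
    maximizing their cross-correlation. Dividing the probe-tip separation by
    (delay * sampling interval) then gives a time-delay-estimated velocity."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(sig_a[i], sig_b[i + lag]) for i in range(len(sig_a))
                 if 0 <= i + lag < len(sig_b)]
        corr = sum(a * b for a, b in pairs) / len(pairs)  # mean cross-product
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag
```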

  20. Quantum Chemically Estimated Abraham Solute Parameters Using Multiple Solvent-Water Partition Coefficients and Molecular Polarizability.

    PubMed

    Liang, Yuzhen; Xiong, Ruichang; Sandler, Stanley I; Di Toro, Dominic M

    2017-09-05

    Polyparameter Linear Free Energy Relationships (pp-LFERs), also called Linear Solvation Energy Relationships (LSERs), are used to predict many environmentally significant properties of chemicals. A method is presented for computing the necessary chemical parameters, the Abraham parameters (AP), used by many pp-LFERs. It employs quantum chemical calculations and uses only the chemical's molecular structure. The method computes the Abraham E parameter using density functional theory computed molecular polarizability and the Clausius-Mossotti equation relating the index refraction to the molecular polarizability, estimates the Abraham V as the COSMO calculated molecular volume, and computes the remaining AP S, A, and B jointly with a multiple linear regression using sixty-five solvent-water partition coefficients computed using the quantum mechanical COSMO-SAC solvation model. These solute parameters, referred to as Quantum Chemically estimated Abraham Parameters (QCAP), are further adjusted by fitting to experimentally based APs using QCAP parameters as the independent variables so that they are compatible with existing Abraham pp-LFERs. QCAP and adjusted QCAP for 1827 neutral chemicals are included. For 24 solvent-water systems including octanol-water, predicted log solvent-water partition coefficients using adjusted QCAP have the smallest root-mean-square errors (RMSEs, 0.314-0.602) compared to predictions made using APs estimated using the molecular fragment based method ABSOLV (0.45-0.716). For munition and munition-like compounds, adjusted QCAP has much lower RMSE (0.860) than does ABSOLV (4.45) which essentially fails for these compounds.

  1. High-efficiency acceleration in the laser wakefield by a linearly increasing plasma density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Kegong; Wu, Yuchi; Zhu, Bin

    The acceleration length and the peak energy of the electron beam are limited by the dephasing effect in laser wakefield acceleration with a uniform plasma density. Based on 2D-3V particle-in-cell simulations, the effects of a linearly increasing plasma density on electron acceleration are investigated broadly. Compared with a uniform plasma density, the electron beam energy is twice as high in the moderately nonlinear wakefield regime, because the acceleration length is prolonged and the accelerating field gradually increases with the rising plasma density. Because of the lower plasma density, the linearly increasing profile can also avoid the dark current caused by additional injection. At the optimal acceleration length, the electron energy can be increased from 350 MeV (uniform) to 760 MeV (linearly increasing) with an energy spread of 1.8%, a beam duration of 5 fs and a beam waist of 1.25 μm. This linearly increasing plasma density distribution can be achieved by a capillary with a special gas-filled structure, and is well suited for experiments.

  2. Electronic Structure Methods Based on Density Functional Theory

    DTIC Science & Technology

    2010-01-01

    ...chapter in the ASM Handbook, Volume 22A: Fundamentals of Modeling for Metals Processing, 2010. PAO Case Number: 88ABW-2009-3258; Clearance Date: 16 Jul. ...are represented using a linear combination, or basis, of plane waves. Over time several methods were developed to avoid the large number of planewaves

  3. Impact of Sampling Density on the Extent of HIV Clustering

    PubMed Central

    Novitsky, Vlad; Moyo, Sikhulile; Lei, Quanhong; DeGruttola, Victor

    2014-01-01

    Abstract Identifying and monitoring HIV clusters could be useful in tracking the leading edge of HIV transmission in epidemics. Currently, greater specificity in the definition of HIV clusters is needed to reduce confusion in the interpretation of HIV clustering results. We address sampling density as one of the key aspects of HIV cluster analysis. The proportion of viral sequences in clusters was estimated at sampling densities from 1.0% to 70%. A set of 1,248 HIV-1C env gp120 V1C5 sequences from a single community in Botswana was utilized in simulation studies. Matching numbers of HIV-1C V1C5 sequences from the LANL HIV Database were used as comparators. HIV clusters were identified by phylogenetic inference under bootstrapped maximum likelihood and pairwise distance cut-offs. Sampling density below 10% was associated with stochastic HIV clustering with broad confidence intervals. HIV clustering increased linearly at sampling densities >10%, accompanied by narrowing confidence intervals. Patterns of HIV clustering were similar at bootstrap thresholds of 0.7 to 1.0, but the extent of HIV clustering decreased with higher bootstrap thresholds. The origin of sampling (concentrated local vs. scattered global) had a substantial impact on HIV clustering at sampling densities ≥10%. A pairwise distance cut-off of 10% was estimated as a threshold for cluster analysis of HIV-1 V1C5 sequences. The node bootstrap support distribution provided additional evidence for 10% sampling density as the threshold for HIV cluster analysis. The detectability of HIV clusters is substantially affected by sampling density. A minimal genotyping density of 10% and sampling density of 50–70% are suggested for HIV-1 V1C5 cluster analysis. PMID:25275430
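    The core effect, stochastic clustering estimates at low sampling density, can be illustrated with a toy subsampling experiment (synthetic cluster labels, not actual phylogenetic inference):

```python
import random

def clustered_fraction(labels, density, rng):
    """Fraction of subsampled sequences that still fall in a cluster,
    i.e. share their (true) cluster label with >= 1 other sampled sequence."""
    k = max(1, int(round(density * len(labels))))
    sample = rng.sample(labels, k)
    counts = {}
    for lab in sample:
        counts[lab] = counts.get(lab, 0) + 1
    return sum(c for c in counts.values() if c >= 2) / k

# Toy data: 200 sequences in 40 transmission clusters of 5 members each.
labels = [i // 5 for i in range(200)]
rng = random.Random(0)
for density in (0.05, 0.10, 0.50):
    estimates = [clustered_fraction(labels, density, rng) for _ in range(200)]
    print(density, round(sum(estimates) / len(estimates), 2))
```

    At 5% density most sampled sequences appear unclustered and replicate-to-replicate spread is large; as density grows, the clustered fraction rises and stabilizes, mirroring the narrowing confidence intervals described above.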

  4. Online sequential Monte Carlo smoother for partially observed diffusion processes

    NASA Astrophysics Data System (ADS)

    Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain

    2018-12-01

    This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. The method relies on a new sequential Monte Carlo procedure that computes such approximations online, i.e., as the observations are received, with a computational complexity growing linearly in the number of Monte Carlo samples. The original algorithm cannot be used for partially observed stochastic differential equations, since the transition density of the latent process is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. The resulting estimator is proved to be consistent, and its performance is illustrated using data from two models.
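    For intuition, here is a minimal bootstrap particle filter on a linear-Gaussian state-space model, where the transition density is known and can be sampled exactly; the algorithm in the paper addresses precisely the diffusion setting in which this density must be replaced by an unbiased (e.g. Poisson-type) estimator. All model parameters are illustrative:

```python
import math, random

def bootstrap_filter(obs, n_part, rng, a=0.9, sx=0.5, sy=1.0):
    """Bootstrap particle filter for X_t = a*X_{t-1} + sx*eps_t,
    Y_t = X_t + sy*eta_t. Returns the filtering means E[X_t | Y_{1:t}]."""
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_part)]
    means = []
    for y in obs:
        # propagate particles through the (here, known) transition kernel
        parts = [a * x + rng.gauss(0.0, sx) for x in parts]
        # weight by the Gaussian observation density
        w = [math.exp(-0.5 * ((y - x) / sy) ** 2) for x in parts]
        tot = sum(w)
        means.append(sum(wi * x for wi, x in zip(w, parts)) / tot)
        # multinomial resampling keeps the particle cloud from degenerating
        parts = rng.choices(parts, weights=w, k=n_part)
    return means

means = bootstrap_filter([1.0] * 20, n_part=500, rng=random.Random(42))
```

    The online smoother of the paper keeps, alongside each particle, a running statistic of the additive functional, so that memory and per-step cost stay constant as observations arrive.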

  5. Ionization effects and linear stability in a coaxial plasma device

    NASA Astrophysics Data System (ADS)

    Kurt, Erol; Kurt, Hilal; Bayhan, Ulku

    2009-03-01

    A 2-D computer simulation of a coaxial plasma device, based on the conservation equations for electrons, ions, and excited atoms together with the Poisson equation for a plasma gun, is carried out. Some characteristics of the plasma focus device (PF), such as the critical wave numbers a_c and voltages U_c for various pressures P, are estimated in order to satisfy the necessary conditions for traveling particle densities (i.e., plasma patterns) via a linear analysis. Oscillatory solutions are characterized by a nonzero imaginary part of the growth rate, Im(σ), for all cases. The model also predicts the minimal voltage ranges of the system for certain pressure intervals.

  6. Finite linear diffusion model for design of overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. The model has been experimentally verified using 1,1′-dimethylferrocene as a redox additive. The theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
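    For a finite linear diffusion layer, the steady-state diffusion-limited shuttle current density takes the form i_max = nFDC/L, which directly bounds the maximum permissible overcharge rate. A sketch with illustrative numbers (not the paper's measured values):

```python
F = 96485.0  # Faraday constant, C/mol

def max_shuttle_current(n, D, C, L):
    """Diffusion-limited current density i_max = n*F*D*C/L in A/cm^2,
    for D in cm^2/s, C in mol/cm^3, interelectrode distance L in cm."""
    return n * F * D * C / L

# 1-electron couple, D = 1e-6 cm^2/s, 0.05 M = 5e-5 mol/cm^3, 25 um gap
i_max = max_shuttle_current(n=1, D=1e-6, C=5e-5, L=2.5e-3)
```

    Any overcharge current density below i_max can be carried entirely by the shuttle; raising the additive concentration or shrinking the interelectrode gap raises the protected rate proportionally.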

  7. Leaf-on canopy closure in broadleaf deciduous forests predicted during winter

    USGS Publications Warehouse

    Twedt, Daniel J.; Ayala, Andrea J.; Shickel, Madeline R.

    2015-01-01

    Forest canopy influences light transmittance, which in turn affects tree regeneration and survival, thereby having an impact on forest composition and habitat conditions for wildlife. Because leaf area is the primary impediment to light penetration, quantitative estimates of canopy closure are normally made during summer. Studies of forest structure and wildlife habitat that occur during winter, when deciduous trees have shed their leaves, may inaccurately estimate canopy closure. We estimated percent canopy closure during both summer (leaf-on) and winter (leaf-off) in broadleaf deciduous forests in Mississippi and Louisiana using gap light analysis of hemispherical photographs that were obtained during repeat visits to the same locations within bottomland and mesic upland hardwood forests and hardwood plantation forests. We used mixed-model linear regression to predict leaf-on canopy closure from measurements of leaf-off canopy closure, basal area, stem density, and tree height. Competing predictive models all included leaf-off canopy closure (relative importance = 0.93), whereas basal area and stem density, more traditional predictors of canopy closure, had relative model importance of ≤ 0.51.
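    A minimal version of the core prediction, leaf-on closure regressed on leaf-off closure, can be sketched with ordinary least squares (the study used mixed-model regression with additional covariates; the data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
leaf_off = rng.uniform(40.0, 80.0, 50)                      # winter closure, %
leaf_on = 30.0 + 0.8 * leaf_off + rng.normal(0.0, 2.0, 50)  # summer closure, %

# OLS fit: leaf_on ~ intercept + slope * leaf_off
X = np.column_stack([np.ones_like(leaf_off), leaf_off])
(intercept, slope), *_ = np.linalg.lstsq(X, leaf_on, rcond=None)

# predicted summer closure at a hypothetical 65% winter closure
predicted = intercept + slope * 65.0
```
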

  8. Indirect Validation of Probe Speed Data on Arterial Corridors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eshragh, Sepideh; Young, Stanley E.; Sharifi, Elham

    This study aimed to estimate the accuracy of probe speed data on arterial corridors on the basis of roadway geometric attributes and functional classification. It was assumed that functional class (medium and low) along with other road characteristics (such as weighted average of the annual average daily traffic, average signal density, average access point density, and average speed) were available as correlation factors to estimate the accuracy of probe traffic data. This study tested these factors as predictors of the fidelity of probe traffic data by using the results of an extensive validation exercise. It showed strong correlations between these geometric attributes and the accuracy of probe data when assessed using average absolute speed error. Linear models were regressed to the existing data to estimate appropriate models for medium- and low-type arterial corridors. The proposed models for medium- and low-type arterials were validated further on the basis of the results of a slowdown analysis. These models can be used to predict the accuracy of probe data indirectly on medium and low types of arterial corridors.

  9. Estimation of Enthalpy of Formation of Liquid Transition Metal Alloys: A Modified Prescription Based on Macroscopic Atom Model of Cohesion

    NASA Astrophysics Data System (ADS)

    Raju, Subramanian; Saibaba, Saroja

    2016-09-01

    The enthalpy of formation Δ°Hf is an important thermodynamic quantity, which sheds significant light on fundamental cohesive and structural characteristics of an alloy. However, being difficult to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating Δ°Hf of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely the electronegativity (ϕ^L) and the bonding electron density (n_b^L). Such unique identification is made through the use of well-established relationships connecting the surface tension, compressibility, and molar volume of a metallic liquid with the bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for ϕ^L and n_b^L, together with other auxiliary model parameters, is subsequently optimized to obtain good numerical agreement between calculated and experimental values of Δ°Hf for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of the mixing enthalpies of liquid alloys.

  10. Relationship between symbiont density and photosynthetic carbon acquisition in the temperate coral Cladocora caespitosa

    NASA Astrophysics Data System (ADS)

    Hoogenboom, M.; Beraud, E.; Ferrier-Pagès, C.

    2010-03-01

    This study quantified variation in net photosynthetic carbon gain in response to natural fluctuations in symbiont density for the Mediterranean coral Cladocora caespitosa, and evaluated which density maximized photosynthetic carbon acquisition. To do this, carbon acquisition was modeled as an explicit function of symbiont density. The model was parameterized using measurements of rates of photosynthesis and respiration for small colonies spanning a broad range of zooxanthella concentrations. Results demonstrate that rates of net photosynthesis increase asymptotically with symbiont density, whereas rates of respiration increase linearly. In combination, these functional responses meant that colony energy acquisition decreased at both low and very high zooxanthella densities. However, there was a wide range of symbiont densities for which net daily photosynthesis was approximately equivalent. Therefore, significant changes in symbiont density do not necessarily cause a change in autotrophic energy acquisition by the colony. Model estimates of the optimal range of cell densities corresponded well with independent observations of symbiont concentrations obtained from field and laboratory studies of healthy colonies. Overall, this study demonstrates that the seasonal fluctuations in symbiont numbers observed in healthy colonies of this Mediterranean coral do not have a strong effect on photosynthetic energy acquisition.

  11. Relationships between brightness of nighttime lights and population density

    NASA Astrophysics Data System (ADS)

    Naizhuo, Z.

    2012-12-01

    Brightness of nighttime lights has been proven to be a good proxy for socioeconomic and demographic statistics. Moreover, satellite nighttime lights data have been used to spatially disaggregate amounts of gross domestic product (GDP), fossil fuel carbon dioxide emissions, and electric power consumption (Ghosh et al., 2010; Oda and Maksyutov, 2011; Zhao et al., 2012). Spatial disaggregations in these previous studies were performed on the basis of assumed linear relationships between the digital number (DN) value of pixels in the nighttime light images and socioeconomic data. However, the reliability of these linear relationships was never tested, owing to the lack of sufficiently high-spatial-resolution (equal to or finer than 1 km × 1 km) statistical data. With the similar assumption that brightness correlates linearly with population, Bharti et al. (2011) used nighttime light data as a proxy for population density and then developed a model of seasonal fluctuations of measles in West Africa. The Oak Ridge National Laboratory used sub-national census population data and high-spatial-resolution remotely sensed images to produce the LandScan population raster datasets. The LandScan population datasets have a 1 km × 1 km spatial resolution, consistent with the spatial resolution of the nighttime light images. Therefore, in this study I selected the 2008 LandScan population data as baseline reference data and the contiguous United States as the study area. Relationships between the DN value of pixels in the 2008 Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) stable light image and population density were established. Results showed that an exponential function reflects the relationship between luminosity and population density more accurately than a linear function. Additionally, a certain number of saturated pixels with a DN value of 63 exist in urban core areas.
If the exponential function were used directly to estimate population density over the whole brightly lit area, relatively large underestimations would emerge in the urban core regions. Previous studies have shown that GDP, carbon dioxide emissions, and electric power consumption strongly correlate with urban population (Ghosh et al., 2010; Sutton et al., 2007; Zhao et al., 2012). Thus, although this study only examined the relationships between brightness of nighttime lights and population density, the results can provide insight for the spatial disaggregation of socioeconomic data (e.g., GDP, carbon dioxide emissions, and electric power consumption) using satellite nighttime light imagery. Simply distributing the socioeconomic data to each pixel in proportion to the DN value of the nighttime light images may generate relatively large errors. References: Bharti N, Tatem AJ, Ferrari MJ, Grais RF, Djibo A, Grenfell BT, 2011. Science, 334:1424-1427. Ghosh T, Elvidge CD, Sutton PC, Baugh KE, Ziskin D, Tuttle BT, 2010. Energies, 3:1895-1913. Oda T, Maksyutov S, 2011. Atmospheric Chemistry and Physics, 11:543-556. Sutton PC, Elvidge CD, Ghosh T, 2007. International Journal of Ecological Economics and Statistics, 8:5-21. Zhao N, Ghosh T, Samson EL, 2012. International Journal of Remote Sensing, 33:6304-6320.
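    The exponential DN-density relationship can be fitted as a log-linear regression, excluding saturated pixels (DN = 63). The sketch below uses synthetic data with a known exponential law, not DMSP-OLS/LandScan values:

```python
import numpy as np

rng = np.random.default_rng(1)
dn = rng.integers(1, 64, 500)                                 # DN values, 1..63
pop = 5.0 * np.exp(0.07 * dn) * rng.lognormal(0.0, 0.1, 500)  # people per km^2

mask = dn < 63                                  # drop saturated urban-core pixels
b, log_a = np.polyfit(dn[mask], np.log(pop[mask]), 1)
a = np.exp(log_a)                               # recovered model: pop ~ a*exp(b*DN)
```
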

  12. Evaluating habitat for black-footed ferrets: Revision of an existing model

    USGS Publications Warehouse

    Biggins, Dean E.; Lockhart, J. Michael; Godbey, Jerry L.

    2006-01-01

    Black-footed ferrets (Mustela nigripes) are highly dependent on prairie dogs (Cynomys spp.) as prey, and prairie dog colonies are the only known habitats that sustain black-footed ferret populations. An existing model used extensively for evaluating black-footed ferret reintroduction habitat defined complexes by interconnecting colonies with 7-km line segments. Although the 7-km complex remains a useful construct, we propose additional, smaller-scale evaluations that consider 1.5-km subcomplexes. The original model estimated the carrying capacity of complexes based on energy requirements of ferrets and density estimates of their prairie dog prey. Recent data have supported earlier contentions of intraspecific competition and intrasexual territorial behavior in ferrets. We suggest a revised model that retains the fixed linear relationship of the existing model when prairie dog densities are <18/ha and uses a curvilinear relationship that reflects increasing effects of ferret territoriality when there are 18–42 prairie dogs per hectare. We discuss possible effects of colony size and shape, interacting with territoriality, as justification for the exclusion of territorial influences if a prairie dog colony supports only a single female ferret. We also present data to support continued use of active prairie dog burrow densities as indices suitable for broad-scale estimates of prairie dog density. Calculation of percent of complexes that are occupied by prairie dog colonies was recommended as part of the original habitat evaluation process. That attribute has been largely ignored, resulting in rating anomalies.

  13. Geometric contribution leading to anomalous estimation of two-dimensional electron gas density in GaN based heterostructures

    NASA Astrophysics Data System (ADS)

    Upadhyay, Bhanu B.; Jha, Jaya; Takhar, Kuldeep; Ganguly, Swaroop; Saha, Dipankar

    2018-05-01

    We have observed that the estimated two-dimensional electron gas density depends on the device geometry. This geometric contribution leads to anomalous estimation of GaN-based heterostructure properties. The observed discrepancy is found to originate from the anomalous area-dependent capacitance of GaN-based Schottky diodes, which are an integral part of high electron mobility transistors. The areal capacitance density is found to increase for smaller-radius Schottky diodes, rather than remaining constant as intuitively expected. The capacitance follows a second-order polynomial in the radius at all bias voltages and frequencies considered here. In addition to the quadratic term corresponding to the areal component, the linear term indicates a peripheral component. It is further observed that the ratio of the peripheral to the areal contribution is inversely proportional to the radius, confirming the periphery as the location of the additional capacitance. The peripheral component is frequency dependent and tends to saturate to a lower value in measurements at high frequency. In addition, the peripheral component vanishes when the surface is passivated by a combination of N2 and O2 plasma treatments. The cumulative surface state density per unit length of the perimeter of the Schottky diodes, as obtained by integrating the response over the distance between the ohmic and Schottky contacts, is found to be 2.75 × 10^10 cm^-1.
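    Separating the areal and peripheral contributions amounts to fitting C(r) = αr² + βr + c and checking that β/(αr) falls off as 1/r. A sketch with made-up coefficients, not the measured device data:

```python
import numpy as np

r = np.array([25.0, 50.0, 100.0, 200.0])   # Schottky diode radii, um
C = 1.0e-4 * r**2 + 2.0e-2 * r             # capacitance, fF (illustrative law)

alpha, beta, c0 = np.polyfit(r, C, 2)      # quadratic ~ area, linear ~ perimeter
ratio = beta / (alpha * r)                 # peripheral/areal contribution, ~ 1/r
```
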

  14. Growth and dissolution of spherical density enhancements in SCDEW cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonometto, Silvio A.; Mainini, Roberto, E-mail: bonometto@oats.inaf.it, E-mail: roberto.mainini@mib.infn.it

    2017-06-01

    Strongly Coupled Dark Energy plus Warm dark matter (SCDEW) cosmologies are based on the finding of a conformally invariant (CI) attractor solution during the early radiative expansion, requiring the stationary presence of ∼1% of coupled DM and DE since inflationary reheating. In these models, coupled-DM fluctuations grow up to non-linearity even in the early radiative expansion, as shown in a previous associated paper. Such early non-linear stages are modelled here through the evolution of a top-hat density enhancement. As expected, its radius R increases up to a maximum and then starts to decrease. Virial balance is reached when the coupled-DM density contrast is just 25–26 and the DM density enhancement is O(10%) of the total density. Moreover, we find that this is not an equilibrium configuration: afterwards, the coupling causes DM particle velocities to increase, so that the fluctuation gradually dissolves. We estimate the duration of the whole process, from horizon crossing to dissolution, and find z_horizon/z_erasing ∼ 3 × 10^4. Therefore, only fluctuations entering the horizon at z ≲ 10^9–10^10 are able to accrete WDM with mass ∼100 eV, as soon as it becomes non-relativistic, so avoiding full disruption. Accordingly, SCDEW cosmologies whose WDM has mass ∼100 eV can preserve primeval fluctuations down to the stellar mass scale.

  15. Mapping brucellosis increases relative to elk density using hierarchical Bayesian models

    USGS Publications Warehouse

    Cross, Paul C.; Heisey, Dennis M.; Scurlock, Brandon M.; Edwards, William H.; Brennan, Angela; Ebinger, Michael R.

    2010-01-01

    The relationship between host density and parasite transmission is central to the effectiveness of many disease management strategies. Few studies, however, have empirically estimated this relationship, particularly in large mammals. We applied hierarchical Bayesian methods to a 19-year dataset of over 6400 brucellosis tests of adult female elk (Cervus elaphus) in northwestern Wyoming. Elk from management captures that occurred from January to March were over two times more likely to be seropositive than hunted elk killed in September to December, while accounting for site and year effects. Areas with supplemental feeding grounds for elk had higher seroprevalence in 1991 than other regions, but by 2009 many areas distant from the feeding grounds were of comparable seroprevalence. The increases in brucellosis seroprevalence were correlated with elk densities at the elk management unit, or hunt area, scale (mean 2070 km^2; range = [95–10237]). The data, however, could not differentiate between linear and non-linear effects of host density. Therefore, control efforts that focus on reducing elk densities at a broad spatial scale were only weakly supported. Additional research on how a few large groups within a region may be driving disease dynamics is needed for more targeted and effective management interventions. Brucellosis appears to be expanding its range into new regions and elk populations, which is likely to further complicate the United States brucellosis eradication program. This study is an example of how the dynamics of host populations can affect their ability to serve as disease reservoirs.

  16. Cosmological Constraints from Fourier Phase Statistics

    NASA Astrophysics Data System (ADS)

    Ali, Kamran; Obreschkow, Danail; Howlett, Cullan; Bonvin, Camille; Llinares, Claudio; Oliveira Franco, Felipe; Power, Chris

    2018-06-01

    Most statistical inference from cosmic large-scale structure relies on two-point statistics, i.e. on the galaxy-galaxy correlation function (2PCF) or the power spectrum. These statistics capture the full information encoded in the Fourier amplitudes of the galaxy density field but do not describe the Fourier phases of the field. Here, we quantify the information contained in the line correlation function (LCF), a three-point Fourier phase correlation function. Using cosmological simulations, we estimate the Fisher information (at redshift z = 0) of the 2PCF, LCF and their combination, regarding the cosmological parameters of the standard ΛCDM model, as well as a Warm Dark Matter (WDM) model and the f(R) and Symmetron modified gravity models. The galaxy bias is accounted for at the level of a linear bias. The relative information of the 2PCF and the LCF depends on the survey volume, sampling density (shot noise) and the bias uncertainty. For a volume of 1 h^-3 Gpc^3, sampled with points of mean density n̄ = 2 × 10^-3 h^3 Mpc^-3 and a bias uncertainty of 13%, the LCF improves the parameter constraints by about 20% in the ΛCDM cosmology and potentially even more in alternative models. Finally, since a linear bias only affects the Fourier amplitudes (2PCF), but not the phases (LCF), the combination of the 2PCF and the LCF can be used to break the degeneracy between the linear bias and σ8, present in 2-point statistics.

  17. Modelling long-term fire occurrence factors in Spain by accounting for local variations with geographically weighted regression

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, J.; Chuvieco, E.; Koutsias, N.

    2013-02-01

    Humans are responsible for most forest fires in Europe, but anthropogenic factors behind these events are still poorly understood. We tried to identify the driving factors of human-caused fire occurrence in Spain by applying two different statistical approaches. Firstly, assuming stationary processes for the whole country, we created models based on multiple linear regression and binary logistic regression to find factors associated with fire density and fire presence, respectively. Secondly, we used geographically weighted regression (GWR) to better understand and explore the local and regional variations of those factors behind human-caused fire occurrence. The number of human-caused fires occurring within a 25-yr period (1983-2007) was computed for each of the 7638 Spanish mainland municipalities, creating a binary variable (fire/no fire) to develop logistic models, and a continuous variable (fire density) to build standard linear regression models. A total of 383 657 fires were registered in the study dataset. The binary logistic model, which estimates the probability of having/not having a fire, successfully classified 76.4% of the total observations, while the ordinary least squares (OLS) regression model explained 53% of the variation of the fire density patterns (adjusted R2 = 0.53). Both approaches confirmed, in addition to forest and climatic variables, the importance of variables related with agrarian activities, land abandonment, rural population exodus and developmental processes as underlying factors of fire occurrence. For the GWR approach, the explanatory power of the GW linear model for fire density using an adaptive bandwidth increased from 53% to 67%, while for the GW logistic model the correctly classified observations improved only slightly, from 76.4% to 78.4%, but significantly according to the corrected Akaike Information Criterion (AICc), from 3451.19 to 3321.19. 
The results from GWR indicated a significant spatial variation in the local parameter estimates for all the variables and an important reduction of the autocorrelation in the residuals of the GW linear model. Despite the fitting improvement of local models, GW regression, more than an alternative to "global" or traditional regression modelling, seems to be a valuable complement to explore the non-stationary relationships between the response variable and the explanatory variables. The synergy of global and local modelling provides insights into fire management and policy and helps further our understanding of the fire problem over large areas while at the same time recognizing its local character.

  18. Digital Breast Tomosynthesis guided Near Infrared Spectroscopy: Volumetric estimates of fibroglandular fraction and breast density from tomosynthesis reconstructions

    PubMed Central

    Vedantham, Srinivasan; Shi, Linxi; Michaelsen, Kelly E.; Krishnaswamy, Venkataramanan; Pogue, Brian W.; Poplack, Steven P.; Karellas, Andrew; Paulsen, Keith D.

    2016-01-01

    A multimodality system has been developed that combines a clinical prototype digital breast tomosynthesis unit, with its imaging geometry modified to facilitate near-infrared spectroscopic imaging. The accuracy of parameters recovered from near-infrared spectroscopy depends on fibroglandular tissue content. Hence, in this study, volumetric estimates of fibroglandular tissue were determined from tomosynthesis reconstructions. A kernel-based fuzzy c-means algorithm was implemented to segment tomosynthesis reconstructed slices in order to estimate fibroglandular content and to provide anatomic priors for near-infrared spectroscopy. This algorithm was used to determine volumetric breast density (VBD), defined as the ratio of fibroglandular tissue volume to total breast volume expressed as a percentage, from 62 tomosynthesis reconstructions of 34 study participants. For a subset of study participants who subsequently underwent mammography, VBD from mammography, matched for subject, breast laterality and mammographic view, was quantified using commercial software and statistically analyzed to determine whether it differed from tomosynthesis. Summary statistics of the VBD from all study participants were compared with prior independent studies. The fibroglandular volumes from tomosynthesis and mammography were not statistically different (p=0.211, paired t-test). After accounting for the compressed breast thicknesses, which differed between tomosynthesis and mammography, the VBD from tomosynthesis was correlated with (r=0.809, p<0.001), did not statistically differ from (p>0.99, paired t-test), and was linearly related to the VBD from mammography. Summary statistics of the VBD from tomosynthesis were not statistically different from prior studies using high-resolution dedicated breast computed tomography.
The observed correlation and linear association in VBD between mammography and tomosynthesis suggest that breast-density-associated risk measures determined for mammography are translatable to tomosynthesis. Accounting for compressed breast thickness is important when it differs between the two modalities. The fibroglandular volume from tomosynthesis reconstructions is similar to that from mammography, indicating suitability for use during near-infrared spectroscopy. PMID:26941961

  19. Moored observations of the Deep Western Boundary Current in the NW Atlantic: 2004-2014

    NASA Astrophysics Data System (ADS)

    Toole, John M.; Andres, Magdalena; Le Bras, Isabela A.; Joyce, Terrence M.; McCartney, Michael S.

    2017-09-01

    A moored array spanning the continental slope southeast of Cape Cod sampled the equatorward-flowing Deep Western Boundary Current (DWBC) for a 10 year period: May 2004 to May 2014. Daily profiles of subinertial velocity, temperature, salinity, and neutral density are constructed for each mooring site and cross-line DWBC transport time series are derived for specified water mass layers. Time-averaged transports based on daily estimates of the flow and density fields in Stream coordinates are contrasted with those derived from the Eulerian-mean flow field, modes of DWBC transport variability are investigated through compositing, and comparisons are made to transport estimates for other latitudes. Integrating the daily velocity estimates over the neutral density range of 27.8-28.125 kg/m3 (encompassing Labrador Sea and Overflow Water layers), a mean equatorward DWBC transport of 22.8 × 106 ± 1.9 × 106 m3/s is obtained. Notably, a statistically significant trend of decreasing equatorward transport is observed in several of the DWBC components as well as the current as a whole. The largest linear change (a 4% decrease per year) is seen in the layer of Labrador Sea Water that was renewed by deep convection in the early 1990s whose transport fell from 9.0 × 106 m3/s at the beginning of the field program to 5.8 × 106 m3/s at its end. The corresponding linear fit to the combined Labrador Sea and Overflow Water DWBC transport decreases from 26.4 × 106 to 19.1 × 106 m3/s. In contrast, no long-term trend is observed in upper ocean Slope Water transport. These trends are discussed in the context of decadal observations of the North Atlantic circulation, and subpolar air-sea interaction/water mass transformation.
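    A linear trend of the kind quoted above (26.4 falling to 19.1 × 10^6 m^3/s over ten years) is a least-squares fit to the daily transport series; a sketch on synthetic data with those endpoints built in:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 3653)                            # time, years (daily)
transport = 26.4 - 0.73 * t + rng.normal(0.0, 3.0, t.size)  # Sv, synthetic series

slope, intercept = np.polyfit(t, transport, 1)  # trend, Sv per year
end_value = intercept + slope * 10.0            # fitted transport at year 10
```
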

  20. Breast density quantification with cone-beam CT: A post-mortem study

    PubMed Central

    Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee

    2014-01-01

    Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification was investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means (FCM) algorithm. The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification, with high linear correlation coefficients between the right and left breasts of each pair. When compared with the gold-standard %FGV from chemical analysis, Pearson's r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the post-mortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. Of the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
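    A minimal fuzzy c-means (m = 2) on voxel intensities shows the segmentation idea; the intensity distributions below are synthetic stand-ins for adipose and fibroglandular tissue, not CT data:

```python
import numpy as np

def fcm_1d(x, iters=100, m=2.0):
    """Two-class fuzzy c-means on a 1-D intensity array.
    Returns the cluster centers and the n-by-2 membership matrix."""
    centers = np.array([x.min(), x.max()], dtype=float)  # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)        # weighted center update
    return centers, u

rng = np.random.default_rng(3)
voxels = np.concatenate([rng.normal(30.0, 3.0, 800),   # adipose-like
                         rng.normal(70.0, 3.0, 200)])  # fibroglandular-like
centers, u = fcm_1d(voxels)
fgv_fraction = u[:, np.argmax(centers)].mean()  # ~ fibroglandular fraction
```

    Compared with a hard histogram threshold, each voxel contributes a graded membership, which is what reduces sensitivity to the exact cut-point.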

  1. Short-term response of Dicamptodon tenebrosus larvae to timber management in southwestern Oregon

    USGS Publications Warehouse

    Leuthold, Niels; Adams, Michael J.; Hayes, John P.

    2012-01-01

    In the Pacific Northwest, previous studies have found a negative effect of timber management on the abundance of stream amphibians, but results have been variable and region specific. These studies have generally used survey methods that did not account for differences in capture probability and focused on stands that were harvested under older management practices. We examined the influences of contemporary forest practices on larval Dicamptodon tenebrosus as part of the Hinkle Creek paired watershed study. We used a mark-recapture analysis to estimate D. tenebrosus density at 100 1-m sites spread throughout the basin and used extended linear models that accounted for correlation resulting from the repeated surveys at sites across years. Density was associated with substrate, but we found no evidence of an effect of harvest. While holding other factors constant, the model-averaged estimates indicated: (1) each 10% increase in small cobble or larger substrate increased median density of D. tenebrosus 1.05 times, (2) each 100-ha increase in the upstream area drained decreased median density of D. tenebrosus 0.96 times, and (3) increasing the fish density in the 40 m around a site by 0.01 increased median salamander density 1.01 times. Although this study took place in a single basin, it suggests that timber management in similar third-order basins of the southwestern Oregon Cascade foothills is unlikely to have short-term effects on D. tenebrosus larvae.

  2. Photo-Seebeck effect in tetragonal PbO single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondal, P. S.; Okazaki, R.; Taniguchi, H.

    2013-11-07

    We report the observation of the photo-Seebeck effect in tetragonal PbO crystals. The photo-induced carriers contribute to the transport phenomena, and consequently the electrical conductivity increases and the Seebeck coefficient decreases with increasing photon flux density. A parallel-circuit model is used to evaluate the actual contributions of photo-excited carriers from the measured transport data. The photo-induced carrier concentration estimated from the Seebeck coefficient increases almost linearly with increasing photon flux density, indicating a successful photo-doping effect on the thermoelectric property. The mobility decreases under illumination but the reduction rate strongly depends on the illuminated photon energy. Possible mechanisms of such photon-energy-dependent mobility are discussed.

  3. Measuring the charge density of a tapered optical fiber using trapped microparticles.

    PubMed

    Kamitani, Kazuhiko; Muranaka, Takuya; Takashima, Hideaki; Fujiwara, Masazumi; Tanaka, Utako; Takeuchi, Shigeki; Urabe, Shinji

    2016-03-07

    We report the measurements of charge density of tapered optical fibers using charged particles confined in a linear Paul trap at ambient pressure. A tapered optical fiber is placed across the trap axis at a right angle, and polystyrene microparticles are trapped along the trap axis. The distance between the equilibrium position of a positively charged particle and the tapered fiber is used to estimate the amount of charge per unit length of the fiber without knowing the amount of charge of the trapped particle. The charge per unit length of a tapered fiber with a diameter of 1.6 μm was measured to be 2(+3/−1) × 10⁻¹¹ C/m.

  4. Effect of horseshoe crab spawning density on nest disturbance and exhumation of eggs: A simulation study

    USGS Publications Warehouse

    Smith, D.R.

    2007-01-01

    Because the Delaware Bay horseshoe crab (Limulus polyphemus) population is managed to provide for dependent species, such as migratory shorebirds, there is a need to understand the process of egg exhumation and to predict eggs available to foraging shorebirds. A simple spatial model was used to simulate horseshoe crab spawning that would occur on a typical Delaware Bay beach during spring tide cycles to quantify density-dependent nest disturbance. At least 20% of nests and eggs were disturbed for levels of spawning greater than one third of the average density in Delaware Bay during 2004. Nest disturbance increased approximately linearly as spawning density increased from one half to twice the 2004 level. As spawning density increased further, the percentage of eggs that were disturbed reached an asymptote of 70% for densities up to 10 times the density in 2004. Nest disturbance was heaviest in the mid beach zone. Nest disturbance precedes entrainment and begins the process of exhumation of eggs to surface sediments. Model predictions were combined with observations from egg surveys to estimate a snap-shot exhumation rate of 5-9% of disturbed eggs. Because an unknown quantity of eggs was exhumed and removed from the beach prior to the survey, cumulative exhumation rate was likely to have been higher than the snap-shot estimate. Because egg exhumation is density-dependent, in addition to managing for a high population size, identification and conservation of beaches where spawning horseshoe crabs concentrate in high densities (i.e., hot spots) are important steps toward providing a reliable food supply for migratory shorebirds. © 2007 Estuarine Research Federation.

  5. Cell survival fraction estimation based on the probability densities of domain and cell nucleus specific energies using improved microdosimetric kinetic models.

    PubMed

    Sato, Tatsuhiko; Furusawa, Yoshiya

    2012-10-01

    Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly due to intrinsic ignorance of the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.
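
The role of the specific-energy probability density can be illustrated with a toy calculation (this is not the authors' PHITS-based models): averaging linear-quadratic survival over a stochastic specific energy z gives a different answer than evaluating survival at the mean dose alone. The α, β, dose, and z-distribution below are hypothetical:

```python
import numpy as np

# Toy stand-in for the specific-energy averaging idea: survival is the
# expectation of the linear-quadratic response over the distribution of the
# cell-nucleus specific energy z, not the response at the mean dose.
alpha, beta = 0.2, 0.05          # illustrative LQ coefficients (Gy^-1, Gy^-2)
dose = 5.0                       # mean specific energy (Gy)

rng = np.random.default_rng(0)
z = rng.gamma(shape=10.0, scale=dose / 10.0, size=200_000)   # E[z] = dose

s_stochastic = float(np.mean(np.exp(-(alpha * z + beta * z**2))))
s_mean_dose = float(np.exp(-(alpha * dose + beta * dose**2)))
```

Because the survival curve is convex over most of this range, the stochastic average exceeds the mean-dose value, mirroring the paper's point that ignoring the stochastic nature of nucleus specific energy biases predicted survival downward.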

  6. Impact of Many-Body Effects on Landau Levels in Graphene

    NASA Astrophysics Data System (ADS)

    Sonntag, J.; Reichardt, S.; Wirtz, L.; Beschoten, B.; Katsnelson, M. I.; Libisch, F.; Stampfer, C.

    2018-05-01

    We present magneto-Raman spectroscopy measurements on suspended graphene to investigate the charge carrier density-dependent electron-electron interaction in the presence of Landau levels. Utilizing gate-tunable magnetophonon resonances, we extract the charge carrier density dependence of the Landau level transition energies and the associated effective Fermi velocity vF. In contrast to the logarithmic divergence of vF at zero magnetic field, we find a piecewise linear scaling of vF as a function of the charge carrier density, due to a magnetic-field-induced suppression of the long-range Coulomb interaction. We quantitatively confirm our experimental findings by performing tight-binding calculations on the level of the Hartree-Fock approximation, which also allow us to estimate an excitonic binding energy of ≈6 meV contained in the experimentally extracted Landau level transition energies.

  7. A multi-scale assessment of animal aggregation patterns to understand increasing pathogen seroprevalence

    USGS Publications Warehouse

    Brennan, Angela K.; Cross, Paul C.; Higgs, Megan D.; Edwards, W. Henry; Scurlock, Brandon M.; Creel, Scott

    2014-01-01

    Understanding how animal density is related to pathogen transmission is important to develop effective disease control strategies, but requires measuring density at a scale relevant to transmission. However, this is not straightforward or well-studied among large mammals with group sizes that range several orders of magnitude or aggregation patterns that vary across space and time. To address this issue, we examined spatial variation in elk (Cervus canadensis) aggregation patterns and brucellosis across 10 regions in the Greater Yellowstone Area where previous studies suggest the disease may be increasing. We hypothesized that rates of increasing brucellosis would be better related to the frequency of large groups than mean group size or population density, but we examined whether other measures of density would also explain rising seroprevalence. To do this, we measured wintering elk density and group size across multiple spatial and temporal scales from monthly aerial surveys. We used Bayesian hierarchical models and 20 years of serologic data to estimate rates of increase in brucellosis within the 10 regions, and to examine the linear relationships between these estimated rates of increase and multiple measures of aggregation. Brucellosis seroprevalence increased over time in eight regions (one region showed an estimated increase from 0.015 in 1991 to 0.26 in 2011), and these rates of increase were positively related to all measures of aggregation. The relationships were weaker when the analysis was restricted to areas where brucellosis was present for at least two years, potentially because aggregation was related to disease-establishment within a population. Our findings suggest that (1) group size did not explain brucellosis increases any better than population density and (2) some elk populations may have high densities with small groups or lower densities with large groups, but brucellosis is likely to increase in either scenario. 
In this case, any one control method such as reducing population density or group size may not be sufficient to reduce transmission. This study highlights the importance of examining the density-transmission relationship at multiple scales and across populations before broadly applying disease control strategies.

  8. Density conversion factor determined using a cone-beam computed tomography unit NewTom QR-DVT 9000.

    PubMed

    Lagravère, M O; Fang, Y; Carey, J; Toogood, R W; Packota, G V; Major, P W

    2006-11-01

    The purpose of this study was to determine a conversion coefficient for Hounsfield Units (HU) to material density (g cm(-3)) obtained from cone-beam computed tomography (CBCT-NewTom QR-DVT 9000) data. Six cylindrical models of materials with different densities were made and scanned using the NewTom QR-DVT 9000 Volume Scanner. The raw data were converted into DICOM format and analysed using Merge eFilm and AMIRA to determine the HU of different areas of the models. There was no significant difference (P = 0.846) between the HU given by each piece of software. A linear regression was performed using the density, rho (g cm(-3)), as the dependent variable in terms of the HU (H). The regression equation obtained was rho = 0.002H-0.381 with an R2 value of 0.986. The standard error of the estimation is 27.104 HU in the case of the Hounsfield Units and 0.064 g cm(-3) in the case of density. CBCT provides an effective option for determination of material density expressed as Hounsfield Units.
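
The reported calibration is an ordinary least-squares line. A sketch with hypothetical (HU, density) pairs, generated from the reported line rho = 0.002H - 0.381 plus small perturbations (these are not the study's six cylinders):

```python
import numpy as np

# Recover a HU-to-density calibration rho = a*H + b by least squares.
# The data are synthetic, built around the paper's reported coefficients.
hu = np.array([250.0, 500.0, 750.0, 1000.0, 1250.0, 1500.0])
rho = 0.002 * hu - 0.381 + np.array([0.01, -0.02, 0.0, 0.015, -0.01, 0.005])

a, b = np.polyfit(hu, rho, 1)                # slope (g/cm^3 per HU), intercept
r2 = float(np.corrcoef(hu, rho)[0, 1] ** 2)
```

The fit returns coefficients close to the planted values with an R2 near 1, the same structure as the published regression.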

  9. A linear, separable two-parameter model for dual energy CT imaging of proton stopping power computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Dong, E-mail: radon.han@gmail.com; Williamson, Jeffrey F.; Siebers, Jeffrey V.

    2016-01-15

    Purpose: To evaluate the accuracy and robustness of a simple, linear, separable, two-parameter model (basis vector model, BVM) in mapping proton stopping powers via dual energy computed tomography (DECT) imaging. Methods: The BVM assumes that photon cross sections (attenuation coefficients) of unknown materials are linear combinations of the corresponding radiological quantities of dissimilar basis substances (i.e., polystyrene, CaCl₂ aqueous solution, and water). The authors have extended this approach to the estimation of electron density and mean excitation energy, which are required parameters for computing proton stopping powers via the Bethe–Bloch equation. The authors compared the stopping power estimation accuracy of the BVM with that of a nonlinear, nonseparable photon cross section Torikoshi parametric fit model (VCU tPFM) as implemented by the authors and by Yang et al. [“Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues,” Phys. Med. Biol. 55, 1343–1362 (2010)]. Using an idealized monoenergetic DECT imaging model, proton ranges estimated by the BVM, VCU tPFM, and Yang tPFM were compared to International Commission on Radiation Units and Measurements (ICRU) published reference values. The robustness of the stopping power prediction accuracy to tissue composition variations was assessed for both the BVM and VCU tPFM. The sensitivity of accuracy to CT image uncertainty was also evaluated. Results: Based on the authors’ idealized, error-free DECT imaging model, the root-mean-square error of BVM proton stopping power estimation for 175 MeV protons relative to ICRU reference values for 34 ICRU standard tissues is 0.20%, compared to 0.23% and 0.68% for the Yang and VCU tPFM models, respectively. The range estimation errors were less than 1 mm for both the BVM and Yang tPFM models. 
The BVM estimation accuracy is not dependent on tissue type and proton energy range. The BVM is slightly more vulnerable to CT image intensity uncertainties than the tPFM models. Both the BVM and tPFM prediction accuracies were robust to uncertainties of tissue composition and independent of the choice of reference values. This reported accuracy does not include the impacts of I-value uncertainties and imaging artifacts and may not be achievable on current clinical CT scanners. Conclusions: The proton stopping power estimation accuracy of the proposed linear, separable BVM model is comparable to or better than that of the nonseparable tPFM models proposed by other groups. In contrast to the tPFM, the BVM does not require iteratively solving for effective atomic number and electron density at every voxel; this improves the computational efficiency of DECT imaging when iterative, model-based image reconstruction algorithms are used to minimize noise and systematic imaging artifacts of CT images.

  10. Exact hierarchical clustering in one dimension. [in universe]

    NASA Technical Reports Server (NTRS)

    Williams, B. G.; Heavens, A. F.; Peacock, J. A.; Shandarin, S. F.

    1991-01-01

    The present adhesion model-based one-dimensional simulations of gravitational clustering have yielded bound-object catalogs applicable in tests of analytical approaches to cosmological structure formation. Attention is given to Press-Schechter (1974) type functions, as well as to their density peak-theory modifications and the two-point correlation function estimated from peak theory. The extent to which individual collapsed-object locations can be predicted by linear theory is significant only for objects of near-characteristic nonlinear mass.

  11. Detection of Powdery Mildew in Two Winter Wheat Plant Densities and Prediction of Grain Yield Using Canopy Hyperspectral Reflectance

    PubMed Central

    Cao, Xueren; Luo, Yong; Zhou, Yilin; Fan, Jieru; Xu, Xiangming; West, Jonathan S.; Duan, Xiayu; Cheng, Dengfa

    2015-01-01

    To determine the influence of plant density and powdery mildew infection on winter wheat and to predict grain yield, hyperspectral canopy reflectance of winter wheat was measured for two plant densities at Feekes growth stage (GS) 10.5.3, 10.5.4, and 11.1 in the 2009–2010 and 2010–2011 seasons. Reflectance in near infrared (NIR) regions was significantly correlated with disease index at GS 10.5.3, 10.5.4, and 11.1 at two plant densities in both seasons. For the two plant densities, the area of the red edge peak (Σdr 680–760 nm), difference vegetation index (DVI), and triangular vegetation index (TVI) were significantly correlated negatively with disease index at three GSs in two seasons. Compared with the other parameters, Σdr 680–760 nm was the most sensitive parameter for detecting powdery mildew. Linear regression models relating mildew severity to Σdr 680–760 nm were constructed at three GSs in two seasons for the two plant densities, demonstrating no significant difference in the slope estimates between the two plant densities at three GSs. Σdr 680–760 nm was correlated with grain yield at three GSs in two seasons. The accuracies of partial least squares regression (PLSR) models were consistently higher than those of models based on Σdr 680–760 nm for disease index and grain yield. PLSR can, therefore, provide more accurate estimation of disease index of wheat powdery mildew and grain yield using canopy reflectance. PMID:25815468
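
The Σdr parameter is the first derivative of reflectance integrated over the red edge (680–760 nm). A sketch on a synthetic logistic red-edge spectrum (shape and values illustrative, not a measured canopy spectrum):

```python
import numpy as np

# Red-edge area parameter: integrate the first derivative of reflectance
# over 680-760 nm. The spectrum is a smooth synthetic logistic red edge.
wavelength = np.arange(400.0, 901.0, 1.0)                    # nm, 1 nm steps
reflectance = 0.05 + 0.45 / (1.0 + np.exp(-(wavelength - 720.0) / 12.0))

dr = np.gradient(reflectance, wavelength)                    # first derivative
mask = (wavelength >= 680.0) & (wavelength <= 760.0)
vals = dr[mask]
# trapezoidal integration over the red edge (1 nm spacing)
sum_dr = float(0.5 * np.sum(vals[1:] + vals[:-1]) * 1.0)
```

The integral of the derivative is essentially the reflectance rise across the red edge, which is why Σdr shrinks as mildew flattens the NIR plateau.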

  12. Alcohol outlet density and assault: a spatial analysis.

    PubMed

    Livingston, Michael

    2008-04-01

    A large number of studies have found links between alcohol outlet densities and assault rates in local areas. This study tests a variety of specifications of this link, focusing in particular on the possibility of a non-linear relationship. Cross-sectional data on police-recorded assaults during high alcohol hours, liquor outlets and socio-demographic characteristics were obtained for 223 postcodes in Melbourne, Australia. These data were used to construct a series of models testing the nature of the relationship between alcohol outlet density and assault, while controlling for socio-demographic factors and spatial auto-correlation. Four types of relationship were examined: a normal linear relationship between outlet density and assault, a non-linear relationship with potential threshold or saturation densities, a relationship mediated by the socio-economic status of the neighbourhood and a relationship which takes into account the effect of outlets in surrounding neighbourhoods. The model positing non-linear relationships between outlet density and assaults was found to fit the data most effectively. An increasing accelerating effect for the density of hotel (pub) licences was found, suggesting a plausible upper limit for these licences in Melbourne postcodes. The study finds positive relationships between outlet density and assault rates and provides evidence that this relationship is non-linear and thus has critical values at which licensing policy-makers can impose density limits.
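
The linear-versus-non-linear comparison can be illustrated with a toy fit: data generated with an accelerating (quadratic) dependence of assaults on outlet density, then compared on residual sum of squares. All numbers are illustrative, not the Melbourne postcode data:

```python
import numpy as np

# When the true relationship accelerates with density, a quadratic fit
# beats a straight line on residual sum of squares (RSS).
rng = np.random.default_rng(0)
density = rng.uniform(0.0, 20.0, 223)                # outlets per postcode
assaults = 5.0 + 0.8 * density + 0.15 * density**2 \
           + rng.normal(0.0, 3.0, 223)

def rss(degree):
    """Residual sum of squares of a polynomial fit of the given degree."""
    coef = np.polyfit(density, assaults, degree)
    return float(np.sum((assaults - np.polyval(coef, density)) ** 2))

rss_linear, rss_quadratic = rss(1), rss(2)
```

In practice such a comparison would use a formal criterion (likelihood ratio, AIC) and spatial error terms, as the study does; the RSS contrast just shows the shape of the test.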

  13. The vanishing limit of the square-well fluid: The adhesive hard-sphere model as a reference system

    NASA Astrophysics Data System (ADS)

    Largo, J.; Miller, M. A.; Sciortino, F.

    2008-04-01

    We report a simulation study of the gas-liquid critical point for the square-well potential, for values of well width δ as small as 0.005 times the particle diameter σ. For small δ, the reduced second virial coefficient at the critical point B2*c is found to depend linearly on δ. The observed weak linear dependence is not sufficient to produce any significant observable effect if the critical temperature Tc is estimated via a constant B2*c assumption, due to the highly nonlinear transformation between B2*c and Tc. This explains the previously observed validity of the law of corresponding states. The critical density ρc is also found to be constant when measured in units of the cube of the average distance between two bonded particles (1+0.5δ)σ. The possibility of describing the δ → 0 dependence with precise functional forms provides more accurate estimates of the critical parameters of the adhesive hard-sphere model.
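
For the square-well potential the reduced second virial coefficient has a closed form, B2* = 1 − ((1+δ)³ − 1)(e^(1/T*) − 1), which is linear in δ for small δ. A quadrature sketch checking this against direct integration of B2 (the δ and T* values are arbitrary, not the paper's critical-point values):

```python
import numpy as np

# B2 = -2*pi * integral (exp(-u(r)/kT) - 1) r^2 dr for the square well,
# reduced by the hard-sphere volume term 2*pi*sigma^3/3.
sigma, delta, t_star = 1.0, 0.05, 0.4

r = np.linspace(1e-4, 3.0 * sigma, 600_000)
u_over_kt = np.where(r < sigma, np.inf,
                     np.where(r < (1.0 + delta) * sigma, -1.0 / t_star, 0.0))
integrand = (np.exp(-u_over_kt) - 1.0) * r**2
b2 = -2.0 * np.pi * np.sum(integrand) * (r[1] - r[0])    # B2 by quadrature
b2_star = b2 / (2.0 * np.pi * sigma**3 / 3.0)

b2_star_analytic = 1.0 - ((1.0 + delta)**3 - 1.0) * (np.exp(1.0 / t_star) - 1.0)
```

Since (1+δ)³ − 1 ≈ 3δ for small δ, the analytic form makes the weak linear δ-dependence of B2* at fixed temperature explicit.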

  14. The vanishing limit of the square-well fluid: the adhesive hard-sphere model as a reference system.

    PubMed

    Largo, J; Miller, M A; Sciortino, F

    2008-04-07

    We report a simulation study of the gas-liquid critical point for the square-well potential, for values of well width delta as small as 0.005 times the particle diameter sigma. For small delta, the reduced second virial coefficient at the critical point B2*c is found to depend linearly on delta. The observed weak linear dependence is not sufficient to produce any significant observable effect if the critical temperature Tc is estimated via a constant B2*c assumption, due to the highly nonlinear transformation between B2*c and Tc. This explains the previously observed validity of the law of corresponding states. The critical density rho c is also found to be constant when measured in units of the cube of the average distance between two bonded particles (1+0.5 delta)sigma. The possibility of describing the delta-->0 dependence with precise functional forms provides improved accurate estimates of the critical parameters of the adhesive hard-sphere model.

  15. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.

  16. Estimating and Predicting Metal Concentration Using Online Turbidity Values and Water Quality Models in Two Rivers of the Taihu Basin, Eastern China

    PubMed Central

    Yao, Hong; Zhuang, Wei; Qian, Yu; Xia, Bisheng; Yang, Yang; Qian, Xin

    2016-01-01

    Turbidity (T) has been widely used to detect the occurrence of pollutants in surface water. Using data collected from January 2013 to June 2014 at eleven sites along two rivers feeding the Taihu Basin, China, the relationship between the concentration of five metals (aluminum (Al), titanium (Ti), nickel (Ni), vanadium (V), lead (Pb)) and turbidity was investigated. Metal concentration was determined using inductively coupled plasma mass spectrometry (ICP-MS). The linear regression of metal concentration and turbidity provided a good fit, with R2 = 0.86–0.93 for 72 data sets collected in the industrial river and R2 = 0.60–0.85 for 60 data sets collected in the cleaner river. All the regressions showed a good linear relationship, leading to the conclusion that the occurrence of the five metals is directly related to suspended solids, and these metal concentrations could be approximated using these regression equations. Thus, the linear regression equations were applied to estimate the metal concentration using online turbidity data from January 1 to June 30 in 2014. In the prediction, the WASP 7.5.2 (Water Quality Analysis Simulation Program) model was introduced to interpret the transport and fates of total suspended solids; in addition, metal concentration downstream of the two rivers was predicted. All the relative errors between the estimated and measured metal concentrations were within 30%, and those between the predicted and measured values were within 40%. The estimation and prediction of metal concentrations indicated that exploring the relationship between metals and turbidity values might be one effective technique for efficient estimation and prediction of metal concentration to facilitate better long-term monitoring with high temporal and spatial density. PMID:27028017
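
The calibrate-then-predict workflow can be sketched as below, with synthetic turbidity–metal pairs in place of the Taihu measurements (slope, intercept, and noise level are hypothetical):

```python
import numpy as np

# Calibrate metal concentration against turbidity by linear regression,
# then predict from new turbidity readings and check the relative error.
rng = np.random.default_rng(0)
turbidity = rng.uniform(10.0, 200.0, 72)                    # NTU
metal = 0.4 * turbidity + 2.0 + rng.normal(0.0, 3.0, 72)    # ug/L, hypothetical

slope, intercept = np.polyfit(turbidity, metal, 1)
r2 = float(np.corrcoef(turbidity, metal)[0, 1] ** 2)

new_turbidity = np.array([50.0, 120.0])
predicted = slope * new_turbidity + intercept
true_value = 0.4 * new_turbidity + 2.0                      # noise-free truth
relative_error = np.abs(predicted - true_value) / true_value
```

With a calibration R2 in the range the study reports, predictions from turbidity alone land well inside the paper's 30% relative-error band.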

  17. Estimating and Predicting Metal Concentration Using Online Turbidity Values and Water Quality Models in Two Rivers of the Taihu Basin, Eastern China.

    PubMed

    Yao, Hong; Zhuang, Wei; Qian, Yu; Xia, Bisheng; Yang, Yang; Qian, Xin

    2016-01-01

    Turbidity (T) has been widely used to detect the occurrence of pollutants in surface water. Using data collected from January 2013 to June 2014 at eleven sites along two rivers feeding the Taihu Basin, China, the relationship between the concentration of five metals (aluminum (Al), titanium (Ti), nickel (Ni), vanadium (V), lead (Pb)) and turbidity was investigated. Metal concentration was determined using inductively coupled plasma mass spectrometry (ICP-MS). The linear regression of metal concentration and turbidity provided a good fit, with R(2) = 0.86-0.93 for 72 data sets collected in the industrial river and R(2) = 0.60-0.85 for 60 data sets collected in the cleaner river. All the regressions showed a good linear relationship, leading to the conclusion that the occurrence of the five metals is directly related to suspended solids, and these metal concentrations could be approximated using these regression equations. Thus, the linear regression equations were applied to estimate the metal concentration using online turbidity data from January 1 to June 30 in 2014. In the prediction, the WASP 7.5.2 (Water Quality Analysis Simulation Program) model was introduced to interpret the transport and fates of total suspended solids; in addition, metal concentration downstream of the two rivers was predicted. All the relative errors between the estimated and measured metal concentrations were within 30%, and those between the predicted and measured values were within 40%. The estimation and prediction of metal concentrations indicated that exploring the relationship between metals and turbidity values might be one effective technique for efficient estimation and prediction of metal concentration to facilitate better long-term monitoring with high temporal and spatial density.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobson, Paul T; Hagerman, George; Scott, George

    This project estimates the naturally available and technically recoverable U.S. wave energy resources, using a 51-month Wavewatch III hindcast database developed especially for this study by the National Oceanic and Atmospheric Administration's (NOAA's) National Centers for Environmental Prediction. For total resource estimation, wave power density in terms of kilowatts per meter is aggregated across a unit diameter circle. This approach is fully consistent with accepted global practice and includes the resource made available by the lateral transfer of wave energy along wave crests, which enables wave diffraction to substantially reestablish wave power densities within a few kilometers of a linear array, even for fixed terminator devices. The total available wave energy resource along the U.S. continental shelf edge, based on accumulating unit circle wave power densities, is estimated to be 2,640 TWh/yr, broken down as follows: 590 TWh/yr for the West Coast, 240 TWh/yr for the East Coast, 80 TWh/yr for the Gulf of Mexico, 1570 TWh/yr for Alaska, 130 TWh/yr for Hawaii, and 30 TWh/yr for Puerto Rico. The total recoverable wave energy resource, as constrained by an array capacity packing density of 15 megawatts per kilometer of coastline, with a 100-fold operating range between threshold and maximum operating conditions in terms of input wave power density available to such arrays, yields a total recoverable resource along the U.S. continental shelf edge of 1,170 TWh/yr, broken down as follows: 250 TWh/yr for the West Coast, 160 TWh/yr for the East Coast, 60 TWh/yr for the Gulf of Mexico, 620 TWh/yr for Alaska, 80 TWh/yr for Hawaii, and 20 TWh/yr for Puerto Rico.
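
The kW/m quantity being aggregated is the standard deep-water wave power density per metre of wave crest, P = ρg²Hs²Te/(64π). A sketch with an illustrative sea state (not taken from the hindcast):

```python
import math

# Deep-water wave power per metre of crest: P = rho * g^2 * Hs^2 * Te / (64*pi)
rho, g = 1025.0, 9.81        # seawater density (kg/m^3), gravity (m/s^2)
hs, te = 2.5, 8.0            # significant wave height (m), energy period (s)

p_w_per_m = rho * g**2 * hs**2 * te / (64.0 * math.pi)
p_kw_per_m = p_w_per_m / 1000.0
```

A moderate 2.5 m, 8 s sea state carries roughly 25 kW per metre of crest, the order of magnitude from which the TWh/yr totals are accumulated.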

  19. Redshift-space distortions around voids

    NASA Astrophysics Data System (ADS)

    Cai, Yan-Chuan; Taylor, Andy; Peacock, John A.; Padilla, Nelson

    2016-11-01

    We have derived estimators for the linear growth rate of density fluctuations using the cross-correlation function (CCF) of voids and haloes in redshift space. In linear theory, this CCF contains only monopole and quadrupole terms. At scales greater than the void radius, linear theory is a good match to voids traced out by haloes; small-scale random velocities are unimportant at these radii, only tending to cause small and often negligible elongation of the CCF near its origin. By extracting the monopole and quadrupole from the CCF, we measure the linear growth rate without prior knowledge of the void profile or velocity dispersion. We recover the linear growth parameter β to 9 per cent precision from an effective volume of 3 (h⁻¹ Gpc)³ using voids with radius >25 h⁻¹ Mpc. Smaller voids are predominantly sub-voids, which may be more sensitive to the random velocity dispersion; they introduce noise and do not help to improve measurements. Adding velocity dispersion as a free parameter allows us to use information at radii as small as half of the void radius. The precision on β is reduced to 5 per cent. Voids show diverse shapes in redshift space, and can appear either elongated or flattened along the line of sight. This can be explained by the competing amplitudes of the local density contrast, plus the radial velocity profile and its gradient. The distortion pattern is therefore determined solely by the void profile and is different for void-in-cloud and void-in-void. This diversity of redshift-space void morphology complicates measurements of the Alcock-Paczynski effect using voids.
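
Extracting the monopole and quadrupole from ξ(s, μ) uses Legendre weights, ξ_ℓ(s) = (2ℓ+1)/2 ∫ ξ(s, μ) P_ℓ(μ) dμ. A sketch on a constructed ξ with known multipoles (the amplitudes are hypothetical, not simulation output):

```python
import numpy as np

# Multipole extraction at one fixed separation s:
#   xi_l = (2l + 1)/2 * integral_{-1}^{1} xi(mu) P_l(mu) dmu
mu = np.linspace(-1.0, 1.0, 2001)
p2 = 0.5 * (3.0 * mu**2 - 1.0)                   # Legendre polynomial P_2

xi0_true, xi2_true = -0.8, 0.3                   # planted multipole amplitudes
xi = xi0_true + xi2_true * p2

dmu = mu[1] - mu[0]
xi0 = 0.5 * np.sum(xi) * dmu                     # l = 0 (P_0 = 1)
xi2 = 2.5 * np.sum(xi * p2) * dmu                # l = 2
```

Orthogonality of the Legendre polynomials means each weighted integral returns only its own amplitude, which is what lets β be read off from the monopole-quadrupole pair without a void-profile model.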

  20. Bounding the moment deficit rate on crustal faults using geodetic data: Methods

    DOE PAGES

    Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael

    2017-08-19

    Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.

  2. Learning to Estimate Dynamical State with Probabilistic Population Codes.

    PubMed

    Makin, Joseph G; Dichter, Benjamin K; Sabes, Philip N

    2015-11-01

    Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, "probabilistic population codes." We show that a recurrent neural network-a modified form of an exponential family harmonium (EFH)-that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time-step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
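    For the linear-Gaussian case mentioned above, the optimal state estimator is the Kalman filter. A minimal 1-D constant-velocity sketch, with noise parameters q and r invented purely for illustration (not taken from the paper):

```python
import random

def kalman_1d(zs, dt=1.0, q=1e-4, r=0.25):
    """Constant-velocity Kalman filter on noisy 1-D position readings.
    State is [position, velocity]; q and r are illustrative noise levels."""
    x = [zs[0], 0.0]                          # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]              # estimate covariance
    for z in zs[1:]:
        # predict: x' = F x with F = [[1, dt], [0, 1]];  P' = F P F^T + Q
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # update: observe position only (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x

random.seed(0)
true_pos = [0.1 * t for t in range(200)]               # object at 0.1/step
zs = [p + random.gauss(0.0, 0.5) for p in true_pos]    # noisy observations
est = kalman_1d(zs)                                    # [position, velocity]
```

    After observing the noisy position sequence, the filter recovers the unobserved velocity, which is the quantity the EFH network in the paper learns to represent.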

  4. Numerical evaluation of static-chamber measurements of soil-atmospheric gas exchange--Identification of physical processes

    USGS Publications Warehouse

    Healy, Richard W.; Striegl, Robert G.; Russell, Thomas F.; Hutchinson, Gordon L.; Livingston, Gerald P.

    1996-01-01

    The exchange of gases between soil and atmosphere is an important process that affects atmospheric chemistry and therefore climate. The static-chamber method is the most commonly used technique for estimating the rate of that exchange. We examined the method under hypothetical field conditions where diffusion was the only mechanism for gas transport and the atmosphere outside the chamber was maintained at a fixed concentration. Analytical and numerical solutions to the soil gas diffusion equation in one and three dimensions demonstrated that gas flux density to a static chamber deployed on the soil surface was less in magnitude than the ambient exchange rate in the absence of the chamber. This discrepancy, which increased with chamber deployment time and air-filled porosity of soil, is attributed to two physical factors: distortion of the soil gas concentration gradient (the magnitude was decreased in the vertical component and increased in the radial component) and the slow transport rate of diffusion relative to mixing within the chamber. Instantaneous flux density to a chamber decreased continuously with time; steepest decreases occurred so quickly following deployment and in response to such slight changes in mean chamber headspace concentration that they would likely go undetected by most field procedures. Adverse influences of these factors were reduced by mixing the chamber headspace, minimizing deployment time, maximizing the height and radius of the chamber, and pushing the rim of the chamber into the soil. Nonlinear models were superior to a linear regression model for estimating flux densities from mean headspace concentrations, suggesting that linearity of headspace concentration with time was not necessarily a good indicator of measurement accuracy.
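    The finding that a straight-line fit underestimates the pre-deployment flux can be reproduced with a small numerical sketch; the saturating-exponential headspace curve and all constants below are invented for illustration:

```python
import math

def linear_flux(ts, cs):
    """Flux estimate from the slope of a straight-line fit to headspace
    concentration vs time (the common field practice)."""
    n = len(ts)
    mt, mc = sum(ts) / n, sum(cs) / n
    return (sum((t - mt) * (c - mc) for t, c in zip(ts, cs))
            / sum((t - mt) ** 2 for t in ts))

# hypothetical saturating headspace curve: C(t) = cmax - (cmax - c0)*exp(-k*t)
c0, cmax, k = 400.0, 650.0, 0.02         # ppm, ppm, 1/min (invented)
ts = [0.0, 10.0, 20.0, 30.0]             # sampling times, minutes
cs = [cmax - (cmax - c0) * math.exp(-k * t) for t in ts]
f_true = k * (cmax - c0)                 # true initial slope: 5.0 ppm/min
f_lin = linear_flux(ts, cs)              # straight-line estimate, biased low
```

    Because the concentration curve is concave, the fitted slope falls below the true initial flux even though each individual reading is exact, echoing the paper's point that apparent linearity is not a reliable indicator of accuracy.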

  5. Density and habitat of breeding Swallow-tailed Kites in the lower Suwannee ecosystem, Florida

    USGS Publications Warehouse

    Sykes, P.W.; Kepler, C.B.; Litzenberger, K.L.; Sansing, H.R.; Lewis, E.T.R.; Hatfield, J.S.

    1999-01-01

    Historically the Swallow-tailed Kite (Elanoides forficatus) bred in the United States in at least 16 eastern states. Currently it is restricted to seven southeastern states, with most of its breeding range in Florida. Breeding Bird Surveys indicate a declining trend for this Neotropical migrant in most of Florida. Using a rapid survey technique at the Lower Suwannee NWR on 25-27 Mar. 1997, we scanned for kites from 16 sampling stations above the forest canopy, using 10X binoculars for 45 min per station. An effective detection distance of 2.4 km provided almost complete coverage of kite habitat (excluding salt marsh) on the refuge (14,620 ha) and in a 1.6-km buffer (13,526 ha). A mobile observation platform, extended to heights of 30-34 m, provided an unobstructed view above the forest canopy where foraging bouts, feeding, courtship displays, and other activities by this species occur. This technique was found to be efficient in obtaining an estimate of potential breeding pairs. An estimated 19 breeding pairs were observed, with possibly five additional pairs, giving a density of at least one pair per 1173-1407 ha. There was no opportunity to search for nests, so we were unable to correlate the number of active nests with the number of kites observed; moreover, the linear nature of the study area might concentrate birds, including nonbreeders, so our kite density may or may not be typical of other areas. The refuge has a mosaic of 11 different habitats (7 forest types, freshwater and salt marshes, open water and urban/suburban) providing much linear edge to the matrix of different plant communities that range in height from less than 1 m to greater than 30 m. Such structure provides quality habitat for Swallow-tailed Kites.

  6. The Usability of Noise Level from Rock Cutting for the Prediction of Physico-Mechanical Properties of Rocks

    NASA Astrophysics Data System (ADS)

    Delibalta, M. S.; Kahraman, S.; Comakli, R.

    2015-11-01

    Because the indirect tests are easier and cheaper than the direct tests, the prediction of rock properties from indirect testing methods is important, especially for preliminary investigations. In this study, the predictability of the physico-mechanical rock properties from the noise level measured during cutting rock with a diamond saw was investigated. Noise measurement, uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), point load strength (Is), density, and porosity tests were carried out on 54 different rock types in the laboratory. The results were statistically analyzed to derive estimation equations. Strong correlations between the noise level and the mechanical rock properties were found; these relations follow power functions, and increasing rock strength increases the noise level. Density and porosity also correlated strongly with the noise level, following linear functions: increasing density increases the noise level, while increasing porosity decreases it. The developed equations are valid for rocks with a compressive strength below 150 MPa. The concluding remark is that the physico-mechanical rock properties can reliably be estimated from the noise level measured during cutting the rock with a diamond saw.
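    A power function y = a·x^b of the kind reported here can be fitted by ordinary least squares in log-log space. A sketch with noiseless synthetic data (the coefficients are invented, not the study's regression results):

```python
import math

def fit_power(xs, ys):
    """Fit y = a * x**b by least squares in log-log space."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# noiseless synthetic data: strength = 2 * (noise level)**1.5 (invented)
noise_db = [60.0, 70.0, 80.0, 90.0, 100.0]
strength = [2.0 * x ** 1.5 for x in noise_db]
a, b = fit_power(noise_db, strength)
```

    On exact power-law data the log-log regression recovers both coefficients; with real measurements the residual scatter in log space quantifies the strength of the correlation.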

  7. Blind beam-hardening correction from Poisson measurements

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

    We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass- attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density- map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.

  8. Urban climate modifies tree growth in Berlin

    NASA Astrophysics Data System (ADS)

    Dahlhausen, Jens; Rötzer, Thomas; Biber, Peter; Uhl, Enno; Pretzsch, Hans

    2017-12-01

    Climate, e.g., air temperature and precipitation, differs strongly between urban and peripheral areas, which causes diverse life conditions for trees. In order to compare tree growth, we sampled a total of 252 small-leaved lime trees (Tilia cordata Mill) in the city of Berlin along a gradient from the city center to the surroundings. By means of increment cores, we are able to trace back their growth for the last 50 to 100 years. A general growth trend can be shown by comparing recent basal area growth with estimates from extrapolating a growth function that had been fitted with growth data from earlier years. Estimating a linear model, we show that air temperature and precipitation significantly influence tree growth within the last 20 years. Under consideration of housing density, the results reveal that higher air temperature and less precipitation led to higher growth rates in high-density areas, but not in low-density areas. In addition, our data reveal a significantly higher variance of the ring width index in areas with medium housing density compared to low housing density, but no temporal trend. Transferring the results to forest stands, climate change is expected to lead to higher tree growth rates.

  11. Estimation of effective hydrologic properties of soils from observations of vegetation density. M.S. Thesis [water balance of watersheds in Clinton, Maine and Santa Paula, California]

    NASA Technical Reports Server (NTRS)

    Tellers, T. E.

    1980-01-01

    An existing one-dimensional model of the annual water balance is reviewed. Slight improvements are made in the method of calculating the bare soil component of evaporation, and in the way surface retention is handled. A natural selection hypothesis, which specifies the equilibrium vegetation density for a given water-limited climate-soil system, is verified through comparisons with observed data and is employed in the annual water balance of watersheds in Clinton, Maine, and Santa Paula, California, to estimate effective areal average soil properties. Comparison of CDFs of annual basin yield derived using these soil properties with observed CDFs provides excellent verification of the soil-selection procedure. This method of parameterization of the land surface should be useful with present global circulation models, enabling them to account for both the non-linearity in the relationship between soil moisture flux and soil moisture concentration, and the variability of soil properties from place to place over the Earth's surface.

  12. An adjoint-based method for a linear mechanically-coupled tumor model: application to estimate the spatial variation of murine glioma growth based on diffusion weighted magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Feng, Xinzeng; Hormuth, David A.; Yankeelov, Thomas E.

    2018-06-01

    We present an efficient numerical method to quantify the spatial variation of glioma growth based on subject-specific medical images using a mechanically-coupled tumor model. The method is illustrated in a murine model of glioma in which we consider the tumor as a growing elastic mass that continuously deforms the surrounding healthy-appearing brain tissue. As an inverse parameter identification problem, we quantify the volumetric growth of glioma and the growth component of deformation by fitting the model predicted cell density to the cell density estimated using the diffusion-weighted magnetic resonance imaging data. Numerically, we developed an adjoint-based approach to solve the optimization problem. Results on a set of experimentally measured, in vivo rat glioma data indicate good agreement between the fitted and measured tumor area and suggest a wide variation of in-plane glioma growth with the growth-induced Jacobian ranging from 1.0 to 6.0.

  13. Comparison of Modeling Methods to Determine Liver-to-blood Inocula and Parasite Multiplication Rates During Controlled Human Malaria Infection

    PubMed Central

    Douglas, Alexander D.; Edwards, Nick J.; Duncan, Christopher J. A.; Thompson, Fiona M.; Sheehy, Susanne H.; O'Hara, Geraldine A.; Anagnostou, Nicholas; Walther, Michael; Webster, Daniel P.; Dunachie, Susanna J.; Porter, David W.; Andrews, Laura; Gilbert, Sarah C.; Draper, Simon J.; Hill, Adrian V. S.; Bejon, Philip

    2013-01-01

    Controlled human malaria infection is used to measure efficacy of candidate malaria vaccines before field studies are undertaken. Mathematical modeling using data from quantitative polymerase chain reaction (qPCR) parasitemia monitoring can discriminate between vaccine effects on the parasite's liver and blood stages. Uncertainty regarding the most appropriate modeling method hinders interpretation of such trials. We used qPCR data from 267 Plasmodium falciparum infections to compare linear, sine-wave, and normal-cumulative-density-function models. We find that the parameters estimated by these models are closely correlated, and their predictive accuracy for omitted data points was similar. We propose that future studies include the linear model. PMID:23570846
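    As a sketch of the recommended linear model: fit log10 parasitaemia against time, then convert the slope to a 48-h parasite multiplication rate (PMR). The qPCR series below is invented and exactly log-linear; the conversion formula PMR = 10^(2 x slope per day) is the standard definition, not a detail taken from this paper:

```python
def pmr_linear(days, log10_density):
    """48-h parasite multiplication rate from a straight-line fit to
    log10 parasitaemia vs time: PMR = 10**(2 * slope per day)."""
    n = len(days)
    mx = sum(days) / n
    my = sum(log10_density) / n
    slope = (sum((d - mx) * (v - my) for d, v in zip(days, log10_density))
             / sum((d - mx) ** 2 for d in days))
    return 10.0 ** (2.0 * slope)

# invented qPCR series growing exactly 10-fold per 48-h cycle
days = [7.0, 8.0, 9.0, 10.0, 11.0]
log10_parasites = [1.0, 1.5, 2.0, 2.5, 3.0]   # log10 parasites/mL
pmr = pmr_linear(days, log10_parasites)
```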

  14. Sintering behavior and mechanical properties of zirconia compacts fabricated by uniaxial press forming.

    PubMed

    Oh, Gye-Jeong; Yun, Kwi-Dug; Lee, Kwang-Min; Lim, Hyun-Pil; Park, Sang-Won

    2010-09-01

    The purpose of this study was to compare the linear sintering behavior of presintered zirconia blocks of various densities. The mechanical properties of the resulting sintered zirconia blocks were then analyzed. Three experimental groups of dental zirconia blocks, with a different presintering density each, were designed in the present study. Kavo Everest® ZS blanks (Kavo, Biberach, Germany) were used as a control group. The experimental group blocks were fabricated from commercial yttria-stabilized tetragonal zirconia powder (KZ-3YF (SD) Type A, KCM. Corporation, Nagoya, Japan). The biaxial flexural strengths, microhardnesses, and microstructures of the sintered blocks were then investigated. The linear sintering shrinkages of blocks were calculated and compared. Despite their different presintered densities, the sintered blocks of the control and experimental groups showed similar mechanical properties. However, the sintered blocks had different linear sintering shrinkage rates depending on the density of the presintered block. As the density of the presintered block increased, the linear sintering shrinkage decreased. In the experimental blocks, the three sectioned pieces of each block showed different linear shrinkage depending on the area. The tops of the experimental blocks showed the lowest linear sintering shrinkage, whereas the bottoms of the experimental blocks showed the highest linear sintering shrinkage. Within the limitations of this study, the density difference of the presintered zirconia block did not affect the mechanical properties of the sintered zirconia block, but affected the linear sintering shrinkage of the zirconia block.
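    The inverse relation between presintered density and linear shrinkage follows from mass conservation under isotropic densification: L_sintered/L_0 = (ρ_pre/ρ_sintered)^(1/3). A sketch with illustrative density values (not the study's measurements; 6.05 g/cm³ is an approximate figure for fully sintered 3Y-TZP):

```python
def linear_shrinkage(rho_pre, rho_sintered):
    """Linear shrinkage implied by mass conservation with isotropic
    densification: L_s/L_0 = (rho_pre/rho_s)**(1/3)."""
    return 1.0 - (rho_pre / rho_sintered) ** (1.0 / 3.0)

RHO_SINTERED = 6.05          # g/cm^3, approximate fully sintered 3Y-TZP
s_low = linear_shrinkage(2.8, RHO_SINTERED)    # lower presintered density
s_high = linear_shrinkage(3.3, RHO_SINTERED)   # higher presintered density
```

    The denser presintered block shrinks less, matching the trend reported in the abstract.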

  16. The association between the density of retail tobacco outlets, individual smoking status, neighbourhood socioeconomic status and school locations in New South Wales, Australia.

    PubMed

    Marashi-Pour, Sadaf; Cretikos, Michelle; Lyons, Claudine; Rose, Nick; Jalaludin, Bin; Smith, Joanne

    2015-01-01

    We explored the association between the density of tobacco outlets and neighbourhood socioeconomic status, and between neighbourhood tobacco outlet density and individual smoking status. We also investigated the density of tobacco outlets around primary and secondary schools in New South Wales (NSW). We calculated the mean density of retail tobacco outlets registered in NSW between 2009 and 2011, using kernel density estimation with an adaptive bandwidth. We used a generalised ordered logistic regression model to explore the association between socioeconomic status and density of tobacco outlets. The association between neighbourhood tobacco outlet density and individuals' current smoking status was investigated using random-intercept generalised linear mixed models. We also calculated the median tobacco outlet density around NSW schools. More disadvantaged Census Collection Districts (CDs) were significantly more likely to have higher tobacco outlet densities. After adjusting for neighbourhood socioeconomic status and participants' age, sex, country of birth and Aboriginal status, neighbourhood mean tobacco outlet density was significantly and positively associated with individuals' smoking status. The median tobacco outlet density around schools was significantly higher than the state median. Policymakers could consider exploring a range of strategies that target tobacco outlets in proximity to schools, in more disadvantaged neighbourhoods and in areas of existing high tobacco outlet density. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
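    Kernel density estimation of outlet density can be sketched in one dimension; the study used a two-dimensional KDE with an adaptive bandwidth, so the fixed-bandwidth 1-D version below, with invented outlet positions, is a simplification:

```python
import math

def gaussian_kde(points, bandwidth):
    """Return a fixed-bandwidth 1-D Gaussian kernel density estimator."""
    n = len(points)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2)
                          for p in points)
    return density

# hypothetical outlet positions along a street (km): a cluster plus an outlier
outlets = [0.0, 0.5, 1.0, 5.0]
f = gaussian_kde(outlets, bandwidth=0.5)
mass = sum(f(-5.0 + 0.01 * i) * 0.01 for i in range(2000))  # ≈ 1 over [-5, 15)
```

    The estimator peaks over the cluster of outlets and integrates to one; an adaptive bandwidth, as used in the study, would additionally widen the kernel where data are sparse.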

  17. Diffusive charge transport in graphene on SiO2

    NASA Astrophysics Data System (ADS)

    Chen, J.-H.; Jang, C.; Ishigami, M.; Xiao, S.; Cullen, W. G.; Williams, E. D.; Fuhrer, M. S.

    2009-07-01

    We review our recent work on the physical mechanisms limiting the mobility of graphene on SiO2. We have used intentional addition of charged scattering impurities and systematic variation of the dielectric environment to differentiate the effects of charged impurities and short-range scatterers. The results show that charged impurities indeed lead to a conductivity linear in density (σ(n) ∝ n) in graphene, with a scattering magnitude that agrees quantitatively with theoretical estimates; increased dielectric screening reduces the scattering from charged impurities, but increases the scattering from short-range scatterers. We evaluate the effects of the corrugations (ripples) of graphene on SiO2 on transport by measuring the height-height correlation function. The results show that the corrugations cannot mimic long-range (charged impurity) scattering effects, and have too small an amplitude-to-wavelength ratio to significantly affect the observed mobility via short-range scattering. Temperature-dependent measurements show that longitudinal acoustic phonons in graphene produce a resistivity that is linear in temperature and independent of carrier density; at higher temperatures, polar optical phonons of the SiO2 substrate give rise to an activated, carrier density-dependent resistivity. Together the results paint a complete picture of charge carrier transport in graphene on SiO2 in the diffusive regime.

  18. A density matrix-based method for the linear-scaling calculation of dynamic second- and third-order properties at the Hartree-Fock and Kohn-Sham density functional theory levels.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-11-28

    A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.

  19. Image and in situ data integration to derive sawgrass density for surface flow modelling in the Everglades, Florida, USA

    USGS Publications Warehouse

    Jones, J.W.

    2000-01-01

    The US Geological Survey is building models of the Florida Everglades to be used in managing south Florida surface water flows for habitat restoration and maintenance. Because of the low gradients in the Everglades, vegetation structural characteristics are very important and greatly influence surface water flow and distribution. Vegetation density is being evaluated as an index of surface resistance to flow. Digital multispectral videography (DMSV) has been captured over several sites just before field collection of vegetation data. Linear regression has been used to establish a relationship between normalized difference vegetation index (NDVI) values computed from the DMSV and field-collected biomass and density estimates. Spatial analysis applied to the DMSV data indicates that thematic mapper (TM) resolution is at the limit required to capture land surface heterogeneity. The TM data collected close to the time of the DMSV will be used to derive a regional sawgrass density map.
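    The NDVI computation and the linear regression against field data can be sketched as follows; the reflectance pairs and density values are invented, not the study's measurements:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def ols(xs, ys):
    """Ordinary least squares fit y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# hypothetical paired samples: DMSV reflectances and field-measured density
bands = [(0.62, 0.10), (0.70, 0.12), (0.81, 0.08), (0.55, 0.20)]  # (nir, red)
density = [410.0, 520.0, 690.0, 330.0]        # invented field density values
x = [ndvi(nir, red) for nir, red in bands]
m, c = ols(x, density)                        # density ≈ m*NDVI + c
```

    Once calibrated against field plots, the fitted line can be applied per pixel to map density across the imagery.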

  1. Density dependence governs when population responses to multiple stressors are magnified or mitigated.

    PubMed

    Hodgson, Emma E; Essington, Timothy E; Halpern, Benjamin S

    2017-10-01

    Population endangerment typically arises from multiple, potentially interacting anthropogenic stressors. Extensive research has investigated the consequences of multiple stressors on organisms, frequently focusing on individual life stages. Less is known about population-level consequences of exposure to multiple stressors, especially when exposure varies through life. We provide the first theoretical basis for identifying species at risk of magnified effects from multiple stressors across life history. By applying a population modeling framework, we reveal conditions under which population responses from stressors applied to distinct life stages are either magnified (synergistic) or mitigated. We find that magnification or mitigation critically depends on the shape of density dependence, but not the life stage in which it occurs. Stressors are always magnified when density dependence is linear or concave, and magnified or mitigated when it is convex. Using Bayesian numerical methods, we estimated the shape of density dependence for eight species across diverse taxa, finding support for all three shapes. © 2017 by the Ecological Society of America.
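    The key quantity in the result above is the shape of the density-dependence curve. A minimal, hedged sketch of classifying a vital-rate curve as linear, concave, or convex from its second differences is shown below; the three curves are generic textbook forms, not the ones fitted by the authors.

```python
# Classify the shape of a per-capita vital-rate curve (e.g. survival vs.
# density) using second differences over an evenly spaced grid.
# The example curves are generic illustrations only.
def shape(f, xs):
    d2 = [f(xs[i + 1]) - 2 * f(xs[i]) + f(xs[i - 1])
          for i in range(1, len(xs) - 1)]
    if all(abs(d) < 1e-12 for d in d2):
        return "linear"
    return "convex" if all(d > 0 for d in d2) else "concave"

xs = [0.1 * i for i in range(1, 31)]
s_linear = shape(lambda n: 1.0 - 0.02 * n, xs)       # straight-line decline
s_convex = shape(lambda n: 1.0 / (1.0 + n), xs)      # Beverton-Holt-like
s_concave = shape(lambda n: 1.0 - 0.01 * n * n, xs)  # accelerating decline
```

    In the authors' framework, the first two shapes lead to magnified (linear) or potentially mitigated (convex) stressor responses respectively.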

  2. Population age and initial density in a patchy environment affect the occurrence of abrupt transitions in a birth-and-death model of Taylor's law

    USGS Publications Warehouse

    Jiang, Jiang; DeAngelis, Donald L.; Zhang, B.; Cohen, J.E.

    2014-01-01

    Taylor's power law describes an empirical relationship between the mean and variance of population densities in field data, in which the variance varies as a power, b, of the mean. Most studies report values of b varying between 1 and 2. However, Cohen (2014a) showed recently that smooth changes in environmental conditions in a model can lead to an abrupt, infinite change in b. To understand what factors can influence the occurrence of an abrupt change in b, we used both mathematical analysis and Monte Carlo samples from a model in which populations of the same species settled on patches, and each population followed independently a stochastic linear birth-and-death process. We investigated how the power relationship responds to a smooth change of population growth rate, under different sampling strategies, initial population density, and population age. We showed analytically that, if the initial populations differ only in density, and samples are taken from all patches after the same time period following a major invasion event, Taylor's law holds with exponent b=1, regardless of the population growth rate. If samples are taken at different times from patches that have the same initial population densities, we calculate an abrupt shift of b, as predicted by Cohen (2014a). The loss of linearity between log variance and log mean is a leading indicator of the abrupt shift. If both initial population densities and population ages vary among patches, estimates of b lie between 1 and 2, as in most empirical studies. But the value of b declines to ~1 as the system approaches a critical point. Our results can inform empirical studies that might be designed to demonstrate an abrupt shift in Taylor's law.
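    The exponent b in Taylor's law is conventionally estimated as the slope of log(variance) against log(mean) across groups of patches. The sketch below simulates near-Poisson patch counts (variance approximately equal to the mean), for which the fitted slope should be close to b = 1, matching the analytic same-age, same-initial-density case described above. The population sizes and replicate counts are arbitrary.

```python
import math
import random

# Estimate Taylor's-law exponent b as the log-log slope of variance on mean
# across groups of patches with near-Poisson (variance ~ mean) counts.
random.seed(1)

def loglog_slope(means, variances):
    lx = [math.log(m) for m in means]
    ly = [math.log(v) for v in variances]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    sxy = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    sxx = sum((x - mx) ** 2 for x in lx)
    return sxy / sxx

means, variances = [], []
for lam in [2, 5, 10, 20, 50]:
    patches = [sum(random.random() < lam / 1000.0 for _ in range(1000))
               for _ in range(300)]
    m = sum(patches) / len(patches)
    v = sum((c - m) ** 2 for c in patches) / (len(patches) - 1)
    means.append(m)
    variances.append(v)

b = loglog_slope(means, variances)
```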

  3. The Correlation Between Porosity, Density and Degree of Serpentinization in Ophiolites from Point Sal, California: Implications for Strength of Oceanic Lithosphere

    NASA Astrophysics Data System (ADS)

    Karrasch, A. K.; Farough, A.; Lowell, R. P.

    2017-12-01

    Hydration and serpentinization of oceanic lithosphere influence its strength and behavior under stress. Serpentine content is the limiting factor in deformation, and the correlation between crustal strength and the degree of serpentinization is not linear. Escartin et al. [2001] show that the presence of only 10% serpentine results in a nominally non-dilatant mode of brittle deformation and dramatically reduces the strength of peridotites. In this study, we measured the density and porosity of ophiolite samples from Point Sal, CA with various degrees of serpentinization. Densities ranged between 2500 and 3000 kg/m3 and porosities between 2.1 and 4.8%. The degree of serpentinization was estimated from mineralogical analysis, and these data were combined with those of four other samples, obtained from various localities, analyzed by Farough et al. [2016]. The degree of serpentinization varied between 0.6 and 40%. We found that the degree of serpentinization was inversely correlated with density, with a slope of 7.25 (kg/m3)/%. Using the models of Horen et al. [1996], the estimated P-wave velocity of the samples ranged between 6.75 and 7.90 km/s and the S-wave velocity between 3.58 and 4.35 km/s. There was no distinguishable difference in the results between olivine-rich and pyroxene-rich samples. These results, along with correlations to strength and deformation style, can be used as a reference for mechanical properties of the crust at depth, for analysis of deep drill cores, and to estimate the rate of weakening of the oceanic crust after the onset of serpentinization reactions.
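    As a back-of-the-envelope check of the reported trend: if density falls by about 7.25 kg/m3 per percent serpentinization, the observed 0.6-40% range should span roughly 290 kg/m3, consistent with the measured 2500-3000 kg/m3 interval. The unaltered reference density rho0 below is an assumed value, not one reported by the authors.

```python
# Linear density-vs-serpentinization relation using the slope reported in
# the study; rho0 (density at ~0% serpentinization) is an assumed value.
SLOPE = 7.25          # (kg/m^3) decrease per % serpentinization (from study)
rho0 = 3000.0         # assumed unaltered density, kg/m^3

def density_estimate(serp_percent):
    return rho0 - SLOPE * serp_percent

# Density contrast across the observed 0.6-40% serpentinization range.
span = density_estimate(0.6) - density_estimate(40.0)
```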

  4. Distribution and abundance of American eels in the White Oak River estuary, North Carolina

    USGS Publications Warehouse

    Hightower, J.E.; Nesnow, C.

    2006-01-01

    Apparent widespread declines in abundance of Anguilla rostrata (American eel) have reinforced the need for information regarding its life history and status. We used commercial eel pots and crab (peeler) pots to examine the distribution, condition, and abundance of American eels within the White Oak River estuary, NC, during summers of 2002-2003. Catch of American eels per overnight set was 0.35 (SE = 0.045) in 2002 and 0.49 (SE = 0.044) in 2003. There was not a significant linear relationship between catch per set and depth in 2002 (P = 0.31, depth range 0.9-3.4 m) or 2003 (P = 0.18, depth range 0.6-3.4 m). American eels from the White Oak River were in good condition, based on the slope of a length-weight relationship (3.41) compared to the median slope (3.15) from other systems. Estimates of population density from grid sampling in 2003 (300 mm and larger: 4.0-13.8 per ha) were similar to estimates for the Hudson River estuary, but substantially less than estimates from other (smaller) systems including tidal creeks within estuaries. Density estimates from coastal waters can be used with harvest records to examine whether overfishing has contributed to the recent apparent declines in American eel abundance.

  5. Using satellite remote sensing to model and map the distribution of Bicknell's thrush (Catharus bicknelli) in the White Mountains of New Hampshire

    NASA Astrophysics Data System (ADS)

    Hale, Stephen Roy

    Landsat-7 Enhanced Thematic Mapper satellite imagery was used to model Bicknell's Thrush (Catharus bicknelli) distribution in the White Mountains of New Hampshire. The proof-of-concept was established for using satellite imagery in species-habitat modeling, where for the first time imagery spectral features were used to estimate a species-habitat model variable. The model predicted rising probabilities of thrush presence with decreasing dominant vegetation height, increasing elevation, and decreasing distance to nearest Fir Sapling cover type. Solving the model at all locations required regressor estimates at every pixel, which were not available for the dominant vegetation height and elevation variables. Topographically normalized imagery features, the Normalized Difference Vegetation Index and Band 1 (blue), were used to estimate dominant vegetation height by multiple linear regression, and a Digital Elevation Model was used to estimate elevation. Distance to nearest Fir Sapling cover type was obtained for each pixel from a land cover map specifically constructed for this project. The Bicknell's Thrush habitat model was derived using logistic regression, which produced the probability of detecting a singing male based on the pattern of model covariates. Model validation, using Bicknell's Thrush data withheld from model calibration, revealed that the model accurately estimated thrush presence at probabilities ranging from 0 to <0.40 and from 0.50 to <0.60. Probabilities from 0.40 to <0.50 and greater than 0.60 significantly underestimated and overestimated presence, respectively. Applying the model to the study area illuminated an important implication for Bicknell's Thrush conservation. The model predicted increasing numbers of presences and increasing relative density with rising elevation, which is accompanied by a concomitant decrease in land area. The greater land area of lower-density habitats may account for more total individuals and reproductive output than the smaller area of higher-density habitat. Efforts to conserve areas of highest individual density, under the assumption that density reflects habitat quality, could therefore target only a small fraction of the total population.
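    A hedged sketch of the kind of logistic habitat model described above follows: probability of detecting a singing male as a function of elevation, dominant vegetation height, and distance to fir-sapling cover. The coefficient values are invented, chosen only to reproduce the reported signs (probability rises with elevation and falls with vegetation height and distance to cover); they are not the fitted coefficients.

```python
import math

# Logistic habitat model sketch with hypothetical coefficients reproducing
# the signs reported in the study (not the fitted values).
def presence_probability(elev_m, veg_height_m, dist_fir_m,
                         b0=-8.0, b_elev=0.007, b_height=-0.4, b_dist=-0.002):
    z = b0 + b_elev * elev_m + b_height * veg_height_m + b_dist * dist_fir_m
    return 1.0 / (1.0 + math.exp(-z))

# Low-elevation, tall-vegetation, far-from-cover pixel vs. a high-elevation,
# short-vegetation pixel near fir-sapling cover.
p_low = presence_probability(900, 8.0, 500)
p_high = presence_probability(1300, 3.0, 50)
```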

  6. Method development estimating ambient mercury concentration from monitored mercury wet deposition

    NASA Astrophysics Data System (ADS)

    Chen, S. M.; Qiu, X.; Zhang, L.; Yang, F.; Blanchard, P.

    2013-05-01

    Speciated atmospheric mercury data have recently been monitored at multiple locations in North America, but the spatial coverage is far less than that of the long-established mercury wet deposition network. The present study describes a first attempt at linking ambient concentration with wet deposition using Beta distribution fitting of a ratio estimate. The mean, median, mode, standard deviation, and skewness of the fitted Beta distribution were generated using data collected in 2009 at 11 monitoring stations. Comparison of the normalized histogram with the fitted density function shows that the empirical and fitted Beta distributions of the ratio agree closely. The estimated ambient mercury concentration was further partitioned into reactive gaseous mercury and particulate-bound mercury using a linear regression model developed by Amos et al. (2012). The method presented here can be used to roughly estimate ambient mercury concentration at locations and/or times where such measurements are not available but where wet deposition is monitored.
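    One standard way to obtain a fitted Beta density of the kind compared against the histogram above is the method of moments; the sketch below uses that approach on made-up ratio values (the study's actual data and fitting method details are not reproduced here).

```python
# Method-of-moments fit of a Beta distribution to a set of ratio values in
# (0, 1). The ratio samples are hypothetical.
def beta_mom(samples):
    n = len(samples)
    m = sum(samples) / n
    v = sum((s - m) ** 2 for s in samples) / (n - 1)
    common = m * (1 - m) / v - 1          # requires v < m * (1 - m)
    return m * common, (1 - m) * common   # (alpha, beta)

ratios = [0.12, 0.18, 0.22, 0.25, 0.31, 0.35, 0.41, 0.27, 0.19, 0.30]
alpha, beta = beta_mom(ratios)
fitted_mean = alpha / (alpha + beta)      # equals the sample mean by design
```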

  7. Linear stability analysis of the Vlasov-Poisson equations in high density plasmas in the presence of crossed fields and density gradients

    NASA Technical Reports Server (NTRS)

    Kaup, D. J.; Hansen, P. J.; Choudhury, S. Roy; Thomas, Gary E.

    1986-01-01

    The equations for the single-particle orbits in a nonneutral high density plasma in the presence of inhomogeneous crossed fields are obtained. Using these orbits, the linearized Vlasov equation is solved as an expansion in the orbital radii in the presence of inhomogeneities and density gradients. A model distribution function is introduced whose cold-fluid limit is exactly the same as that used in many previous studies of the cold-fluid equations. This model function is used to reduce the linearized Vlasov-Poisson equations to a second-order ordinary differential equation for the linearized electrostatic potential whose eigenvalue is the perturbation frequency.

  8. Population response to climate change: linear vs. non-linear modeling approaches.

    PubMed

    Ellis, Alicia M; Post, Eric

    2004-03-31

    Research on the ecological consequences of global climate change has elicited a growing interest in the use of time series analysis to investigate population dynamics in a changing climate. Here, we compare linear and non-linear models describing the contribution of climate to the density fluctuations of the population of wolves on Isle Royale, Michigan, from 1959 to 1999. The non-linear self-exciting threshold autoregressive (SETAR) model revealed that, due to differences in the strength and nature of density dependence, relatively small and large populations may be differentially affected by future changes in climate. Both linear and non-linear models predict a decrease in the population of wolves with predicted changes in climate. Because specific predictions differed between linear and non-linear models, our study highlights the importance of using non-linear methods that allow the detection of non-linearity in the strength and nature of density dependence. Failure to adopt a non-linear approach to modelling population response to climate change, either exclusively or in addition to linear approaches, may compromise efforts to quantify ecological consequences of future warming.
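    The defining feature of a SETAR model is that the autoregressive coefficients (the strength of density dependence) switch according to whether the lagged abundance falls below or above a threshold. The minimal sketch below simulates such a two-regime process; every parameter value is invented for illustration and none comes from the Isle Royale analysis.

```python
import random

# Two-regime SETAR(1) simulation: density dependence is weaker (coefficient
# 0.9) below the threshold and stronger (coefficient 0.4) above it.
# All parameters are illustrative, not fitted values.
random.seed(7)

def setar_step(x_prev, threshold=20.0):
    if x_prev <= threshold:                               # low-density regime
        return 5.0 + 0.9 * x_prev + random.gauss(0, 1)
    return 14.0 + 0.4 * x_prev + random.gauss(0, 1)       # high-density regime

x = [10.0]
for _ in range(500):
    x.append(setar_step(x[-1]))
mean_abundance = sum(x) / len(x)
```

    In a fitted SETAR model the threshold and regime coefficients are estimated from the data, which is what allows small and large populations to respond differently to a climate covariate.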

  9. Radiographic absorptiometry method in measurement of localized alveolar bone density changes.

    PubMed

    Kuhl, E D; Nummikoski, P V

    2000-03-01

    The objective of this study was to measure the accuracy and precision of a radiographic absorptiometry method by using an occlusal density reference wedge in quantification of localized alveolar bone density changes. Twenty-two volunteer subjects had baseline and follow-up radiographs taken of mandibular premolar-molar regions with an occlusal density reference wedge in both films and added bone chips in the baseline films. The absolute bone equivalent densities were calculated in the areas that contained bone chips from the baseline and follow-up radiographs. The differences in densities described the masses of the added bone chips that were then compared with the true masses by using regression analysis. The correlation between the estimated and true bone-chip masses ranged from R = 0.82 to 0.94, depending on the background bone density. There was an average 22% overestimation of the mass of the bone chips when they were in low-density background, and up to 69% overestimation when in high-density background. The precision error of the method, which was calculated from duplicate bone density measurements of non-changing areas in both films, was 4.5%. The accuracy of the intraoral radiographic absorptiometry method is low when used for absolute quantification of bone density. However, the precision of the method is good and the correlation is linear, indicating that the method can be used for serial assessment of bone density changes at individual sites.
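    The precision error quoted above (4.5%) was derived from duplicate density measurements of non-changing areas. A common way to compute such a figure is the root-mean-square coefficient of variation over measurement pairs, sketched below on hypothetical density pairs (the study's raw readings are not reproduced).

```python
import math

# Root-mean-square coefficient of variation (percent) from duplicate
# measurements, a standard precision-error estimate. Pairs are hypothetical
# bone-equivalent density readings of unchanged sites.
def rms_cv_percent(pairs):
    n = len(pairs)
    total = 0.0
    for a, b in pairs:
        m = (a + b) / 2.0
        total += (a - b) ** 2 / (2.0 * m * m)
    return 100.0 * math.sqrt(total / n)

pairs = [(1.02, 0.98), (0.87, 0.90), (1.10, 1.05), (0.95, 0.99)]
precision = rms_cv_percent(pairs)
```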

  10. GeV-scale hot sterile neutrino oscillations: a numerical solution

    NASA Astrophysics Data System (ADS)

    Ghiglieri, J.; Laine, M.

    2018-02-01

    The scenario of baryogenesis through GeV-scale sterile neutrino oscillations is governed by non-linear differential equations for the time evolution of a sterile neutrino density matrix and Standard Model lepton and baryon asymmetries. By employing up-to-date rate coefficients and a non-perturbatively estimated Chern-Simons diffusion rate, we present a numerical solution of this system, incorporating the full momentum and helicity dependences of the density matrix. The density matrix deviates significantly from kinetic equilibrium, with the IR modes equilibrating much faster than the UV modes. For equivalent input parameters, our final results differ moderately (~50%) from recent benchmarks in the literature. The possibility of producing an observable baryon asymmetry is nevertheless confirmed. We illustrate the dependence of the baryon asymmetry on the sterile neutrino mass splitting and on the CP-violating phase measurable in active neutrino oscillation experiments.

  11. Field dynamics inference via spectral density estimation

    NASA Astrophysics Data System (ADS)

    Frank, Philipp; Steininger, Theo; Enßlin, Torsten A.

    2017-11-01

    Stochastic differential equations are of utmost importance in various scientific and industrial areas. They are the natural description of dynamical processes whose precise equations of motion are either not known or too expensive to solve, e.g., when modeling Brownian motion. In some cases, the equations governing the dynamics of a physical system on macroscopic scales turn out to be unknown, since they typically cannot be deduced from general principles. In this work, we describe how the underlying laws of a stochastic process can be approximated by the spectral density of the corresponding process. Furthermore, we show how the density can be inferred from possibly very noisy and incomplete measurements of the dynamical field. Generally, inverse problems like these can be tackled with the help of Information Field Theory. For now, we restrict ourselves to linear and autonomous processes. To demonstrate its applicability, we apply our reconstruction algorithm to time-series and spatiotemporal processes.
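    For the restricted setting the authors treat (linear, autonomous processes), the simplest spectral density estimate is the periodogram. The sketch below computes one for a simulated AR(1) series, whose true spectrum concentrates power at low frequencies; this is a generic illustration, not the authors' Information Field Theory algorithm. The DFT is written out directly (O(n^2)) to stay dependency-free.

```python
import cmath
import math
import random

# Periodogram spectral density estimate of a simulated AR(1) process,
# a generic illustration of spectral density estimation (not the IFT method).
random.seed(3)
n = 256
x = [0.0]
for _ in range(n - 1):
    x.append(0.9 * x[-1] + random.gauss(0, 1))

def periodogram(series):
    m = len(series)
    psd = []
    for k in range(m // 2):
        s = sum(series[t] * cmath.exp(-2j * math.pi * k * t / m)
                for t in range(m))
        psd.append(abs(s) ** 2 / m)
    return psd

psd = periodogram(x)
low = sum(psd[1:11])     # power near zero frequency
high = sum(psd[-10:])    # power near the Nyquist frequency
```

    For an AR(1) process with positive coefficient, the low-frequency power dominates, which the periodogram recovers despite its bin-to-bin noise.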

  12. Field dynamics inference via spectral density estimation.

    PubMed

    Frank, Philipp; Steininger, Theo; Enßlin, Torsten A

    2017-11-01

    Stochastic differential equations are of utmost importance in various scientific and industrial areas. They are the natural description of dynamical processes whose precise equations of motion are either not known or too expensive to solve, e.g., when modeling Brownian motion. In some cases, the equations governing the dynamics of a physical system on macroscopic scales turn out to be unknown, since they typically cannot be deduced from general principles. In this work, we describe how the underlying laws of a stochastic process can be approximated by the spectral density of the corresponding process. Furthermore, we show how the density can be inferred from possibly very noisy and incomplete measurements of the dynamical field. Generally, inverse problems like these can be tackled with the help of Information Field Theory. For now, we restrict ourselves to linear and autonomous processes. To demonstrate its applicability, we apply our reconstruction algorithm to time-series and spatiotemporal processes.

  13. Uncertainty in Estimates of Net Seasonal Snow Accumulation on Glaciers from In Situ Measurements

    NASA Astrophysics Data System (ADS)

    Pulwicki, A.; Flowers, G. E.; Radic, V.

    2017-12-01

    Accurately estimating the net seasonal snow accumulation (or "winter balance") on glaciers is central to assessing glacier health and predicting glacier runoff. However, measuring and modeling snow distribution is inherently difficult in mountainous terrain, resulting in high uncertainties in estimates of winter balance. Our work focuses on uncertainty attribution within the process of converting direct measurements of snow depth and density to estimates of winter balance. We collected more than 9000 direct measurements of snow depth across three glaciers in the St. Elias Mountains, Yukon, Canada, in May 2016. Linear regression (LR) and simple kriging (SK), combined with cross correlation and Bayesian model averaging, are used to interpolate estimates of snow water equivalent (SWE) from snow depth and density measurements. Snow distribution patterns are found to differ considerably between glaciers, highlighting strong inter- and intra-basin variability. Elevation is found to be the dominant control of the spatial distribution of SWE, but the relationship varies considerably between glaciers. A simple parameterization of wind redistribution is also a small but statistically significant predictor of SWE. The SWE estimated for one study glacier has a short range parameter (90 m), and both LR and SK estimate a winter balance of 0.6 m w.e. but are poor predictors of SWE at measurement locations. The other two glaciers have longer SWE range parameters (~450 m) and, due to differences in extrapolation, SK estimates are more than 0.1 m w.e. (up to 40%) lower than LR estimates. By using a Monte Carlo method to quantify the effects of various sources of uncertainty, we find that the interpolation of estimated values of SWE is a larger source of uncertainty than the assignment of snow density or than the representation of the SWE value within a terrain model grid cell. For our study glaciers, the total winter balance uncertainty ranges from 0.03 (8%) to 0.15 (54%) m w.e., depending primarily on the interpolation method. Despite the challenges associated with accurately and precisely estimating winter balance, our results are consistent with the previously reported regional accumulation gradient.

  14. Air pollution and survival within the Washington University-EPRI veterans cohort: risks based on modeled estimates of ambient levels of hazardous and criteria air pollutants.

    PubMed

    Lipfert, Frederick W; Wyzga, Ronald E; Baty, Jack D; Miller, J Philip

    2009-04-01

    For this paper, we considered relationships between mortality, vehicular traffic density, and ambient levels of 12 hazardous air pollutants, elemental carbon (EC), oxides of nitrogen (NOx), sulfur dioxide (SO2), and sulfate (SO4(2-)). These pollutant species were selected as markers for specific types of emission sources, including vehicular traffic, coal combustion, smelters, and metal-working industries. Pollutant exposures were estimated using emissions inventories and atmospheric dispersion models. We analyzed associations between county ambient levels of these pollutants and survival patterns among approximately 70,000 U.S. male veterans by mortality period (1976-2001 and subsets), type of exposure model, and traffic density level. We found significant associations between all-cause mortality and traffic-related air quality indicators and with traffic density per se, with stronger associations for benzene, formaldehyde, diesel particulate, NOx, and EC. The maximum effect on mortality for all cohort subjects during the 26-yr follow-up period is approximately 10%, but most of the pollution-related deaths in this cohort occurred in the higher-traffic counties, where excess risks approach 20%. However, mortality associations with diesel particulates are similar in high- and low-traffic counties. Sensitivity analyses show risks decreasing slightly over time and minor differences between linear and logarithmic exposure models. Two-pollutant models show stronger risks associated with specific traffic-related pollutants than with traffic density per se, although traffic density retains statistical significance in most cases. We conclude that tailpipe emissions of both gases and particles are among the most significant and robust predictors of mortality in this cohort and that most of those associations have weakened over time. However, we have not evaluated possible contributions from road dust or traffic noise. Stratification by traffic density level suggests the presence of response thresholds, especially for gaseous pollutants. Because of their wider distributions of estimated exposures, risk estimates based on emissions and atmospheric dispersion models tend to be more precise than those based on local ambient measurements.

  15. Kramers-Kronig relations in Laser Intensity Modulation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuncer, Enis

    2006-01-01

    In this short paper, the Kramers-Kronig relations for the Laser Intensity Modulation Method (LIMM) are presented to check the self-consistency of experimentally obtained complex current densities. The numerical procedure yields well defined, precise estimates for the real and the imaginary parts of the LIMM current density calculated from its imaginary and real parts, respectively. The procedure also determines an accurate high frequency real current value which appears to be an intrinsic material parameter similar to that of the dielectric permittivity at optical frequencies. Note that the problem considered here couples two different material properties, thermal and electrical; consequently the validity of the Kramers-Kronig relation indicates that the problem is invariant and linear.

  16. Plasma stability analysis using Consistent Automatic Kinetic Equilibrium reconstruction (CAKE)

    NASA Astrophysics Data System (ADS)

    Roelofs, Matthijs; Kolemen, Egemen; Eldon, David; Glasser, Alex; Meneghini, Orso; Smith, Sterling P.

    2017-10-01

    Presented here is the Consistent Automatic Kinetic Equilibrium (CAKE) code. CAKE is being developed to perform real-time kinetic equilibrium reconstruction, aiming to complete a reconstruction in less than 100 ms. This is achieved by taking into account, in addition to real-time Motional Stark Effect (MSE) and magnetics data, real-time Thomson Scattering (TS) and real-time Charge Exchange Recombination (CER, still in development) data. Electron density and temperature are determined by TS, while ion density and pressure are determined using CER. Together with the temperature and density of neutrals, these form the additional pressure constraints. Extra current constraints are imposed in the core by the MSE diagnostics. The pedestal current density is estimated using Sauter's formula for the bootstrap current density. By comparing the behaviour of the ideal MHD perturbed potential energy (δW) and the linear stability index (Δ') from CAKE to those from magnetics-only reconstruction, it can be seen that the use of diagnostics to reconstruct the pedestal has a large effect on stability. Supported by U.S. DOE DE-SC0015878 and DE-FC02-04ER54698.

  17. Do we need to measure total serum IgA to exclude IgA deficiency in coeliac disease?

    PubMed Central

    Sinclair, D; Saas, M; Turk, A; Goble, M; Kerr, D

    2006-01-01

    Background Screening for IgA deficiency in patients with coeliac disease is essential because of the increased incidence of IgA deficiency associated with the disease; such screening usually relies on the estimation of IgA levels in each case. Aim To devise a method of excluding IgA deficiency without measuring total serum IgA in each case. Materials and methods The optical density readings on enzyme‐linked immunosorbent assay (ELISA) of 608 routine samples received for tissue transglutaminase (TTG) antibody testing for coeliac disease were compared with their total IgA concentrations. Dilution experiments were also carried out to ensure linear relationships between optical density on ELISA and IgA concentrations, and to compare the sensitivities for TTG and endomysium antibodies in TTG‐positive samples. Results and discussion A clear relationship was shown between total IgA concentration and TTG optical density readings by ELISA. To ensure a positive TTG result if antibodies are present, it was possible to recommend an optical density level above which all samples have sufficient IgA. Samples with optical density <0.05 should be investigated further by estimating total IgA and, if low, should be subjected to immunofluorescence microscopy testing for IgA and IgG endomysium antibodies. Conclusions An easier, more cost‐effective and practical way of excluding IgA deficiency in the investigation of coeliac disease is reported. PMID:16489174

  18. The Seismic Tool-Kit (STK): An Open Source Software For Learning the Basis of Signal Processing and Seismology.

    NASA Astrophysics Data System (ADS)

    Reymond, D.

    2016-12-01

    We present an open-source software project (GNU public license), named STK: Seismic Tool-Kit, that is dedicated mainly to learning signal processing and seismology. The STK project, started in 2007, is hosted by SourceForge.net and counts more than 20,000 downloads at the time of writing. The STK project is composed of two main branches. First, a graphical interface dedicated to signal processing of data in the SAC format (SAC_ASCII and SAC_BIN), where the signal can be plotted, zoomed, filtered, integrated, differentiated, etc. (a large variety of IIR and FIR filters is proposed). The passage into the frequency domain via the Fourier transform is used to introduce the estimation of the spectral density of the signal, with visualization of the Power Spectral Density (PSD) on a linear or log scale, and also the evolutive time-frequency representation (or sonogram). Three-component signals can also be processed to estimate their polarization properties, either for a given window or for evolutive windows along the time axis. This polarization analysis is useful for extracting polarized noise and for differentiating P waves, Rayleigh waves, Love waves, etc. Secondly, a panel of utility programs is provided for working in terminal mode, with basic programs for computing azimuth and distance in spherical geometry, inter-/auto-correlation, spectral density, time-frequency representations for an entire directory of signals, focal planes and main component axes, the radiation pattern of P waves, polarization analysis of different waves (including noise), under-/over-sampling of signals, cubic-spline smoothing, and linear/non-linear regression analysis of data sets. STK is developed in C/C++, mainly under Linux, and has also been partially implemented under MS-Windows. STK has been used in some schools for viewing and plotting seismic records provided by IRIS, and as a practical support for teaching the basics of signal processing.
    Useful links:
    http://sourceforge.net/projects/seismic-toolkit/
    http://sourceforge.net/p/seismic-toolkit/wiki/browse_pages/

  19. Estimating the effect of multiple environmental stressors on coral bleaching and mortality.

    PubMed

    Welle, Paul D; Small, Mitchell J; Doney, Scott C; Azevedo, Inês L

    2017-01-01

    Coral cover has been declining in recent decades due to increased temperatures and environmental stressors. However, the extent to which different stressors contribute both individually and in concert to bleaching and mortality is still very uncertain. We develop and use a novel regression approach, using non-linear parametric models that control for unobserved time-invariant effects, to estimate the effects on coral bleaching and mortality due to temperature, solar radiation, depth, hurricanes and anthropogenic stressors using historical data from a large bleaching event in 2005 across the Caribbean. Two separate models are created, one to predict coral bleaching, and the other to predict near-term mortality. A large ensemble of supporting data is assembled to control for omitted variable bias and improve fit, and a significant improvement in fit is observed over univariate linear regression based on temperature alone. The results suggest that climate stressors (temperature and radiation) far outweighed direct anthropogenic stressors (using distance from shore and nearby human population density as a proxy for such stressors) in driving coral health outcomes during the 2005 event. Indeed, temperature was found to play a role ~4 times greater in both the bleaching and mortality response than population density across their observed ranges. The empirical models tested in this study have large advantages over ordinary least squares: they offer unbiased estimates for censored data, correct for spatial correlation, and are capable of handling more complex relationships between dependent and independent variables. The models offer a framework for preparing for future warming events and climate change; guiding monitoring and attribution of other bleaching and mortality events regionally and around the globe; and informing adaptive management and conservation efforts.
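    The censoring problem the authors' estimator addresses can be illustrated with a small simulation: bleaching is recorded as a fraction clipped to [0, 1], so ordinary least squares on the clipped response attenuates the true slope. All data below are simulated and the "true" slope of 1.2 is arbitrary; this is a demonstration of the bias, not the authors' model.

```python
import random

# Demonstrate OLS slope attenuation when the response is censored to [0, 1],
# using a simulated linear latent relationship (slope 1.2 is arbitrary).
random.seed(11)

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

x = [random.uniform(-1, 1) for _ in range(2000)]
latent = [0.5 + 1.2 * xi + random.gauss(0, 0.2) for xi in x]
censored = [min(1.0, max(0.0, yi)) for yi in latent]   # clipped response

slope_latent = ols_slope(x, latent)        # recovers ~1.2
slope_censored = ols_slope(x, censored)    # attenuated toward zero
```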

  20. Estimating the effect of multiple environmental stressors on coral bleaching and mortality

    PubMed Central

    Welle, Paul D.; Small, Mitchell J.; Doney, Scott C.; Azevedo, Inês L.

    2017-01-01

    Coral cover has been declining in recent decades due to increased temperatures and environmental stressors. However, the extent to which different stressors contribute both individually and in concert to bleaching and mortality is still very uncertain. We develop and use a novel regression approach, using non-linear parametric models that control for unobserved time-invariant effects, to estimate the effects on coral bleaching and mortality due to temperature, solar radiation, depth, hurricanes and anthropogenic stressors using historical data from a large bleaching event in 2005 across the Caribbean. Two separate models are created, one to predict coral bleaching, and the other to predict near-term mortality. A large ensemble of supporting data is assembled to control for omitted variable bias and improve fit, and a significant improvement in fit is observed over univariate linear regression based on temperature alone. The results suggest that climate stressors (temperature and radiation) far outweighed direct anthropogenic stressors (using distance from shore and nearby human population density as a proxy for such stressors) in driving coral health outcomes during the 2005 event. Indeed, temperature was found to play a role ~4 times greater in both the bleaching and mortality response than population density across their observed ranges. The empirical models tested in this study have large advantages over ordinary least squares: they offer unbiased estimates for censored data, correct for spatial correlation, and are capable of handling more complex relationships between dependent and independent variables. The models offer a framework for preparing for future warming events and climate change; guiding monitoring and attribution of other bleaching and mortality events regionally and around the globe; and informing adaptive management and conservation efforts. PMID:28472031

  1. Comparison of breast percent density estimation from raw versus processed digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina

    2011-03-01

    We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
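
The raw-versus-processed comparison boils down to a Pearson correlation and a paired mean difference. A minimal sketch of those two statistics, on invented PD% readings rather than the study's 81 screening cases:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical PD% readings from raw vs. post-processed images (invented).
raw = [12.0, 25.5, 40.2, 18.7, 33.1, 51.0]
processed = [13.4, 26.2, 41.8, 19.5, 34.9, 52.3]

r = pearson_r(raw, processed)
mean_diff = sum(p - q for p, q in zip(processed, raw)) / len(raw)
print(round(r, 3), round(mean_diff, 2))
```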

  2. Superfluidity in Strongly Interacting Fermi Systems with Applications to Neutron Stars

    NASA Astrophysics Data System (ADS)

    Khodel, Vladimir

    The rotational dynamics and cooling history of neutron stars is influenced by the superfluid properties of nucleonic matter. In this thesis a novel separation technique is applied to the analysis of the gap equation for neutron matter. It is shown that the problem can be recast into two tasks: solving a simple system of linear integral equations for the shape functions of various components of the gap function and solving a system of non-linear algebraic equations for their scale factors. Important simplifications result from the fact that the ratio of the gap amplitude to the Fermi energy provides a small parameter in this problem. The relationship between the analytic structure of the shape functions and the density interval for the existence of superfluid gap is discussed. It is shown that in 1S0 channel the position of the first zero of the shape function gives an estimate of the upper critical density. The relation between the resonant behavior of the two-neutron interaction in this channel and the density dependence of the gap is established. The behavior of the gap in the limits of low and high densities is analyzed. Various approaches to calculation of the scale factors are considered: model cases, angular averaging, and perturbation theory. An optimization-based approach is proposed. The shape functions and scale factors for Argonne υ14 and υ18 potentials are determined in singlet and triplet channels. Dependence of the solution on the value of effective mass and medium polarization is studied.

  3. Estimating Ω from Galaxy Redshifts: Linear Flow Distortions and Nonlinear Clustering

    NASA Astrophysics Data System (ADS)

    Bromley, B. C.; Warren, M. S.; Zurek, W. H.

    1997-02-01

    We propose a method to determine the cosmic mass density Ω from redshift-space distortions induced by large-scale flows in the presence of nonlinear clustering. Nonlinear structures in redshift space, such as fingers of God, can contaminate distortions from linear flows on scales as large as several times the small-scale pairwise velocity dispersion σv. Following Peacock & Dodds, we work in the Fourier domain and propose a model to describe the anisotropy in the redshift-space power spectrum; tests with high-resolution numerical data demonstrate that the model is robust for both mass and biased galaxy halos on translinear scales and above. On the basis of this model, we propose an estimator of the linear growth parameter β = Ω0.6/b, where b measures bias, derived from sampling functions that are tuned to eliminate distortions from nonlinear clustering. The measure is tested on the numerical data and found to recover the true value of β to within ~10%. An analysis of IRAS 1.2 Jy galaxies yields β = 0.8 (+0.4/-0.3) at a scale of 1000 km s-1, which is close to optimal given the shot noise and finite size of the survey. This measurement is consistent with dynamical estimates of β derived from both real-space and redshift-space information. The importance of the method presented here is that nonlinear clustering effects are removed to enable linear correlation anisotropy measurements on scales approaching the translinear regime. We discuss implications for analyses of forthcoming optical redshift surveys in which the dispersion is more than a factor of 2 greater than in the IRAS data.
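
The estimator targets β = Ω^0.6/b, so a measured β maps back to a mass density once a bias factor is assumed. A small sketch of this relation and its inverse (b = 1 assumed purely for illustration):

```python
def beta_param(omega, bias=1.0):
    """Linear growth parameter beta = Omega**0.6 / b for bias factor b."""
    return omega ** 0.6 / bias

def omega_from_beta(beta, bias=1.0):
    """Invert the estimator: Omega = (beta * b)**(1 / 0.6)."""
    return (beta * bias) ** (1 / 0.6)

# A measurement of beta ~ 0.8 with an unbiased tracer (b = 1) would imply
# Omega of roughly 0.69.
omega = omega_from_beta(0.8)
print(round(omega, 2))
```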

  4. Physical Quality Indicators and Mechanical Behavior of Agricultural Soils of Argentina.

    PubMed

    Imhoff, Silvia; da Silva, Alvaro Pires; Ghiberto, Pablo J; Tormena, Cássio A; Pilatti, Miguel A; Libardi, Paulo L

    2016-01-01

    Mollisols of Santa Fe have different tilth and load support capacity. Despite the importance of these attributes for achieving sustainable crop production, little information is available. The objectives of this study are i) to assess soil physical indicators related to plant growth and to soil mechanical behavior; and ii) to establish relationships to estimate the impact of soil loading on the soil quality to plant growth. The study was carried out on Argiudolls and Hapludolls of Santa Fe. Soil samples were collected to determine texture, organic matter content, bulk density, water retention curve, soil resistance to penetration, least limiting water range, critical bulk density for plant growth, compression index, pre-consolidation pressure and soil compressibility. Water retention curve and soil resistance to penetration were linearly and significantly related to clay and organic matter (R2 = 0.91 and R2 = 0.84). The pedotransfer functions of water retention curve and soil resistance to penetration allowed the estimation of the least limiting water range and critical bulk density for plant growth. A significant nonlinear relationship was found between critical bulk density for plant growth and clay content (R2 = 0.98). Compression index was significantly related to bulk density, water content, organic matter and clay plus silt content (R2 = 0.77). Pre-consolidation pressure was significantly related to organic matter, clay and water content (R2 = 0.77). Soil compressibility was significantly related to initial soil bulk density, clay and water content. A nonlinear and significant pedotransfer function (R2 = 0.88) was developed to predict the maximum acceptable pressure to be applied during tillage operations by introducing critical bulk density for plant growth in the compression model. The developed pedotransfer function provides a useful tool to link the mechanical behavior and tilth of the soils studied.
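
Pedotransfer functions of this kind are multiple linear regressions on easily measured bulk properties. A self-contained sketch of such a fit via the normal equations, on invented soil numbers (not the study's data set); the target is built to be exactly linear so the fit should recover the chosen coefficients:

```python
def fit_linear(X, y):
    """Ordinary least squares for y ~ b0 + b1*x1 + ... solved from the
    normal equations (X^T X) b = X^T y by Gaussian elimination."""
    rows = [[1.0] + list(r) for r in X]          # prepend intercept column
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    v = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):                           # elimination with pivoting
        piv = max(range(i, p), key=lambda k: abs(A[k][i]))
        A[i], A[piv] = A[piv], A[i]
        v[i], v[piv] = v[piv], v[i]
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            for j in range(i, p):
                A[k][j] -= f * A[i][j]
            v[k] -= f * v[i]
    b = [0.0] * p
    for i in range(p - 1, -1, -1):               # back substitution
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, p))) / A[i][i]
    return b

# Invented pedotransfer-style data: predict a soil response from organic
# matter (%), clay (%) and water content. The response is exactly linear,
# so the fit should recover the coefficients 1.0, 0.02 and 5.0.
X = [(2.1, 30.0, 0.25), (3.4, 45.0, 0.40), (1.8, 20.0, 0.20),
     (2.9, 38.0, 0.22), (4.0, 50.0, 0.33), (2.5, 27.0, 0.35)]
y = [1.0 * om + 0.02 * cl + 5.0 * w for om, cl, w in X]
b0, b_om, b_cl, b_w = fit_linear(X, y)
print(round(b_om, 3), round(b_cl, 3), round(b_w, 3))
```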

  5. Physical Quality Indicators and Mechanical Behavior of Agricultural Soils of Argentina

    PubMed Central

    Pires da Silva, Alvaro; Ghiberto, Pablo J.; Tormena, Cássio A.; Pilatti, Miguel A.; Libardi, Paulo L.

    2016-01-01

    Mollisols of Santa Fe have different tilth and load support capacity. Despite the importance of these attributes for achieving sustainable crop production, little information is available. The objectives of this study are i) to assess soil physical indicators related to plant growth and to soil mechanical behavior; and ii) to establish relationships to estimate the impact of soil loading on the soil quality to plant growth. The study was carried out on Argiudolls and Hapludolls of Santa Fe. Soil samples were collected to determine texture, organic matter content, bulk density, water retention curve, soil resistance to penetration, least limiting water range, critical bulk density for plant growth, compression index, pre-consolidation pressure and soil compressibility. Water retention curve and soil resistance to penetration were linearly and significantly related to clay and organic matter (R2 = 0.91 and R2 = 0.84). The pedotransfer functions of water retention curve and soil resistance to penetration allowed the estimation of the least limiting water range and critical bulk density for plant growth. A significant nonlinear relationship was found between critical bulk density for plant growth and clay content (R2 = 0.98). Compression index was significantly related to bulk density, water content, organic matter and clay plus silt content (R2 = 0.77). Pre-consolidation pressure was significantly related to organic matter, clay and water content (R2 = 0.77). Soil compressibility was significantly related to initial soil bulk density, clay and water content. A nonlinear and significant pedotransfer function (R2 = 0.88) was developed to predict the maximum acceptable pressure to be applied during tillage operations by introducing critical bulk density for plant growth in the compression model. The developed pedotransfer function provides a useful tool to link the mechanical behavior and tilth of the soils studied. PMID:27099925

  6. Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs

    NASA Astrophysics Data System (ADS)

    Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.

    2013-07-01

    Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc-1, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc-1.

  7. Simplified, rapid, and inexpensive estimation of water primary productivity based on chlorophyll fluorescence parameter Fo.

    PubMed

    Chen, Hui; Zhou, Wei; Chen, Weixian; Xie, Wei; Jiang, Liping; Liang, Qinlang; Huang, Mingjun; Wu, Zongwen; Wang, Qiang

    2017-04-01

    Primary productivity in the water environment relies on the photosynthetic production of microalgae. Chlorophyll fluorescence is widely used to detect the growth status and photosynthetic efficiency of microalgae. In this study, a method was established to determine the Chl a content, cell density of microalgae, and water primary productivity by measuring the chlorophyll fluorescence parameter Fo. A significant linear relationship between the chlorophyll fluorescence parameter Fo and Chl a content of microalgae, as well as between Fo and cell density, was observed under pure-culture conditions. Furthermore, water samples collected from natural aquaculture ponds were used to validate the correlation between Fo and water primary productivity, which is closely related to Chl a content in water. Thus, for a given pure culture of microalgae or phytoplankton (mainly microalgae) in aquaculture ponds or other natural ponds for which the relationship between the Fo value and Chl a content or cell density can be established, Chl a content or cell density can be determined by measuring the Fo value, thereby making it possible to calculate the water primary productivity. It is believed that this method can provide a convenient way of efficiently estimating the primary productivity in natural aquaculture ponds and can bring economic value to limnetic ecology assessment, as well as to algal bloom monitoring. Copyright © 2017 Elsevier GmbH. All rights reserved.
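
The method rests on a linear calibration between Fo and Chl a: fit the line on samples with known Chl a, then invert it for new water samples. A sketch with invented calibration pairs (not the study's measurements):

```python
def linfit(x, y):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Hypothetical calibration pairs: Fo readings vs. measured Chl a (mg/L).
fo = [110.0, 220.0, 330.0, 440.0, 550.0]
chla = [1.1, 2.0, 3.1, 4.0, 5.1]
a, b = linfit(fo, chla)

# Estimate Chl a for a new water sample from its Fo reading alone.
est = a * 275.0 + b
print(round(est, 2))
```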

  8. On the design of paleoenvironmental data networks for estimating large-scale patterns of climate

    NASA Astrophysics Data System (ADS)

    Kutzbach, J. E.; Guetter, P. J.

    1980-09-01

    Guidelines are determined for the spatial density and location of climatic variables (temperature and precipitation) that are appropriate for estimating the continental- to hemispheric-scale pattern of atmospheric circulation (sea-level pressure). Because instrumental records of temperature and precipitation simulate the climatic information that is contained in certain paleoenvironmental records (tree-ring, pollen, and written-documentary records, for example), these guidelines provide useful sampling strategies for reconstructing the pattern of atmospheric circulation from paleoenvironmental records. The statistical analysis uses a multiple linear regression model. The sampling strategies consist of changes in site density (from 0.5 to 2.5 sites per million square kilometers) and site location (from western North American sites only to sites in Japan, North America, and western Europe) of the climatic data. The results showed that the accuracy of specification of the pattern of sea-level pressure: (1) is improved if sites with climatic records are spread as uniformly as possible over the area of interest; (2) increases with increasing site density, at least up to the maximum site density used in this study; (3) is improved if sites cover an area that extends considerably beyond the limits of the area of interest. The accuracy of specification was lower for independent data than for the data that were used to develop the regression model; some skill was found for almost all sampling strategies.

  9. Relationship between field-aligned currents and inverted-V parallel potential drops observed at midaltitudes

    NASA Astrophysics Data System (ADS)

    Sakanoi, T.; Fukunishi, H.; Mukai, T.

    1995-10-01

    The inverted-V field-aligned acceleration region existing in the altitude range of several thousand kilometers plays an essential role for the magnetosphere-ionosphere coupling system. The adiabatic plasma theory predicts a linear relationship between field-aligned current density (J∥) and parallel potential drop (Φ∥), that is, J∥=KΦ∥, where K is the field-aligned conductance. We examined this relationship using the charged particle and magnetic field data obtained from the Akebono (Exos D) satellite. The potential drop above the satellite was derived from the peak energy of downward electrons, while the potential drop below the satellite was derived from two different methods: the peak energy of upward ions and the energy-dependent widening of electron loss cone. On the other hand, field-aligned current densities in the inverted-V region were estimated from the Akebono magnetometer data. Using these potential drops and field-aligned current densities, we estimated the linear field-aligned conductance KJΦ. Further, we obtained the corrected field-aligned conductance KCJΦ by applying the full Knight's formula to the current-voltage relationship. We also independently estimated the field-aligned conductance KTN from the number density and the thermal temperature of magnetospheric source electrons which were obtained by fitting accelerated Maxwellian functions for precipitating electrons. The results are summarized as follows: (1) The latitudinal dependence of parallel potential drops is characterized by a narrow V-shaped structure with a width of 0.4°-1.0°. (2) Although the inverted-V potential region exactly corresponds to the upward field aligned current region, the latitudinal dependence of upward current intensity is an inverted-U shape rather than an inverted-V shape. Thus it is suggested that the field-aligned conductance KCJΦ changes with a V-shaped latitudinal dependence. 
In many cases, KCJΦ values at the edge of the inverted-V region are about 5-10 times larger than those at the center. (3) By comparing KCJΦ with KTN, KCJΦ is found to be about 2-20 times larger than KTN. These results suggest that low-energy electrons such as trapped electrons, secondary and back-scattered electrons, and ionospheric electrons significantly contribute to upward field-aligned currents in the inverted-V region. It is therefore inferred that nonadiabatic pitch angle scattering processes play an important role in the inverted-V region.
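
The linear current-voltage relation J∥ = KΦ∥ makes the field-aligned conductance a simple ratio of the two measured quantities. A sketch with illustrative inverted-V numbers (invented, not Akebono data):

```python
def field_aligned_conductance(j_par, phi_par):
    """Linear field-aligned conductance K = J_par / Phi_par from the
    adiabatic current-voltage relation J_par = K * Phi_par."""
    return j_par / phi_par

# Illustrative numbers: an upward current density of 2 uA/m^2 associated
# with a 4 kV parallel potential drop.
K = field_aligned_conductance(2.0e-6, 4.0e3)
print(K)   # conductance in siemens per square metre
```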

  10. Data Series Subtraction with Unknown and Unmodeled Background Noise

    NASA Technical Reports Server (NTRS)

    Vitale, Stefano; Congedo, Giuseppe; Dolesi, Rita; Ferroni, Valerio; Hueller, Mauro; Vetrugno, Daniele; Weber, William Joseph; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo

    2014-01-01

    LISA Pathfinder (LPF), the precursor mission to a gravitational wave observatory of the European Space Agency, will measure the degree to which two test masses can be put into free fall, aiming to demonstrate a suppression of disturbance forces corresponding to a residual relative acceleration with a power spectral density (PSD) below (30 fm s⁻²/√Hz)² around 1 mHz. In LPF data analysis, the disturbance forces are obtained as the difference between the acceleration data and a linear combination of other measured data series. In many circumstances, the coefficients for this linear combination are obtained by fitting these data series to the acceleration, and the disturbance forces appear then as the data series of the residuals of the fit. Thus the background noise or, more precisely, its PSD, whose knowledge is needed to build up the likelihood function in ordinary maximum likelihood fitting, is here unknown, and its estimate constitutes instead one of the goals of the fit. In this paper we present a fitting method that does not require the knowledge of the PSD of the background noise. The method is based on the analytical marginalization of the posterior parameter probability density with respect to the background noise PSD, and returns an estimate both for the fitting parameters and for the PSD. We show that both these estimates are unbiased, and that, when using averaged Welch's periodograms for the residuals, the estimate of the PSD is consistent, as its error tends to zero with the inverse square root of the number of averaged periodograms. Additionally, we find that the method is equivalent to some implementations of iteratively reweighted least-squares fitting. We have tested the method both on simulated data of known PSD and on data from several experiments performed with the LISA Pathfinder end-to-end mission simulator.
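
The consistency claim rests on averaged Welch periodograms: the spread of the PSD estimate shrinks as the inverse square root of the number of averaged segments. A minimal sketch of non-overlapping segment averaging on seeded white noise (plain O(n²) DFT, without the windowing and overlap a full Welch estimate would use):

```python
import cmath
import random

def periodogram(x):
    """Raw periodogram |DFT|^2 / n of one data segment."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 / n for k in range(n)]

def welch_psd(x, seg_len):
    """Average the periodograms of non-overlapping segments; the spread
    of the estimate shrinks as 1/sqrt(number of segments)."""
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    psds = [periodogram(s) for s in segs]
    return [sum(p[k] for p in psds) / len(psds) for k in range(seg_len)]

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(1024)]  # unit-variance noise
psd = welch_psd(noise, 32)           # average of 32 periodograms
flat_level = sum(psd) / len(psd)     # should sit near the noise variance of 1
print(round(flat_level, 2))
```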

  11. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
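
A bias-corrected, transformed-linear rating curve fits log concentration against log discharge and multiplies back-transformed predictions by a correction factor. The sketch below uses Duan's smearing estimator as one common choice of correction (the report does not specify this particular estimator), on invented data:

```python
import math

def fit_rating_curve(q, c):
    """Fit log(c) = a + b*log(q) by least squares and return (a, b, bcf),
    where bcf = mean(exp(residual)) is Duan's smearing bias-correction
    factor for back-transforming predictions to concentration units."""
    lx = [math.log(v) for v in q]
    ly = [math.log(v) for v in c]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly)) / \
        sum((xi - mx) ** 2 for xi in lx)
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(lx, ly)]
    bcf = sum(math.exp(r) for r in resid) / n
    return a, b, bcf

# Invented discharge (m^3/s) and suspended-sediment concentration (mg/L).
q = [1.0, 2.0, 4.0, 8.0, 16.0]
c = [5.0, 11.0, 19.0, 42.0, 78.0]
a, b, bcf = fit_rating_curve(q, c)
pred = bcf * math.exp(a) * 10.0 ** b   # bias-corrected estimate at q = 10
print(round(b, 2), bcf >= 1.0)
```

By Jensen's inequality the smearing factor is always at least 1, which is exactly the back-transformation bias it compensates for.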

  12. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
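
Robust regression of the linear component can be sketched with a Huber M-estimator fitted by iteratively reweighted least squares. This one-covariate toy (the paper itself handles large-dimensional covariates with nonconcave regularization) shows how a gross outlier is down-weighted:

```python
def huber_irls(x, y, delta=1.0, iters=200):
    """Robust fit of y ~ a + b*x under the Huber loss via iteratively
    reweighted least squares (IRLS)."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # Huber weights: quadratic zone gets weight 1, tails are down-weighted.
        w = [1.0 if abs(ri) <= delta else delta / abs(ri) for ri in r]
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        b = sum(wi * (xi - mx) * (yi - my)
                for wi, xi, yi in zip(w, x, y)) / \
            sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        a = my - b * mx
    return a, b

# y = 2x with one gross outlier; the robust slope stays near 2 where an
# ordinary least-squares fit would be dragged upward.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 2.0, 4.0, 6.0, 8.0, 30.0]    # last response should have been 10
a, b = huber_irls(x, y)
print(round(b, 1))
```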

  13. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  14. Development of property-transfer models for estimating the hydraulic properties of deep sediments at the Idaho National Engineering and Environmental Laboratory, Idaho

    USGS Publications Warehouse

    Winfield, Kari A.

    2005-01-01

    Because characterizing the unsaturated hydraulic properties of sediments over large areas or depths is costly and time consuming, development of models that predict these properties from more easily measured bulk-physical properties is desirable. At the Idaho National Engineering and Environmental Laboratory, the unsaturated zone is composed of thick basalt flow sequences interbedded with thinner sedimentary layers. Determining the unsaturated hydraulic properties of sedimentary layers is one step in understanding water flow and solute transport processes through this complex unsaturated system. Multiple linear regression was used to construct simple property-transfer models for estimating the water-retention curve and saturated hydraulic conductivity of deep sediments at the Idaho National Engineering and Environmental Laboratory. The regression models were developed from 109 core sample subsets with laboratory measurements of hydraulic and bulk-physical properties. The core samples were collected at depths of 9 to 175 meters at two facilities within the southwestern portion of the Idaho National Engineering and Environmental Laboratory-the Radioactive Waste Management Complex, and the Vadose Zone Research Park southwest of the Idaho Nuclear Technology and Engineering Center. Four regression models were developed using bulk-physical property measurements (bulk density, particle density, and particle size) as the potential explanatory variables. Three representations of the particle-size distribution were compared: (1) textural-class percentages (gravel, sand, silt, and clay), (2) geometric statistics (mean and standard deviation), and (3) graphical statistics (median and uniformity coefficient). The four response variables, estimated from linear combinations of the bulk-physical properties, included saturated hydraulic conductivity and three parameters that define the water-retention curve. 
For each core sample, values of each water-retention parameter were estimated from the appropriate regression equation and used to calculate an estimated water-retention curve. The degree to which the estimated curve approximated the measured curve was quantified using a goodness-of-fit indicator, the root-mean-square error. Comparison of the root-mean-square-error distributions for each alternative particle-size model showed that the estimated water-retention curves were insensitive to the way the particle-size distribution was represented. Bulk density, the median particle diameter, and the uniformity coefficient were chosen as input parameters for the final models. The property-transfer models developed in this study allow easy determination of hydraulic properties without need for their direct measurement. Additionally, the models provide the basis for development of theoretical models that rely on physical relationships between the pore-size distribution and the bulk-physical properties of the media. With this adaptation, the property-transfer models should have greater application throughout the Idaho National Engineering and Environmental Laboratory and other geographic locations.
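
The goodness-of-fit indicator used to compare estimated and measured retention curves is the root-mean-square error. A minimal sketch on invented water-content values at matching pressure heads:

```python
import math

def rmse(measured, estimated):
    """Root-mean-square error between measured and estimated curves."""
    return math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, estimated))
                     / len(measured))

# Hypothetical volumetric water contents at five pressure heads.
measured = [0.42, 0.38, 0.31, 0.22, 0.15]
estimated = [0.40, 0.37, 0.33, 0.24, 0.14]
print(round(rmse(measured, estimated), 3))
```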

  15. The changing contribution of top-down and bottom-up limitation of mesopredators during 220 years of land use and climate change.

    PubMed

    Pasanen-Mortensen, Marianne; Elmhagen, Bodil; Lindén, Harto; Bergström, Roger; Wallgren, Märtha; van der Velde, Ype; Cousins, Sara A O

    2017-05-01

    Apex predators may buffer bottom-up driven ecosystem change, as top-down suppression may dampen herbivore and mesopredator responses to increased resource availability. However, theory suggests that for this buffering capacity to be realized, the equilibrium abundance of apex predators must increase. This raises the question: will apex predators maintain herbivore/mesopredator limitation, if bottom-up change relaxes resource constraints? Here, we explore changes in mesopredator (red fox Vulpes vulpes) abundance over 220 years in response to eradication and recovery of an apex predator (Eurasian lynx Lynx lynx), and changes in land use and climate which are linked to resource availability. A three-step approach was used. First, recent data from Finland and Sweden were modelled to estimate linear effects of lynx density, land use and winter temperature on fox density. Second, lynx density, land use and winter temperature were estimated in a 22,650 km² focal area in boreal and boreo-nemoral Sweden in the years 1830, 1920, 2010 and 2050. Third, the models and estimates were used to project historic and future fox densities in the focal area. Projected fox density was lowest in 1830 when lynx density was high, winters cold and the proportion of cropland low. Fox density peaked in 1920 due to lynx eradication, a mesopredator release boosted by favourable bottom-up changes: milder winters and cropland expansion. By 2010, lynx recolonization had reduced fox density, but it remained higher than in 1830, partly due to the bottom-up changes. Comparing 1830 to 2010, the contribution of top-down limitation decreased, while environment enrichment relaxed bottom-up limitation. Future scenarios indicated that by 2050, lynx density would have to increase by 79% to compensate for a projected climate-driven increase in fox density. We highlight that although top-down limitation in theory can buffer bottom-up change, this requires compensatory changes in apex predator abundance. 
Hence apex predator recolonization/recovery to historical levels would not be sufficient to compensate for widespread changes in climate and land use, which have relaxed the resource constraints for many herbivores and mesopredators. Variation in bottom-up conditions may also contribute to context dependence in apex predator effects. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.

  16. Statistical Tests of System Linearity Based on the Method of Surrogate Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, N.; Paez, T.; Red-Horse, J.

    When dealing with measured data from dynamic systems we often make the tacit assumption that the data are generated by linear dynamics. While some systematic tests for linearity and determinism are available (for example, the coherence function, the probability density function, and the bispectrum), further tests that quantify the existence and the degree of nonlinearity are clearly needed. In this paper we demonstrate a statistical test for the nonlinearity exhibited by a dynamic system excited by Gaussian random noise. We perform the usual division of the input and response time series data into blocks as required by the Welch method of spectrum estimation and search for significant relationships between a given input frequency and response at harmonics of the selected input frequency. We argue that systematic tests based on the recently developed statistical method of surrogate data readily detect significant nonlinear relationships. The paper elucidates the method of surrogate data. Typical results are illustrated for a linear single degree-of-freedom system and for a system with polynomial stiffness nonlinearity.
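
A standard way to build surrogate data is to randomize the Fourier phases of the measured series while keeping its amplitude spectrum: any statistic that differs systematically between the data and its surrogates then points to nonlinearity. A sketch of that construction (plain O(n²) DFT for clarity; real implementations use an FFT):

```python
import cmath
import math
import random

def phase_randomized_surrogate(x, rng):
    """Surrogate sharing the power spectrum of x but with random Fourier
    phases: linear (second-order) structure is kept, nonlinear structure
    is destroyed."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
         for k in range(n)]
    # Randomize phases in conjugate-symmetric pairs so the result is real;
    # the DC bin (and the Nyquist bin, for even n) are left untouched.
    for k in range(1, (n + 1) // 2):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        mag = abs(X[k])
        X[k] = mag * cmath.exp(1j * phi)
        X[n - k] = mag * cmath.exp(-1j * phi)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

rng = random.Random(7)
x = [math.sin(0.3 * t) for t in range(64)]      # toy "measured" series
s = phase_randomized_surrogate(x, rng)
# Total power (and the mean) are preserved by construction.
print(abs(sum(v * v for v in x) - sum(v * v for v in s)) < 1e-6)
```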

  17. True orbit simulation of piecewise linear and linear fractional maps of arbitrary dimension using algebraic numbers

    NASA Astrophysics Data System (ADS)

    Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji

    2015-06-01

    We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits which reproduce typical behaviors (inherent behaviors) of these systems, by properly selecting algebraic numbers in accordance with the dimension of the target system, and involving only integer arithmetic. By applying our method to three dynamical systems—that is, the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system—we demonstrate that it can reproduce their typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits, by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit, which is inherently possessed by the system.
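
Exact simulation hinges on doing the map arithmetic without floating-point round-off. The toy below iterates the baker's transformation with exact rational arithmetic; note that the paper uses algebraic numbers precisely because rational starting points yield atypical (eventually periodic) orbits, so this only illustrates the exact-arithmetic mechanics:

```python
from fractions import Fraction as F

def baker(p):
    """One step of the baker's transformation on the unit square,
    computed exactly (no floating-point round-off)."""
    x, y = p
    if x < F(1, 2):
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

pt = (F(2, 7), F(3, 5))        # exact rational starting point (illustrative)
for _ in range(5):
    pt = baker(pt)
print(pt)                      # every coordinate is still an exact rational
```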

  18. Linear-scaling method for calculating nuclear magnetic resonance chemical shifts using gauge-including atomic orbitals within Hartree-Fock and density-functional theory.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-08-07

    Details of a new density matrix-based formulation for calculating nuclear magnetic resonance chemical shifts at both Hartree-Fock and density functional theory levels are presented. For systems with a nonvanishing highest occupied molecular orbital-lowest unoccupied molecular orbital gap, the method allows us to reduce the asymptotic scaling order of the computational effort from cubic to linear, so that molecular systems with 1000 and more atoms can be tackled with today's computers. The key feature is a reformulation of the coupled-perturbed self-consistent field (CPSCF) theory in terms of the one-particle density matrix (D-CPSCF), which avoids entirely the use of canonical MOs. By means of a direct solution for the required perturbed density matrices and the adaptation of linear-scaling integral contraction schemes, the overall scaling of the computational effort is reduced to linear. A particular focus of our formulation is to ensure numerical stability when sparse-algebra routines are used to obtain an overall linear-scaling behavior.

  19. 3D non-linear inversion of magnetic anomalies caused by prismatic bodies using differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Balkaya, Çağlayan; Ekinci, Yunus Levent; Göktürkler, Gökhan; Turan, Seçil

    2017-01-01

    3D non-linear inversion of total field magnetic anomalies caused by vertical-sided prismatic bodies has been achieved by differential evolution (DE), which is one of the population-based evolutionary algorithms. We have demonstrated the efficiency of the algorithm on both synthetic and field magnetic anomalies by estimating horizontal distances from the origin in both north and east directions, depths to the top and bottom of the bodies, inclination and declination angles of the magnetization, and intensity of magnetization of the causative bodies. In the synthetic anomaly case, we have considered both noise-free and noisy data sets due to two vertical-sided prismatic bodies in a non-magnetic medium. For the field case, airborne magnetic anomalies originating from intrusive granitoids at the eastern part of the Biga Peninsula (NW Turkey), which is composed of various kinds of sedimentary, metamorphic and igneous rocks, have been inverted and interpreted. Since the granitoids are the outcropped rocks in the field, the estimations for the top depths of two prisms representing the magnetic bodies were excluded during the inversion studies. Estimated bottom depths are in good agreement with the ones obtained by a different approach based on 3D modelling of pseudogravity anomalies. Accuracy of the estimated parameters from both cases has also been investigated via probability density functions. Based on the tests in the present study, it can be concluded that DE is a useful tool for the parameter estimation of source bodies using magnetic anomalies.
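    A minimal, self-contained sketch of the DE/rand/1/bin scheme on a toy parameter-estimation problem; the Gaussian-bump "anomaly" is a hypothetical stand-in for the prismatic-body forward model, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: anomaly = amp * exp(-((x - x0) / w)**2)
x = np.linspace(-10.0, 10.0, 81)
true_p = np.array([5.0, 2.0, 3.0])          # amp, x0, w
def forward(p):
    return p[0] * np.exp(-((x - p[1]) / p[2]) ** 2)
data = forward(true_p)                       # noise-free synthetic data

def misfit(p):
    return np.sum((forward(p) - data) ** 2)

# DE/rand/1/bin loop: F = mutation weight, CR = crossover rate
lo, hi = np.array([0.0, -10.0, 0.1]), np.array([10.0, 10.0, 10.0])
NP, F, CR = 30, 0.7, 0.9
pop = rng.uniform(lo, hi, (NP, 3))
cost = np.array([misfit(p) for p in pop])
for _ in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        trial = np.where(rng.random(3) < CR, a + F * (b - c), pop[i])  # mutate + crossover
        trial = np.clip(trial, lo, hi)
        ct = misfit(trial)
        if ct <= cost[i]:                    # greedy selection
            pop[i], cost[i] = trial, ct

best = pop[np.argmin(cost)]
print(best)  # should approach [5, 2, 3]
```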

  20. Application of dielectric spectroscopy for monitoring high cell density in monoclonal antibody producing CHO cell cultivations.

    PubMed

    Párta, László; Zalai, Dénes; Borbély, Sándor; Putics, Akos

    2014-02-01

    The application of dielectric spectroscopy was frequently investigated as an on-line cell culture monitoring tool; however, it still requires supportive data and experience in order to become a robust technique. In this study, dielectric spectroscopy was used to predict viable cell density (VCD) at industrially relevant high levels in concentrated fed-batch culture of Chinese hamster ovary cells producing a monoclonal antibody for pharmaceutical purposes. For on-line dielectric spectroscopy measurements, capacitance was scanned within a wide range of frequency values (100-19,490 kHz) in six parallel cell cultivation batches. Prior to detailed mathematical analysis of the collected data, principal component analysis (PCA) was applied to compare dielectric behavior of the cultivations. PCA analysis resulted in detecting measurement disturbances. By using the measured spectroscopic data, partial least squares regression (PLS), Cole-Cole, and linear modeling were applied and compared in order to predict VCD. The Cole-Cole and the PLS model provided reliable prediction over the entire cultivation including both the early and decline phases of cell growth, while the linear model failed to estimate VCD in the later, declining cultivation phase. In regards to the measurement error sensitivity, remarkable differences were shown among PLS, Cole-Cole, and linear modeling. VCD prediction accuracy could be improved in the runs with measurement disturbances by first derivative pre-treatment in PLS and by parameter optimization of the Cole-Cole modeling.
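    The Cole-Cole model mentioned above describes the beta-dispersion of the permittivity spectrum; the dispersion increment is the quantity commonly related to viable cell density. A small forward-model sketch with entirely hypothetical parameter values:

```python
import numpy as np

def cole_cole(f, eps_inf, d_eps, fc, alpha):
    """Real part of the Cole-Cole spectrum; d_eps (the beta-dispersion
    increment) is the quantity usually taken as proportional to VCD."""
    return eps_inf + (d_eps / (1 + (1j * f / fc) ** (1 - alpha))).real

f = np.logspace(2, np.log10(19490), 25)   # kHz, the range scanned in the abstract
spectrum = cole_cole(f, eps_inf=5.0, d_eps=30.0, fc=1500.0, alpha=0.1)

# Dual-frequency permittivity difference as a simple cell-density proxy
print(spectrum[0] - spectrum[-1])
```

    Fitting d_eps, fc and alpha to a measured capacitance spectrum (rather than evaluating them forward as here) is what distinguishes the Cole-Cole approach from the simple linear model that failed in the decline phase.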

  1. Stream Communities Along a Catchment Land-Use Gradient: Subsidy-Stress Responses to Pastoral Development

    NASA Astrophysics Data System (ADS)

    Niyogi, Dev K.; Koren, Mark; Arbuckle, Chris J.; Townsend, Colin R.

    2007-02-01

    When native grassland catchments are converted to pasture, the main effects on stream physicochemistry are usually related to increased nutrient concentrations and fine-sediment input. We predicted that increasing nutrient concentrations would produce a subsidy-stress response (where several ecological metrics first increase and then decrease at higher concentrations) and that increasing sediment cover of the streambed would produce a linear decline in stream health. We predicted that the net effect of agricultural development, estimated as percentage pastoral land cover, would have a nonlinear subsidy-stress or threshold pattern. In our suite of 21 New Zealand streams, epilithic algal biomass and invertebrate density and biomass were higher in catchments with a higher proportion of pastoral land cover, responding mainly to increased nutrient concentration. Invertebrate species richness had a linear, negative relationship with fine-sediment cover but was unrelated to nutrients or pastoral land cover. In accord with our predictions, several invertebrate stream health metrics (Ephemeroptera-Plecoptera-Trichoptera density and richness, New Zealand Macroinvertebrate Community Index, and percent abundance of noninsect taxa) had nonlinear relationships with pastoral land cover and nutrients. Most invertebrate health metrics usually had linear negative relationships with fine-sediment cover. In this region, stream health, as indicated by macroinvertebrates, primarily followed a subsidy-stress pattern with increasing pastoral development; management of these streams should focus on limiting development beyond the point where negative effects are seen.

  2. A novel methodology for non-linear system identification of battery cells used in non-road hybrid electric vehicles

    NASA Astrophysics Data System (ADS)

    Unger, Johannes; Hametner, Christoph; Jakubek, Stefan; Quasthoff, Marcus

    2014-12-01

    An accurate state of charge (SoC) estimation of a traction battery in hybrid electric non-road vehicles, which possess higher dynamics and power densities than on-road vehicles, requires a precise battery cell terminal voltage model. This paper presents a novel methodology for non-linear system identification of battery cells to obtain precise battery models. The methodology comprises the architecture of local model networks (LMN) and optimal model-based design of experiments (DoE). Three main novelties are proposed: 1) optimal model-based DoE, which aims at highly dynamic excitation of the battery cells in the load ranges frequently used in operation; 2) the integration of corresponding inputs in the LMN to capture the non-linearities of SoC, relaxation and hysteresis, as well as temperature effects; 3) enhancements to the local linear model tree (LOLIMOT) construction algorithm to achieve a physically appropriate interpretation of the LMN. The framework is applicable to different battery cell chemistries and different temperatures, and is real-time capable, which is shown on an industrial PC. The accuracy of the obtained non-linear battery model is demonstrated on cells with different chemistries and temperatures. The results show significant improvement due to optimal experiment design and integration of the battery non-linearities within the LMN structure.

  3. Correlation Between Bone Density and Instantaneous Torque at Implant Site Preparation: A Validation on Polyurethane Foam Blocks of a Device Assessing Density of Jawbones.

    PubMed

    Di Stefano, Danilo Alessio; Arosio, Paolo

    2016-01-01

    Bone density at implant placement sites is one of the key factors affecting implant primary stability, which is a determinant for implant osseointegration and rehabilitation success. Site-specific bone density assessment is, therefore, of paramount importance. Recently, an implant micromotor endowed with an instantaneous torque-measuring system has been introduced. The aim of this study was to assess the reliability of this system. Five blocks with different densities (0.16, 0.26, 0.33, 0.49, and 0.65 g/cm³) were used. A single trained operator measured the density of one of them (0.33 g/cm³) by means of five different devices (20 measurements/device). The five resulting datasets were analyzed through the analysis of variance (ANOVA) model to investigate interdevice variability. As differences were not significant (P = .41), the five devices were each assigned to a different operator, who collected 20 density measurements for each block, both under irrigation (I) and without irrigation (NI). Measurements were pooled and averaged for each block, and their correlation with the actual block-density values was investigated using linear regression analysis. The possible effect of irrigation on density measurement was additionally assessed. Different devices provided reproducible, homogeneous results. No significant interoperator variability was observed. Within the physiologic range of densities (> 0.30 g/cm³), the linear regression analysis showed a significant linear correlation between the mean torque measurements and the actual bone densities under both drilling conditions (r = 0.990 [I], r = 0.999 [NI]). Calibration lines were drawn under both conditions. Values collected under irrigation were lower than those collected without irrigation at all densities. The NI/I mean torque ratio was shown to decrease linearly with density (r = 0.998). The mean error introduced by the device-operator system was less than 10% in the range of normal jawbone density. Measurements performed with the device were linearly correlated with the blocks' bone densities. The results validate the device as an objective intraoperative tool for bone-density assessment that may contribute to proper jawbone-density evaluation and implant-insertion planning.

  4. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.
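    The classical line-scanning baseline that the proposed Kalman-filter method improves upon integrates wrapped phase differences along a scan line (Itoh's method, which is what numpy.unwrap implements in 1D). A small simulation with hypothetical phase values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Smooth true phase spanning several multiples of 2*pi, plus mild noise,
# wrapped into (-pi, pi] as delivered by an interferometric measurement
n = 512
true_phase = 12.0 * np.sin(np.linspace(0.0, np.pi, n)) + np.linspace(0.0, 20.0, n)
noisy = true_phase + 0.05 * rng.standard_normal(n)
wrapped = np.angle(np.exp(1j * noisy))

unwrapped = np.unwrap(wrapped)  # integrate wrapped differences along the line

# Compare up to the unknown constant offset of the starting pixel
err = unwrapped - true_phase
err -= err[0]
print(np.max(np.abs(err)))
```

    Line scanning is exact whenever successive phase differences stay below pi; heavy noise or fringe undersampling breaks that assumption, which is where the polynomial-approximation and Kalman-filter machinery of the paper pays off.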

  5. Electrostatic interaction between stereocilia: I. Its role in supporting the structure of the hair bundle.

    PubMed

    Dolgobrodov, S G; Lukashkin, A N; Russell, I J

    2000-12-01

    This paper provides theoretical estimates for the forces of electrostatic interaction between adjacent stereocilia in auditory and vestibular hair cells. Estimates are given for parameters within the measured physiological range using constraints appropriate for the known geometry of the hair bundle. Stereocilia are assumed to possess an extended, negatively charged surface coat, the glycocalyx. Different charge distribution profiles within the glycocalyx are analysed. It is shown that charged glycocalices on the apical surface of the hair cells can support spatial separation between adjacent stereocilia in the hair bundles through electrostatic repulsion between stereocilia. The charge density profile within the glycocalyx is a crucial parameter. In fact, attraction instead of repulsion between adjacent stereocilia will be observed if the charge of the glycocalyx is concentrated near the membrane of the stereocilia, thereby making this type of charge distribution unlikely. The forces of electrostatic interaction between stereocilia may influence the mechanical properties of the hair bundle and, being strongly non-linear, contribute to the non-linear phenomena that have been recorded from the periphery of the auditory and vestibular systems.

  6. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    PubMed

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
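    The first contribution builds on kernel density estimation; as a rough fixed-bandwidth sketch (the paper's "super kernel density estimation" uses an adaptive kernel, and the two point clusters below are hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Streaming points arriving in one frame: two hypothetical clusters
pts = np.concatenate([rng.normal(-2.0, 0.5, (200, 2)),
                      rng.normal(2.0, 0.8, (300, 2))])

def kde_grid(points, grid, bandwidth):
    """Gaussian kernel density estimate evaluated at the grid locations."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).sum(1) / (
        len(points) * 2 * np.pi * bandwidth ** 2)

xs = np.linspace(-4.0, 4.0, 41)
grid = np.stack(np.meshgrid(xs, xs), -1).reshape(-1, 2)
density = kde_grid(pts, grid, bandwidth=0.5)
peak = grid[np.argmax(density)]
print(peak)  # densest location, near the tight cluster
```

    An adaptive kernel would shrink the bandwidth in crowded regions to resolve structure and widen it in sparse regions to suppress speckle, which is the overlap problem the paper addresses.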

  7. Demographic models reveal the shape of density dependence for a specialist insect herbivore on variable host plants.

    PubMed

    Miller, Tom E X

    2007-07-01

    1. It is widely accepted that density-dependent processes play an important role in most natural populations. However, persistent challenges in our understanding of density-dependent population dynamics include evaluating the shape of the relationship between density and demographic rates (linear, concave, convex), and identifying extrinsic factors that can mediate this relationship. 2. I studied the population dynamics of the cactus bug Narnia pallidicornis on host plants (Opuntia imbricata) that varied naturally in relative reproductive effort (RRE, the proportion of meristems allocated to reproduction), an important plant quality trait. I manipulated per-plant cactus bug densities, quantified subsequent dynamics, and fit stage-structured models to the experimental data to ask if and how density influences demographic parameters. 3. In the field experiment, I found that populations with variable starting densities quickly converged upon similar growth trajectories. In the model-fitting analyses, the data strongly supported a model that defined the juvenile cactus bug retention parameter (joint probability of surviving and not dispersing) as a nonlinear decreasing function of density. The estimated shape of this relationship shifted from concave to convex with increasing host-plant RRE. 4. The results demonstrate that host-plant traits are critical sources of variation in the strength and shape of density dependence in insects, and highlight the utility of integrated experimental-theoretical approaches for identifying processes underlying patterns of change in natural populations.

  8. Biological adaptive control model: a mechanical analogue of multi-factorial bone density adaptation.

    PubMed

    Davidson, Peter L; Milburn, Peter D; Wilson, Barry D

    2004-03-21

    The mechanism of how bone adapts to every day demands needs to be better understood to gain insight into situations in which the musculoskeletal system is perturbed. This paper offers a novel multi-factorial mathematical model of bone density adaptation which combines previous single-factor models in a single adaptation system as a means of gaining this insight. Unique aspects of the model include provision for interaction between factors and an estimation of the relative contribution of each factor. This interacting system is considered analogous to a Newtonian mechanical system and the governing response equation is derived as a linear version of the adaptation process. The transient solution to sudden environmental change is found to be exponential or oscillatory depending on the balance between cellular activation and deactivation frequencies.
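    The mechanical analogue can be sketched as a second-order linear system responding to a step change in its environment; depending on the damping ratio the transient is exponential or oscillatory, as the abstract states. The parameter values below are hypothetical:

```python
import numpy as np

def step_response(omega, zeta, t):
    """Unit-step response of x'' + 2*zeta*omega*x' + omega**2*x = omega**2,
    integrated with a simple semi-implicit Euler scheme."""
    dt = t[1] - t[0]
    x, v = 0.0, 0.0
    out = []
    for _ in t:
        a = omega**2 * (1.0 - x) - 2.0 * zeta * omega * v
        v += a * dt
        x += v * dt
        out.append(x)
    return np.array(out)

t = np.linspace(0.0, 20.0, 4000)
overdamped = step_response(1.0, 2.0, t)   # exponential approach, no overshoot
oscillatory = step_response(1.0, 0.2, t)  # damped oscillation about the new level
print(overdamped.max(), oscillatory.max())
```

    In the bone-adaptation analogy, the step is the sudden environmental change and the damping ratio reflects the balance between cellular activation and deactivation frequencies.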

  9. Tidal interactions in the expanding universe - The formation of prolate systems

    NASA Technical Reports Server (NTRS)

    Binney, J.; Silk, J.

    1979-01-01

    The study estimates the magnitude of the anisotropy that can be tidally induced in neighboring initially spherical protostructures, be they protogalaxies, protoclusters, or even uncollapsed density enhancements in the large-scale structure of the universe. It is shown that the linear analysis of tidal interactions developed by Peebles (1969) predicts that the anisotropy energy of a perturbation grows to first order in a small dimensionless parameter, whereas the net angular momentum acquired is of second order. A simple model is presented for the growth of anisotropy by tidal interactions during the nonlinear stage of the development of perturbations. A possible observational test is described of the alignment predicted by the model between the orientations of large-scale perturbations and the positions of neighboring density enhancements.

  10. Influence of a density increase on the evolution of the Kelvin-Helmholtz instability and vortices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amerstorfer, U. V.; Erkaev, N. V.; Institute of Computational Modelling, 660036 Krasnoyarsk

    2010-07-15

    Results of two-dimensional nonlinear numerical simulations of the magnetohydrodynamic Kelvin-Helmholtz instability are presented. A boundary layer of a certain width is assumed, which separates the plasma in the upper layer from the plasma in the lower layer. A special focus is given on the influence of a density increase toward the lower layer. The evolution of the Kelvin-Helmholtz instability can be divided into three different phases, namely, a linear growth phase at the beginning, followed by a nonlinear phase with regular structures of the vortices, and finally, a turbulent phase with nonregular structures. The spatial scales of the vortices are about five times the initial width of the boundary layer. The considered configuration is similar to the situation around unmagnetized planets, where the solar wind (upper plasma layer) streams past the ionosphere (lower plasma layer), and thus the plasma density increases toward the planet. The evolving vortices might detach around the terminator of the planet and eventually so-called plasma clouds might be formed, through which ionospheric material can be lost. For the special case of a Venus-like planet, loss rates are estimated, which are of the order of estimated loss rates from observations at Venus.

  11. Diffusive charge transport in graphene

    NASA Astrophysics Data System (ADS)

    Chen, Jianhao

    The physical mechanisms limiting the mobility of graphene on SiO2 are studied and printed graphene devices on a flexible substrate are realized. Intentional addition of charged scattering impurities is used to study the effects of charged impurities. Atomic-scale defects are created by noble-gas ion irradiation to study the effect of unitary scatterers. The results show that charged impurities and atomic-scale defects both lead to conductivity linear in density in graphene, with a scattering magnitude that agrees quantitatively with theoretical estimates. While charged impurities cause intravalley scattering and induce a small change in the minimum conductivity, defects in graphene scatter electrons between the valleys and suppress the minimum conductivity below the metallic limit. Temperature-dependent measurements show that longitudinal acoustic phonons in graphene produce a small resistivity which is linear in temperature and independent of carrier density; at higher temperatures, polar optical phonons of the SiO2 substrate give rise to an activated, carrier density-dependent resistivity. Graphene is also made into a high-mobility transparent and flexible field effect device via the transfer-printing method. Together the results paint a complete picture of charge carrier transport in graphene on SiO2 in the diffusive regime, and show the promise of graphene as a novel electronic material that has potential applications not only on conventional inorganic substrates, but also on flexible substrates.

  12. Development and validation of a subject-specific finite element model of the functional spinal unit to predict vertebral strength.

    PubMed

    Lee, Chu-Hee; Landham, Priyan R; Eastell, Richard; Adams, Michael A; Dolan, Patricia; Yang, Lang

    2017-09-01

    Finite element models of an isolated vertebral body cannot accurately predict compressive strength of the spinal column because, in life, compressive load is variably distributed across the vertebral body and neural arch. The purpose of this study was to develop and validate a patient-specific finite element model of a functional spinal unit, and then use the model to predict vertebral strength from medical images. A total of 16 cadaveric functional spinal units were scanned and then tested mechanically in bending and compression to generate a vertebral wedge fracture. Before testing, an image processing and finite element analysis framework (SpineVox-Pro), developed previously in MATLAB using ANSYS APDL, was used to generate a subject-specific finite element model with eight-node hexahedral elements. Transversely isotropic linear-elastic material properties were assigned to vertebrae, and simple homogeneous linear-elastic properties were assigned to the intervertebral disc. Forward bending loading conditions were applied to simulate manual handling. Results showed that vertebral strengths measured by experiment were positively correlated with strengths predicted by the functional spinal unit finite element model with von Mises or Drucker-Prager failure criteria (R² = 0.80-0.87), with areal bone mineral density measured by dual-energy X-ray absorptiometry (R² = 0.54) and with volumetric bone mineral density from quantitative computed tomography (R² = 0.79). Large-displacement non-linear analyses on all specimens did not improve predictions. We conclude that subject-specific finite element models of a functional spinal unit have potential to estimate the vertebral strength better than bone mineral density alone.

  13. TU-FG-BRB-03: Basis Vector Model Based Method for Proton Stopping Power Estimation From Experimental Dual Energy CT Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S; Politte, D; O’Sullivan, J

    2016-06-15

    Purpose: This work aims at reducing the uncertainty in proton stopping power (SP) estimation by a novel combination of a linear, separable basis vector model (BVM) for stopping power calculation (Med Phys 43:600) and a statistical, model-based dual-energy CT (DECT) image reconstruction algorithm (TMI 35:685). The method was applied to experimental data. Methods: BVM assumes the photon attenuation coefficients, electron densities, and mean excitation energies (I-values) of unknown materials can be approximated by a combination of the corresponding quantities of two reference materials. The DECT projection data for a phantom with 5 different known materials was collected on a Philips Brilliance scanner using two scans at 90 kVp and 140 kVp. The line integral alternating minimization (LIAM) algorithm was used to recover the two BVM coefficient images using the measured source spectra. The proton stopping powers are then estimated from the Bethe-Bloch equation using electron densities and I-values derived from the BVM coefficients. The proton stopping powers and proton ranges for the phantom materials estimated via our BVM based DECT method are compared to ICRU reference values and a post-processing DECT analysis (Yang PMB 55:1343) applied to vendor-reconstructed images using the Torikoshi parametric fit model (tPFM). Results: For the phantom materials, the average stopping power estimations for 175 MeV protons derived from our method are within 1% of the ICRU reference values (except for Teflon with a 1.48% error), with an average standard deviation of 0.46% over pixels. The resultant proton ranges agree with the reference values within 2 mm. Conclusion: Our principled DECT iterative reconstruction algorithm, incorporating optimal beam hardening and scatter corrections, in conjunction with a simple linear BVM model, achieves more accurate and robust proton stopping power maps than the post-processing, nonlinear tPFM based DECT analysis applied to conventional reconstructions of low and high energy scans. Funding Support: NIH R01CA 75371; NCI grant R01 CA 149305.
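    The Bethe-Bloch step can be sketched as follows: given an electron density relative to water and an I-value (as derived from the BVM coefficients), the stopping-power ratio to water follows from the uncorrected Bethe stopping number. Shell and density-effect corrections are omitted, and the voxel values below are hypothetical:

```python
import math

ME_C2 = 0.511e6    # electron rest energy, eV
MP_C2 = 938.272e6  # proton rest energy, eV

def beta2(kinetic_ev):
    """Squared proton velocity (v/c)^2 from kinetic energy."""
    gamma = 1.0 + kinetic_ev / MP_C2
    return 1.0 - 1.0 / gamma**2

def stopping_number(b2, i_ev):
    """Bethe stopping number L = ln(2*me*c^2*b2 / ((1 - b2)*I)) - b2,
    the bracket of the Bethe-Bloch formula without corrections."""
    return math.log(2.0 * ME_C2 * b2 / ((1.0 - b2) * i_ev)) - b2

def spr_to_water(rel_electron_density, i_ev, kinetic_ev, i_water=75.0):
    """Stopping-power ratio to water from electron density and I-value."""
    b2 = beta2(kinetic_ev)
    return rel_electron_density * stopping_number(b2, i_ev) / stopping_number(b2, i_water)

# Hypothetical BVM-style outputs for a soft-tissue-like voxel at 175 MeV
spr = spr_to_water(rel_electron_density=1.05, i_ev=72.0, kinetic_ev=175e6)
print(spr)
```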

  14. Estimation of body density based on hydrostatic weighing without head submersion in young Japanese adults.

    PubMed

    Demura, S; Sato, S; Kitabayashi, T

    2006-06-01

    This study examined a method of predicting body density based on hydrostatic weighing without head submersion (HWwithoutHS). Donnelly and Sintek (1984) developed a method to predict body density based on hydrostatic weight without head submersion. This method predicts the difference (D) between HWwithoutHS and hydrostatic weight with head submersion (HWwithHS) from anthropometric variables (head length and head width), and then calculates body density using D as a correction factor. We developed several prediction equations to estimate D based on head anthropometry and differences between the sexes, and compared their prediction accuracy with Donnelly and Sintek's equation. Thirty-two males and 32 females aged 17-26 years participated in the study. Multiple linear regression analysis was performed to obtain the prediction equations, and the systematic errors of their predictions were assessed by Bland-Altman plots. The best prediction equations obtained were: Males: D(g) = -164.12X1 - 125.81X2 - 111.03X3 + 100.66X4 + 6488.63, where X1 = head length (cm), X2 = head circumference (cm), X3 = head breadth (cm), X4 = head thickness (cm) (R = 0.858, R² = 0.737, adjusted R² = 0.687, standard error of the estimate = 224.1); Females: D(g) = -156.03X1 - 14.03X2 - 38.45X3 - 8.87X4 + 7852.45, where X1 = head circumference (cm), X2 = body mass (g), X3 = head length (cm), X4 = height (cm) (R = 0.913, R² = 0.833, adjusted R² = 0.808, standard error of the estimate = 137.7). The effective predictors in these prediction equations differed from those of Donnelly and Sintek's equation, and head circumference and head length were included in both equations. The prediction accuracy was improved by statistically selecting effective predictors. Since we did not assess cross-validity, the equations cannot be used to generalize to other populations, and further investigation is required.
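    Evaluating the reported male equation is a straightforward linear combination. The head measurements below are hypothetical inputs for illustration only, and, per the abstract's own caveat, the equation should not be generalized to other populations:

```python
def predict_d_male(head_length_cm, head_circ_cm, head_breadth_cm, head_thickness_cm):
    """Male equation for D (g), the correction term between hydrostatic weight
    without and with head submersion, exactly as reported in the abstract."""
    return (-164.12 * head_length_cm - 125.81 * head_circ_cm
            - 111.03 * head_breadth_cm + 100.66 * head_thickness_cm + 6488.63)

# Hypothetical head measurements, purely to show the evaluation
d = predict_d_male(head_length_cm=23.0, head_circ_cm=57.0,
                   head_breadth_cm=15.5, head_thickness_cm=19.0)
print(d)
```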

  15. Azimuthal anisotropy distributions in high-energy collisions

    NASA Astrophysics Data System (ADS)

    Yan, Li; Ollitrault, Jean-Yves; Poskanzer, Arthur M.

    2015-03-01

    Elliptic flow in ultrarelativistic heavy-ion collisions results from the hydrodynamic response to the spatial anisotropy of the initial density profile. A long-standing problem in the interpretation of flow data is that uncertainties in the initial anisotropy are mingled with uncertainties in the response. We argue that the non-Gaussianity of flow fluctuations in small systems with large fluctuations can be used to disentangle the initial state from the response. We apply this method to recent measurements of anisotropic flow in Pb+Pb and p+Pb collisions at the LHC, assuming linear response to the initial anisotropy. The response coefficient is found to decrease as the system becomes smaller and is consistent with a low value of the ratio of viscosity over entropy of η / s ≃ 0.19. Deviations from linear response are studied. While they significantly change the value of the response coefficient they do not change the rate of decrease with centrality. Thus, we argue that the estimate of η / s is robust against non-linear effects.

  16. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.

  17. Causes of systematic over- or underestimation of low streamflows by use of index-streamgage approaches in the United States

    USGS Publications Warehouse

    Eng, K.; Kiang, J.E.; Chen, Y.-Y.; Carlisle, D.M.; Granato, G.E.

    2011-01-01

    Low-flow characteristics can be estimated by multiple linear regressions or the index-streamgage approach. The latter transfers streamflow information from a hydrologically similar, continuously gaged basin ('index streamgage') to one with a very limited streamflow record, but often results in biased estimates. The application of the index-streamgage approach can be generalized into three steps: (1) selection of streamflow information of interest, (2) definition of hydrologic similarity and selection of index streamgage, and (3) application of an information-transfer approach. Here, we explore the effects of (1) the range of streamflow values, (2) the areal density of streamgages, and (3) index-streamgage selection criteria on the bias of three information-transfer approaches for estimating the 7-day, 10-year minimum streamflow (Q7,10). The three information-transfer approaches considered are maintenance of variance extension, base-flow correlation, and ratio of measured to concurrent gaged streamflow (Q-ratio invariance). Our results for 1120 streamgages throughout the United States suggest that only a small portion of the total bias in estimated streamflow values is explained by the areal density of the streamgages and the hydrologic similarity between the two basins. However, restricting the range of streamflow values used in the index-streamgage approach substantially reduces the bias of estimated Q7,10 values. Importantly, estimated Q7,10 values are heavily biased when the observed Q7,10 values are near zero. Results of the analysis also showed that Q7,10 estimates from two of the three index-streamgage approaches have lower root-mean-square error values than estimates derived from multiple regressions for the large regions considered in this study.
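
    The simplest of the three transfer approaches, Q-ratio invariance, can be sketched as follows (a minimal illustration; the measured flows and the index gage's Q7,10 below are hypothetical):

```python
def q_ratio_estimate(partial_q, index_concurrent_q, index_q710):
    """Q-ratio invariance: assume the ratio of flow measured at the
    partial-record site to concurrent flow at the index streamgage is
    constant, and apply that ratio to the index gage's Q7,10."""
    ratios = [p / c for p, c in zip(partial_q, index_concurrent_q)]
    mean_ratio = sum(ratios) / len(ratios)
    return mean_ratio * index_q710

# Three concurrent low-flow measurements (m^3/s) and the index gage's Q7,10:
est_q710 = q_ratio_estimate([1.2, 0.9, 1.5], [2.4, 1.8, 3.0], 0.8)
```

    The bias the study documents enters precisely where this sketch is weakest: when the true ratio varies with flow level, or when observed flows sit near zero.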

  18. Temporal variations of potential fecundity of southern blue whiting (Micromesistius australis australis) in the Southeast Pacific

    NASA Astrophysics Data System (ADS)

    Flores, Andrés; Wiff, Rodrigo; Díaz, Eduardo; Carvajal, Bernardita

    2017-08-01

    Fecundity is a key aspect of fish reproductive biology because it relates directly to total egg production. Yet, despite such importance, fecundity estimates are lacking or scarce for many fish species. The gravimetric method is the most widely used for estimating fecundity, essentially scaling oocyte density up to ovary weight. It is a relatively simple and precise technique, but also time consuming, because it requires counting all oocytes in an ovary subsample. The auto-diametric method, on the other hand, is a relatively new and rapid alternative, because it requires only an estimate of mean oocyte density derived from mean oocyte diameter. Using the extensive database available from the commercial fishery and design surveys for southern blue whiting Micromesistius australis australis in the Southeast Pacific, we compared estimates of fecundity from the gravimetric and auto-diametric methods. Temporal variations in potential fecundity from the auto-diametric method were evaluated using generalised linear models with predictors drawn from maternal characteristics such as female size, condition factor, oocyte size, and gonadosomatic index. A global, time-invariant auto-diametric equation was evaluated using a simulation procedure based on non-parametric bootstrap. Results indicated no significant differences between fecundity estimates from the gravimetric and auto-diametric methods (p > 0.05). Simulations showed that applying a global equation is unbiased and sufficiently precise to estimate time-invariant fecundity for this species. Temporal variations in fecundity were explained by maternal characteristics, revealing signals of fecundity down-regulation. We discuss how oocyte size and nutritional condition (measured as condition factor) are among the important factors determining fecundity. We also highlight the relevance of choosing an appropriate sampling period to conduct maturity studies and ensure precise estimates of fecundity for this species.
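
    The gravimetric method's core step, scaling subsample oocyte density up to whole-ovary weight, is a one-liner (a sketch with hypothetical counts and weights):

```python
def gravimetric_fecundity(subsample_count, subsample_weight_g, ovary_weight_g):
    """Gravimetric estimate of potential fecundity: count oocytes in a
    weighed ovary subsample and scale the density to the whole ovary."""
    oocytes_per_gram = subsample_count / subsample_weight_g
    return oocytes_per_gram * ovary_weight_g

# Hypothetical: 500 oocytes counted in a 0.25 g subsample of a 40 g ovary.
fecundity = gravimetric_fecundity(500, 0.25, 40.0)
```

    The auto-diametric method replaces the count with a calibration curve predicting oocyte density from mean oocyte diameter, which is why it is so much faster.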

  19. Trend analysis of tropospheric NO2 column density over East Asia during 2000-2010: multi-satellite observations and model simulations with the updated REAS emission inventory

    NASA Astrophysics Data System (ADS)

    Itahashi, S.; Uno, I.; Irie, H.; Kurokawa, J.; Ohara, T.

    2013-04-01

    Satellite observations of the tropospheric NO2 vertical column density (VCD) are closely correlated with surface NOx emissions and can thus be used to estimate the latter. In this study, the NO2 VCDs simulated by a regional chemical transport model with data from the updated Regional Emission inventory in ASia (REAS) version 2.1 were validated by comparison with multi-satellite observations (GOME, SCIAMACHY, GOME-2, and OMI) between 2000 and 2010. Rapid growth in NO2 VCD driven by expansion of anthropogenic NOx emissions was revealed above the central eastern China region, except during the economic downturn. In contrast, slightly decreasing trends were captured above Japan. The modeled NO2 VCDs using the updated REAS emissions reasonably reproduced the annual trends observed by the satellites, suggesting that the NOx emissions growth rate estimated by the updated inventory is robust. On the basis of the close linear relationship among modeled NO2 VCD, observed NO2 VCD, and anthropogenic NOx emissions, NOx emissions in 2009 and 2010 were estimated. Anthropogenic NOx emissions in China were estimated to have more than doubled between 2000 and 2010, reflecting the strong growth of anthropogenic emissions in China with the rapid recovery from the economic downturn of late 2008 to mid-2009.
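
    The emission estimate for years outside the inventory rests on this linear VCD-emissions relationship; in its simplest form (a sketch only; the numbers are hypothetical, not REAS values):

```python
def scale_emissions(base_emissions, observed_vcd, modeled_vcd):
    """If modeled NO2 VCD responds linearly to the NOx emissions driving
    it, the observed-to-modeled column ratio scales base-year emissions
    to an estimate for a year not covered by the inventory."""
    return base_emissions * (observed_vcd / modeled_vcd)

# Hypothetical: base-year inventory emissions of 24 units, with observed
# columns 30% above what the model produces using that inventory.
e_est = scale_emissions(24.0, 13.0, 10.0)
```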

  20. Effects of Buffer Size and Shape on Associations between the Built Environment and Energy Balance

    PubMed Central

    Berrigan, David; Hart, Jaime E.; Hipp, J. Aaron; Hoehner, Christine M.; Kerr, Jacqueline; Major, Jacqueline M.; Oka, Masayoshi; Laden, Francine

    2014-01-01

    Uncertainty in the relevant spatial context may drive heterogeneity in findings on the built environment and energy balance. To estimate the effect of this uncertainty, we conducted a sensitivity analysis of associations of intersection and business densities and counts, defined within different buffer sizes and shapes, with self-reported walking and body mass index. Linear regression results indicated that the scale and shape of buffers influenced study results and may partly explain the inconsistent findings in the built environment and energy balance literature. PMID:24607875

  1. Electronic transport coefficients from ab initio simulations and application to dense liquid hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holst, Bastian; French, Martin; Redmer, Ronald

    2011-06-15

    Using Kubo's linear response theory, we derive expressions for the frequency-dependent electrical conductivity (Kubo-Greenwood formula), thermopower, and thermal conductivity in a strongly correlated electron system. These are evaluated within ab initio molecular dynamics simulations in order to study the thermoelectric transport coefficients in dense liquid hydrogen, especially near the nonmetal-to-metal transition region. We also observe significant deviations from the widely used Wiedemann-Franz law, which is strictly valid only for degenerate systems, and give an estimate for its valid scope of application toward lower densities.

  2. The costs of evaluating species densities and composition of snakes to assess development impacts in Amazonia.

    PubMed

    Fraga, Rafael de; Stow, Adam J; Magnusson, William E; Lima, Albertina P

    2014-01-01

    Studies leading to decision-making for environmental licensing often fail to provide accurate estimates of diversity. Measures of snake diversity are regularly obtained to assess development impacts in the rainforests of the Amazon Basin, but this taxonomic group may be subject to poor detection probabilities. Recently, the Brazilian government tried to standardize sampling designs by the implementation of a system (RAPELD) to quantify biological diversity using spatially-standardized sampling units. Consistency in sampling design allows the detection probabilities to be compared among taxa, and sampling effort and associated cost to be evaluated. The cost effectiveness of detecting snakes has received no attention in Amazonia. Here we tested the effects of reducing sampling effort on estimates of species densities and assemblage composition. We identified snakes in seven plot systems, each standardised with 14 plots. The 250 m long centre line of each plot followed an altitudinal contour. Surveys were repeated four times in each plot and detection probabilities were estimated for the 41 species encountered. Reducing the number of observations, or the size of the sampling modules, caused significant loss of information on species densities and local patterns of variation in assemblage composition. We estimated the cost to find a snake as US$120, but general linear models indicated the possibility of identifying differences in assemblage composition for half the overall survey costs. Decisions to reduce sampling effort depend on the importance of lost information to target issues, and may not be the preferred option if there is the potential for identifying individual snake species requiring specific conservation actions. However, in most studies of human disturbance on species assemblages, it is likely to be more cost-effective to focus on other groups of organisms with higher detection probabilities.

  3. The Costs of Evaluating Species Densities and Composition of Snakes to Assess Development Impacts in Amazonia

    PubMed Central

    de Fraga, Rafael; Stow, Adam J.; Magnusson, William E.; Lima, Albertina P.

    2014-01-01

    Studies leading to decision-making for environmental licensing often fail to provide accurate estimates of diversity. Measures of snake diversity are regularly obtained to assess development impacts in the rainforests of the Amazon Basin, but this taxonomic group may be subject to poor detection probabilities. Recently, the Brazilian government tried to standardize sampling designs by the implementation of a system (RAPELD) to quantify biological diversity using spatially-standardized sampling units. Consistency in sampling design allows the detection probabilities to be compared among taxa, and sampling effort and associated cost to be evaluated. The cost effectiveness of detecting snakes has received no attention in Amazonia. Here we tested the effects of reducing sampling effort on estimates of species densities and assemblage composition. We identified snakes in seven plot systems, each standardised with 14 plots. The 250 m long centre line of each plot followed an altitudinal contour. Surveys were repeated four times in each plot and detection probabilities were estimated for the 41 species encountered. Reducing the number of observations, or the size of the sampling modules, caused significant loss of information on species densities and local patterns of variation in assemblage composition. We estimated the cost to find a snake as US$120, but general linear models indicated the possibility of identifying differences in assemblage composition for half the overall survey costs. Decisions to reduce sampling effort depend on the importance of lost information to target issues, and may not be the preferred option if there is the potential for identifying individual snake species requiring specific conservation actions. However, in most studies of human disturbance on species assemblages, it is likely to be more cost-effective to focus on other groups of organisms with higher detection probabilities. PMID:25147930

  4. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
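
    The basic estimator the paper builds on, here in its self-normalized single-chain form with iid draws standing in for the Harris ergodic chain, can be sketched as follows (the Gaussian target and proposal are hypothetical):

```python
import math
import random

def importance_sampling(h, log_pi, log_pi1, draws):
    """Self-normalized importance sampling: estimate E_pi[h(X)] from
    draws generated under pi1, weighting by pi/pi1 (unnormalized log
    densities suffice because the weights are normalized)."""
    log_w = [log_pi(x) - log_pi1(x) for x in draws]
    m = max(log_w)                       # stabilize the exponentials
    w = [math.exp(lw - m) for lw in log_w]
    return sum(wi * h(xi) for wi, xi in zip(w, draws)) / sum(w)

random.seed(0)
# Target pi = N(1, 1); proposal pi1 = normal with mean 0, sd 2.
# Estimate E_pi[X], whose true value is 1.
draws = [random.gauss(0.0, 2.0) for _ in range(200_000)]
log_pi = lambda x: -0.5 * (x - 1.0) ** 2
log_pi1 = lambda x: -0.5 * (x / 2.0) ** 2
estimate = importance_sampling(lambda x: x, log_pi, log_pi1, draws)
```

    The paper's contribution concerns what this sketch glosses over: valid standard errors when the draws come from (several) Markov chains rather than iid sampling.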

  5. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.

  6. Reproducibility of MRI-Determined Proton Density Fat Fraction Across Two Different MR Scanner Platforms

    PubMed Central

    Kang, Geraldine H.; Cruite, Irene; Shiehmorteza, Masoud; Wolfson, Tanya; Gamst, Anthony C.; Hamilton, Gavin; Bydder, Mark; Middleton, Michael S.; Sirlin, Claude B.

    2016-01-01

    Purpose To evaluate magnetic resonance imaging (MRI)-determined proton density fat fraction (PDFF) reproducibility across two MR scanner platforms and, using MR spectroscopy (MRS)-determined PDFF as reference standard, to confirm MRI-determined PDFF estimation accuracy. Materials and Methods This prospective, cross-sectional, crossover, observational pilot study was approved by an Institutional Review Board. Twenty-one subjects gave written informed consent and underwent liver MRI and MRS at both 1.5T (Siemens Symphony scanner) and 3T (GE Signa Excite HD scanner). MRI-determined PDFF was estimated using an axial 2D spoiled gradient-recalled echo sequence with low flip-angle to minimize T1 bias and six echo times to permit correction of T2* and fat-water signal interference effects. MRS-determined PDFF was estimated using a stimulated-echo acquisition mode sequence with long repetition time to minimize T1 bias and five echo times to permit T2 correction. Interscanner reproducibility of MRI-determined PDFF was assessed by correlation analysis; accuracy was assessed separately at each field strength by linear regression analysis using MRS-determined PDFF as reference standard. Results 1.5T and 3T MRI-determined PDFF estimates were highly correlated (r = 0.992). MRI-determined PDFF estimates were accurate at both 1.5T (regression slope/intercept = 0.958/−0.48) and 3T (slope/intercept = 1.020/0.925) against the MRS-determined PDFF reference. Conclusion MRI-determined PDFF estimation is reproducible and, using MRS-determined PDFF as reference standard, accurate across two MR scanner platforms at 1.5T and 3T. PMID:21769986

  7. Reproducibility of MRI-determined proton density fat fraction across two different MR scanner platforms.

    PubMed

    Kang, Geraldine H; Cruite, Irene; Shiehmorteza, Masoud; Wolfson, Tanya; Gamst, Anthony C; Hamilton, Gavin; Bydder, Mark; Middleton, Michael S; Sirlin, Claude B

    2011-10-01

    To evaluate magnetic resonance imaging (MRI)-determined proton density fat fraction (PDFF) reproducibility across two MR scanner platforms and, using MR spectroscopy (MRS)-determined PDFF as reference standard, to confirm MRI-determined PDFF estimation accuracy. This prospective, cross-sectional, crossover, observational pilot study was approved by an Institutional Review Board. Twenty-one subjects gave written informed consent and underwent liver MRI and MRS at both 1.5T (Siemens Symphony scanner) and 3T (GE Signa Excite HD scanner). MRI-determined PDFF was estimated using an axial 2D spoiled gradient-recalled echo sequence with low flip-angle to minimize T1 bias and six echo times to permit correction of T2* and fat-water signal interference effects. MRS-determined PDFF was estimated using a stimulated-echo acquisition mode sequence with long repetition time to minimize T1 bias and five echo times to permit T2 correction. Interscanner reproducibility of MRI-determined PDFF was assessed by correlation analysis; accuracy was assessed separately at each field strength by linear regression analysis using MRS-determined PDFF as reference standard. 1.5T and 3T MRI-determined PDFF estimates were highly correlated (r = 0.992). MRI-determined PDFF estimates were accurate at both 1.5T (regression slope/intercept = 0.958/-0.48) and 3T (slope/intercept = 1.020/0.925) against the MRS-determined PDFF reference. MRI-determined PDFF estimation is reproducible and, using MRS-determined PDFF as reference standard, accurate across two MR scanner platforms at 1.5T and 3T. Copyright © 2011 Wiley-Liss, Inc.

  8. Volumetric mammographic density: heritability and association with breast cancer susceptibility loci.

    PubMed

    Brand, Judith S; Humphreys, Keith; Thompson, Deborah J; Li, Jingmei; Eriksson, Mikael; Hall, Per; Czene, Kamila

    2014-12-01

    Mammographic density is a strongly heritable trait, but data on its genetic component are limited to area-based and qualitative measures. We studied the heritability of volumetric mammographic density ascertained by a fully-automated method and the association with breast cancer susceptibility loci. Heritability of volumetric mammographic density was estimated with a variance component model in a sib-pair sample (N pairs = 955) of a Swedish screening-based cohort. Associations with 82 established breast cancer loci were assessed in an independent sample of the same cohort (N = 4025 unrelated women) using linear models, adjusting for age, body mass index, and menopausal status. All tests were two-sided, except for heritability analyses where one-sided tests were used. After multivariable adjustment, heritability estimates (standard error) for percent dense volume, absolute dense volume, and absolute nondense volume were 0.63 (0.06), 0.43 (0.06), and 0.61 (0.06), respectively (all P < .001). Percent and absolute dense volume were associated with rs10995190 (ZNF365; P = 9.0 × 10(-6) and 8.9 × 10(-7), respectively) and rs9485372 (TAB2; P = 1.8 × 10(-5) and 1.8 × 10(-3), respectively). We also observed associations of rs9383938 (ESR1) and rs2046210 (ESR1) with the absolute dense volume (P = 2.6 × 10(-4) and 4.6 × 10(-4), respectively), and rs6001930 (MLK1) and rs17356907 (NTN4) with the absolute nondense volume (P = 6.7 × 10(-6) and 8.4 × 10(-5), respectively). Our results support the high heritability of mammographic density, though estimates are weaker for absolute than percent dense volume. We also demonstrate that the shared genetic component with breast cancer is not restricted to dense tissues only. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Vocalization behavior and response of black rails

    USGS Publications Warehouse

    Legare, M.L.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis) (n = 43, 26 males, 17 females) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used regression coefficients from logistic regression equations to model the probability of a response conditional on the birds' sex, nesting status, distance to playback source, and the time of survey. With a probability of 0.811, non-nesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). Linear regression was used to determine daily, monthly, and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the linear regression model were month (F = 3.89, df = 3, p = 0.0140), year (F = 9.37, df = 2, p = 0.0003), temperature (F = 5.44, df = 1, p = 0.0236), and month*year (F = 2.69, df = 5, p = 0.0311). The model was highly significant (p < 0.0001) and explained 53% of the variation of mean response per survey period (R2 = 0.5353). Response probability data obtained from the radio-tagged black rails and data from the weekly playback survey route were combined to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. Density estimates for black rails may be obtained from playback surveys and fixed-radius circular plots. Circular plots should be considered as having a radius of 80 m and be located so the plot centers are 150 m apart. Playback tapes should contain one series of Kic-kic-kerr and Growl vocalizations recorded within the same geographic region as the study area. Surveys should be conducted from 0-2 hours after sunrise or 0-2 hours before sunset, during the pre-nesting season, and when wind velocity is < 20 kph. 
    Observers should listen for 3-4 minutes after playing the survey tape and record responses heard during that time. Observers should be trained to identify black rail vocalizations and should have acceptable hearing ability. Given the number of variables that may have large effects on the response behavior of black rails to tape playback, we recommend that future studies using playback surveys be cautious when presenting estimates of 'absolute' density. Though our results did account for variation in response behavior, we believe that additional variation in vocal response among sites, breeding statuses, and bird densities remains in question. Playback surveys along fixed routes providing a simple index of abundance would be useful to monitor populations over large geographic areas, and over time. Considering the limitations of most agency resources for webless waterbirds, index surveys may be more appropriate. Future telemetry studies of this type on other species and at other sites would be useful to calibrate information obtained from playback surveys, whether reporting an index of abundance or a density estimate.

  10. PET Pharmacokinetic Modelling

    NASA Astrophysics Data System (ADS)

    Müller-Schauenburg, Wolfgang; Reimold, Matthias

    Positron Emission Tomography is a well-established technique that allows imaging and quantification of tissue properties in vivo. The goal of pharmacokinetic modelling is to estimate physiological parameters, e.g. perfusion or receptor density, from the measured time course of a radiotracer. After a brief overview of clinical applications of PET, we summarize the fundamentals of modelling: distribution volume, Fick's principle of local balancing, extraction and perfusion, and how to calculate equilibrium data from measurements after bolus injection. Three fundamental models are considered: (i) the 1-tissue compartment model, e.g. for regional cerebral blood flow (rCBF) with the short-lived tracer [15O]water, (ii) the 2-tissue compartment model accounting for trapping (one exponential + constant), e.g. for glucose metabolism with [18F]FDG, and (iii) the reversible 2-tissue compartment model (two exponentials), e.g. for receptor binding. Arterial blood sampling is required for classical PET modelling, but can often be avoided by comparing regions with specific binding to so-called reference regions with negligible specific uptake, e.g. in receptor imaging. To estimate the model parameters, non-linear least-squares fits are the standard. Various linearizations have been proposed for rapid parameter estimation, e.g. on a pixel-by-pixel basis, at the price of a bias. Such linear approaches exist for all three models; e.g. the PATLAK-plot for trapping substances like FDG, and the LOGAN-plot to obtain distribution volumes for reversibly binding tracers. The description of receptor modelling is dedicated to the approaches of the subsequent lecture (chapter) of Millet, who works in the tradition of Delforge with multiple-injection investigations.
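
    The Patlak linearization mentioned above reduces parameter estimation to a straight-line fit once uptake is effectively irreversible (a sketch with synthetic, exactly linear data, so the fit should recover the values used to build it):

```python
def patlak_fit(ct, cp, cp_integral, t_star_index):
    """Patlak plot for irreversibly trapped tracers such as [18F]FDG:
    regress Ct(t)/Cp(t) on (integral of Cp)/Cp(t) for t >= t*; the slope
    is the net influx constant Ki, the intercept the initial volume V0."""
    x = [ic / c for ic, c in zip(cp_integral, cp)][t_star_index:]
    y = [tc / c for tc, c in zip(ct, cp)][t_star_index:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    ki = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
         sum((a - mx) ** 2 for a in x)
    return ki, my - ki * mx

# Synthetic data: constant plasma input, tissue curve built with
# Ki = 0.05 and V0 = 0.5.
cp = [1.0] * 10
cp_int = [float(t) for t in range(10)]
ct = [0.05 * t + 0.5 for t in range(10)]
ki, v0 = patlak_fit(ct, cp, cp_int, t_star_index=3)
```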

  11. Technical Note: Statistical dependences between channels in radiochromic film readings. Implications in multichannel dosimetry.

    PubMed

    González-López, Antonio; Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen

    2016-05-01

    This note studies the statistical relationships between color channels in radiochromic film readings with flatbed scanners. The same relationships are studied for noise. Finally, their implications for multichannel film dosimetry are discussed. Radiochromic films exposed to wedged fields of 6 MV energy were read in a flatbed scanner. The joint histograms of pairs of color channels were used to obtain the joint and conditional probability density functions between channels. Then, the conditional expectations and variances of one channel given another channel were obtained. Noise was extracted from film readings by means of a multiresolution analysis. Two different dose ranges were analyzed, the first one ranging from 112 to 473 cGy and the second one from 52 to 1290 cGy. For the smallest dose range, the conditional expectations of one channel given another channel can be approximated by linear functions, while the conditional variances are fairly constant. The slopes of the linear relationships between channels can be used to simplify the expression that estimates the dose by means of the multichannel method. The slopes of the linear relationships between each channel and the red one can also be interpreted as weights in the final contribution to dose estimation. However, for the largest dose range, the conditional expectations of one channel given another channel are no longer linear functions. Finally, noises in different channels were found to correlate weakly. Signals present in different channels of radiochromic film readings show a strong statistical dependence. By contrast, noise correlates weakly between channels. For the smallest dose range analyzed, the linear behavior between the conditional expectation of one channel given another channel can be used to simplify calculations in multichannel film dosimetry.
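
    In the low-dose regime the note describes, the slope of the (approximately linear) conditional expectation of one channel given another is an ordinary least-squares fit (a sketch with hypothetical pixel values, not the authors' data):

```python
def fit_line(x, y):
    """Least-squares slope and intercept of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical mean pixel values from the red and green channels over
# film patches in the low-dose range, where E[green | red] is roughly
# linear; the slope acts as the green channel's weight relative to red
# in a multichannel dose estimate.
red = [100.0, 120.0, 140.0, 160.0]
green = [90.0, 106.0, 122.0, 138.0]
slope, intercept = fit_line(red, green)
```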

  12. A land use regression model for ambient ultrafine particles in Montreal, Canada: A comparison of linear regression and a machine learning approach.

    PubMed

    Weichenthal, Scott; Ryswyk, Keith Van; Goldstein, Alon; Bagg, Scott; Shekkarizfard, Maryam; Hatzopoulou, Marianne

    2016-04-01

    Existing evidence suggests that ambient ultrafine particles (UFPs) (<0.1 µm) may contribute to acute cardiorespiratory morbidity. However, few studies have examined the long-term health effects of these pollutants owing in part to a need for exposure surfaces that can be applied in large population-based studies. To address this need, we developed a land use regression model for UFPs in Montreal, Canada using mobile monitoring data collected from 414 road segments during the summer and winter months between 2011 and 2012. Two different approaches were examined for model development including standard multivariable linear regression and a machine learning approach (kernel-based regularized least squares (KRLS)) that learns the functional form of covariate impacts on ambient UFP concentrations from the data. The final models included parameters for population density, ambient temperature and wind speed, land use parameters (park space and open space), length of local roads and rail, and estimated annual average NOx emissions from traffic. The final multivariable linear regression model explained 62% of the spatial variation in ambient UFP concentrations whereas the KRLS model explained 79% of the variance. The KRLS model performed slightly better than the linear regression model when evaluated using an external dataset (R² = 0.58 vs. 0.55) or a cross-validation procedure (R² = 0.67 vs. 0.60). In general, our findings suggest that the KRLS approach may offer modest improvements in predictive performance compared to standard multivariable linear regression models used to estimate spatial variations in ambient UFPs. However, differences in predictive performance were not statistically significant when evaluated using the cross-validation procedure. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
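
    The KRLS idea, learning the functional form from the data by solving a regularized kernel system, can be sketched in a single covariate (a toy illustration, not the authors' model; the kernel width, penalty, and data are hypothetical):

```python
import math

def rbf(u, v, gamma=0.5):
    """Gaussian (RBF) kernel in one dimension."""
    return math.exp(-gamma * (u - v) ** 2)

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def krls_fit(xs, ys, lam=1e-4, gamma=0.5):
    """Kernel regularized least squares: alpha = (K + lam*I)^(-1) y,
    with prediction f(x) = sum_i alpha_i * k(x, x_i)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return lambda x: sum(a * rbf(x, xi, gamma) for a, xi in zip(alpha, xs))

# Hypothetical nonlinear covariate-response relationship that a straight
# line would miss but the kernel fit captures.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]
f = krls_fit(xs, ys)
```

    Unlike the linear model, nothing here commits to a functional form in advance; the penalty `lam` plays the role of regularization, trading fit against smoothness.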

  13. Technical Note: Statistical dependences between channels in radiochromic film readings. Implications in multichannel dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    González-López, Antonio, E-mail: antonio.gonzalez7@carm.es; Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen

    Purpose: This note studies the statistical relationships between color channels in radiochromic film readings with flatbed scanners. The same relationships are studied for noise. Finally, their implications for multichannel film dosimetry are discussed. Methods: Radiochromic films exposed to wedged fields of 6 MV energy were read in a flatbed scanner. The joint histograms of pairs of color channels were used to obtain the joint and conditional probability density functions between channels. Then, the conditional expectations and variances of one channel given another channel were obtained. Noise was extracted from film readings by means of a multiresolution analysis. Two different dose ranges were analyzed, the first one ranging from 112 to 473 cGy and the second one from 52 to 1290 cGy. Results: For the smallest dose range, the conditional expectations of one channel given another channel can be approximated by linear functions, while the conditional variances are fairly constant. The slopes of the linear relationships between channels can be used to simplify the expression that estimates the dose by means of the multichannel method. The slopes of the linear relationships between each channel and the red one can also be interpreted as weights in the final contribution to dose estimation. However, for the largest dose range, the conditional expectations of one channel given another channel are no longer linear functions. Finally, noise in different channels was found to correlate weakly. Conclusions: Signals present in different channels of radiochromic film readings show a strong statistical dependence. By contrast, noise correlates weakly between channels. For the smallest dose range analyzed, the linear behavior of the conditional expectation of one channel given another can be used to simplify calculations in multichannel film dosimetry.
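    The joint-histogram machinery described in the Methods can be sketched in NumPy. The channel values below are synthetic, not scanner readings; a linear inter-channel relation with slope 0.6 is built in so the recovery can be checked.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "red" channel values and a "green" channel depending on red
# linearly (slope 0.6) plus independent noise.
red = rng.uniform(20000, 50000, size=200000)
green = 0.6 * red + 12000 + rng.normal(scale=400, size=red.size)

# Joint histogram -> conditional expectation of green given red.
H, red_edges, green_edges = np.histogram2d(red, green, bins=100)
green_mid = 0.5 * (green_edges[:-1] + green_edges[1:])
row_sums = H.sum(axis=1)
ok = row_sums > 0                      # red bins that received counts
cond_E = (H[ok] * green_mid).sum(axis=1) / row_sums[ok]

# The conditional expectation should be close to a straight line;
# recover its slope by least squares over the red bin centres.
red_mid = 0.5 * (red_edges[:-1] + red_edges[1:])[ok]
slope, intercept = np.polyfit(red_mid, cond_E, 1)
print(f"recovered slope ~ {slope:.3f}")
```

    The recovered slope plays the role of the inter-channel weights discussed in the note; with real film readings, departures of the conditional expectation from a straight line would signal the breakdown seen at the larger dose range.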

  14. Wood Specific Gravity Variations and Biomass of Central African Tree Species: The Simple Choice of the Outer Wood

    PubMed Central

    Bastin, Jean-François; Fayolle, Adeline; Tarelkin, Yegor; Van den Bulcke, Jan; de Haulleville, Thales; Mortier, Frederic; Beeckman, Hans; Van Acker, Joris; Serckx, Adeline; Bogaert, Jan; De Cannière, Charles

    2015-01-01

    Context Wood specific gravity is a key element in tropical forest ecology. It integrates many aspects of tree mechanical properties and functioning and is an important predictor of tree biomass. Wood specific gravity varies widely among and within species and also within individual trees. Notably, contrasting patterns of radial variation in wood specific gravity have been demonstrated and related to regeneration guilds (light-demanding vs. shade-bearing). However, although repeatedly invoked as a potential source of error when estimating the biomass of trees, both intraspecific and radial variations remain little studied. In this study we characterized detailed pith-to-bark wood specific gravity profiles among contrasting species that contribute prominently to the biomass of the forest, i.e., the dominant species, and we quantified the consequences of such variations for biomass estimates. Methods Radial profiles of wood density at 8% moisture content were compiled for 14 dominant species in the Democratic Republic of Congo, adapting a 3D X-ray scanning technique at very high spatial resolution to core samples. Mean wood density estimates were validated by water displacement measurements. Wood density profiles were converted to wood specific gravity, and linear mixed models were used to decompose the radial variance. Potential errors in biomass estimation were assessed by comparing the biomass estimated from the wood specific gravity measured from pith-to-bark profiles, from global repositories, and from partial information (outer wood or inner wood). Results Pith-to-bark wood specific gravity profiles presented positive, neutral and negative trends. Positive trends mainly characterized light-demanding species, increasing by up to 1.8 g.cm-3 per meter for Piptadeniastrum africanum, and negative trends characterized shade-bearing species, decreasing by up to 1 g.cm-3 per meter for Strombosia pustulata. The linear mixed model showed that the greater part of the variance in wood specific gravity was explained by species alone (45%), followed by a part shared between species and regeneration guilds (36%). Despite substantial variation in wood specific gravity profiles among species and regeneration guilds, we found that values from the outer wood were strongly correlated with values from the whole profile, without any significant bias. In addition, we found that wood specific gravity values from the DRYAD global repository may differ strongly depending on the species (by up to 40% for Dialium pachyphyllum). Main Conclusion When estimating forest biomass at specific sites, we therefore recommend the systematic collection of outer wood samples from dominant species. This should prevent the main errors in biomass estimation resulting from wood specific gravity and allow the collection of new information to explore the intraspecific variation of the mechanical properties of trees. PMID:26555144
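    The outer-wood check can be sketched numerically: simulate pith-to-bark specific-gravity profiles with species-specific radial trends (positive or negative, as in the study), then correlate the outer-wood mean with the whole-profile mean. All values here are synthetic, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trees, n_points = 140, 200          # trees, radial sample points each
r = np.linspace(0.0, 1.0, n_points)   # relative position, pith (0) to bark (1)

# Each tree: baseline specific gravity plus a radial trend that can be
# positive (light-demanding) or negative (shade-bearing), plus noise.
base = rng.uniform(0.4, 0.9, n_trees)
trend = rng.uniform(-0.2, 0.2, n_trees)
profiles = base[:, None] + trend[:, None] * (r - 0.5) \
    + rng.normal(scale=0.02, size=(n_trees, n_points))

whole_mean = profiles.mean(axis=1)
outer_mean = profiles[:, int(0.8 * n_points):].mean(axis=1)  # outermost 20%

corr = np.corrcoef(whole_mean, outer_mean)[0, 1]
bias = (outer_mean - whole_mean).mean()
print(f"r = {corr:.3f}, mean bias = {bias:+.4f}")
```

    Because the simulated trends are symmetric around zero, the outer-wood mean tracks the whole-profile mean closely and is nearly unbiased on average, which is the pattern the study reports for its dominant species.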

  15. Wood Specific Gravity Variations and Biomass of Central African Tree Species: The Simple Choice of the Outer Wood.

    PubMed

    Bastin, Jean-François; Fayolle, Adeline; Tarelkin, Yegor; Van den Bulcke, Jan; de Haulleville, Thales; Mortier, Frederic; Beeckman, Hans; Van Acker, Joris; Serckx, Adeline; Bogaert, Jan; De Cannière, Charles

    2015-01-01

    Wood specific gravity is a key element in tropical forest ecology. It integrates many aspects of tree mechanical properties and functioning and is an important predictor of tree biomass. Wood specific gravity varies widely among and within species and also within individual trees. Notably, contrasting patterns of radial variation in wood specific gravity have been demonstrated and related to regeneration guilds (light-demanding vs. shade-bearing). However, although repeatedly invoked as a potential source of error when estimating the biomass of trees, both intraspecific and radial variations remain little studied. In this study we characterized detailed pith-to-bark wood specific gravity profiles among contrasting species that contribute prominently to the biomass of the forest, i.e., the dominant species, and we quantified the consequences of such variations for biomass estimates. Radial profiles of wood density at 8% moisture content were compiled for 14 dominant species in the Democratic Republic of Congo, adapting a 3D X-ray scanning technique at very high spatial resolution to core samples. Mean wood density estimates were validated by water displacement measurements. Wood density profiles were converted to wood specific gravity, and linear mixed models were used to decompose the radial variance. Potential errors in biomass estimation were assessed by comparing the biomass estimated from the wood specific gravity measured from pith-to-bark profiles, from global repositories, and from partial information (outer wood or inner wood). Pith-to-bark wood specific gravity profiles presented positive, neutral and negative trends. Positive trends mainly characterized light-demanding species, increasing by up to 1.8 g.cm-3 per meter for Piptadeniastrum africanum, and negative trends characterized shade-bearing species, decreasing by up to 1 g.cm-3 per meter for Strombosia pustulata. The linear mixed model showed that the greater part of the variance in wood specific gravity was explained by species alone (45%), followed by a part shared between species and regeneration guilds (36%). Despite substantial variation in wood specific gravity profiles among species and regeneration guilds, we found that values from the outer wood were strongly correlated with values from the whole profile, without any significant bias. In addition, we found that wood specific gravity values from the DRYAD global repository may differ strongly depending on the species (by up to 40% for Dialium pachyphyllum). When estimating forest biomass at specific sites, we therefore recommend the systematic collection of outer wood samples from dominant species. This should prevent the main errors in biomass estimation resulting from wood specific gravity and allow the collection of new information to explore the intraspecific variation of the mechanical properties of trees.

  16. Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests

    NASA Astrophysics Data System (ADS)

    Shumway, R. H.

    2001-10-01

    The nuclear tests in May, 1998, in India and Pakistan have stimulated a renewed interest in yield estimation, based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.
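    A minimal sketch of the classical calibration step, assuming the standard log-linear magnitude-yield relation m = a + b·log10(W): fit the line on calibrated events, then invert an observed magnitude for yield. The calibration pairs below are invented for illustration, not Semipalatinsk data.

```python
import numpy as np

# Hypothetical calibration events: yields W (kt) and body-wave magnitudes m,
# roughly following m = a + b*log10(W) with scatter.
W_cal = np.array([5.0, 12.0, 30.0, 60.0, 120.0, 150.0])
m_cal = np.array([4.35, 4.71, 5.10, 5.43, 5.69, 5.80])

# Least-squares fit of the calibration line.
x = np.log10(W_cal)
b, a = np.polyfit(x, m_cal, 1)

# Invert an observed magnitude (e.g. 5.2) for a point estimate of yield.
m_obs = 5.2
W_hat = 10 ** ((m_obs - a) / b)
print(f"a = {a:.2f}, b = {b:.2f}, estimated yield ~ {W_hat:.0f} kt")
```

    This inversion is only the classical point estimate; the Fieller-type intervals used in the paper account for uncertainty in both a and b, which a naive interval around W_hat would understate.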

  18. Trial aerial survey of sea otters in Prince William Sound, Alaska, 1993. Restoration project 93043-2. Exxon Valdez oil spill restoration project final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bodkin, J.L.; Udevitz, M.S.

    1996-05-01

    We developed an aerial survey method for sea otters, using a strip transect design in which otters observed in a strip along one side of the aircraft are counted. Two strata are sampled: one lies close to shore and/or in shallow water; the other lies offshore and over deeper water. We estimate the proportion of otters not seen by the observer by conducting intensive searches of units (ISUs) within strips when otters are observed. The first study found no significant differences in sea otter detection probabilities between ISUs initiated by the sighting of an otter group and systematically located ISUs. The second study consisted of a trial survey of all of Prince William Sound, excluding Orca Inlet. The survey area consisted of 5,017 sq km of water between the shoreline and an offshore boundary based on shoreline physiography, the 100 m depth contour, or a distance of 2 km from shore. From 5-13 August 1993, two observers surveyed 1,023 linear km of high-density sea otter habitat and 355 linear km of low-density habitat.
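    The correction for otters missed by the strip observer can be sketched as a simple ratio estimator: the intensive search units (ISUs) give an estimated detection probability, and strip counts are divided by it before area expansion. All counts below are invented for illustration, not survey data.

```python
# Hypothetical ISU data: otters found during intensive searches vs. otters
# the strip observer had already detected in the same units.
isu_total = 86             # otters counted in intensive searches
isu_seen_by_observer = 60  # of those, detected on the original strip pass

p_hat = isu_seen_by_observer / isu_total   # estimated detection probability

# Correct the raw strip counts, then expand by the sampled area fraction.
strip_count = 1240          # otters counted on surveyed strips
sampled_fraction = 0.18     # fraction of stratum area covered by strips

N_hat = strip_count / p_hat / sampled_fraction
print(f"p_hat = {p_hat:.2f}, abundance estimate ~ {N_hat:.0f} otters")
```

    In practice the two strata would each get their own detection probability and expansion, since sighting conditions differ between nearshore and offshore water.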

  19. Electronic structure and linear optical properties of ZnSe and ZnSe:Mn.

    PubMed

    Su, Kang; Wang, Yuhua

    2010-03-01

    As an important wide band-gap II-VI semiconductor, ZnSe has attracted much attention for its various applications in photo-electronic devices such as blue light-emitting diodes and blue-green diode lasers. Mn-doped ZnSe is an excellent quantum dot material. The electronic structures of sphalerite ZnSe and ZnSe:Mn were calculated using the Vienna ab initio Simulation Package with ultrasoft pseudopotentials, together with Materials Studio. The calculated equilibrium lattice constants agree well with the experimental values. Using the optimized equilibrium lattice constants, the densities of states and energy band structures were further calculated. By analyzing the partial densities of states, the contributions of different electron states in different atoms were estimated. The p states of Zn mostly contribute to the top of the valence band, and the s states of Zn and the s states of Se have major effects on the bottom of the conduction band. The calculated results for ZnSe:Mn show that the band gap narrows from 2.48 to 1.1 eV. The calculated linear optical properties, such as the refractive index and absorption spectrum, are in good agreement with experimental values.

  20. Interplay between Self-Assembled Structures and Energy Level Alignment of Benzenediamine on Au(111) Surfaces

    NASA Astrophysics Data System (ADS)

    Li, Guo; Neaton, Jeffrey

    2015-03-01

    Using van der Waals-corrected density functional theory (DFT) calculations, we study the adsorption of benzenediamine (BDA) molecules on Au(111) surfaces. We find that at low surface coverage the adsorbed molecules prefer to stay isolated from each other in a monomer phase, due to inter-molecular dipole-dipole repulsion. However, when the coverage rises above a critical value of 0.9 nm-2, the adsorbed molecules aggregate into linear structures via hydrogen bonding between amine groups, consistent with recent experiments [Haxton, Zhou, Tamblyn, et al., Phys. Rev. Lett. 111, 265701 (2013)]. Moreover, we find that these linear structures at high density considerably reduce the Au work function relative to the monomer phase. Due to reduced surface polarization effects, we estimate that the resonance energy of the highest occupied molecular orbital of the adsorbed BDA molecule relative to the Au Fermi level is lower than in the monomer phase by more than 0.5 eV, consistent with experimental measurements [Dell'Angela, Kladnik, Cossaro, et al., Nano Lett. 10, 2470 (2010)]. This work was supported by DOE (the JCAP under Award Number DE-SC000499 and the Molecular Foundry of LBNL), with computational resources provided by NERSC.

  1. In situ densimetric measurements as a surrogate for suspended-sediment concentrations in the Rio Puerco, New Mexico

    USGS Publications Warehouse

    Brown, Jeb E.; Gray, John R.; Hornewer, Nancy J.

    2015-01-01

    Surrogate measurements of suspended-sediment concentration (SSC) are increasingly used to provide continuous, high-resolution, and demonstrably accurate data at a reasonable cost. Densimetric data, calculated from the difference between two in situ pressure measurements, exploit variations in real-time streamflow densities to infer SSCs. Unlike other suspended-sediment surrogate technologies based on bulk or digital optics, laser, or hydroacoustics, the accuracy of SSC data estimated using the pressure-difference (also referred to as densimetric) surrogate technology theoretically improves with increasing SSCs. Coupled with streamflow data, continuous suspended-sediment discharges can be calculated using SSC data estimated in real time using the densimetric technology. The densimetric technology was evaluated at the Rio Puerco in New Mexico, a stream where SSC values regularly range from 10,000-200,000 milligrams per liter (mg/L) and have exceeded 500,000 mg/L. The constant-flow dual-orifice bubbler measures pressure using two precision pressure-transducer sensors at vertically aligned fixed locations in a water column. Water density is calculated from the temperature-compensated differential pressure, and SSCs are inferred from the density data. A linear regression model comparing density values to field-measured SSC values yielded an R² of 0.74. Although the application of the densimetric surrogate is likely limited to fluvial systems with SSCs larger than about 10,000 mg/L, based on this and previous studies, the densimetric technology fills a void for monitoring streams with high SSCs.
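    The densimetric inference can be sketched from first principles, assuming a clean two-phase mixture: the pressure difference over a known vertical orifice separation gives the mixture density, and SSC follows from a mixture relation. The particle density (quartz, 2650 kg/m³) and the example numbers are assumptions, not the instrument's calibration.

```python
G = 9.81        # gravitational acceleration, m/s^2
RHO_W = 1000.0  # water density, kg/m^3 (temperature-compensated in practice)
RHO_S = 2650.0  # sediment particle density, kg/m^3 (quartz, an assumption)

def mixture_density(dp_pa, dz_m):
    """Density of the water-sediment mixture from the differential
    pressure (Pa) between two vertically separated orifices (m apart)."""
    return dp_pa / (G * dz_m)

def ssc_mg_per_l(rho_mix):
    """Invert rho_mix = rho_w + C*(1 - rho_w/rho_s) for the concentration
    C (kg/m^3, numerically equal to g/L), then convert to mg/L."""
    c_kg_m3 = (rho_mix - RHO_W) / (1.0 - RHO_W / RHO_S)
    return 1000.0 * c_kg_m3

# Example: 0.5 m orifice separation, 5155 Pa differential pressure.
rho = mixture_density(5155.0, 0.5)
print(f"rho = {rho:.1f} kg/m^3 -> SSC ~ {ssc_mg_per_l(rho):.0f} mg/L")
```

    The linear dependence of density on concentration also explains why accuracy improves with SSC: at 10,000 mg/L the density excess over clear water is only about 6 kg/m³, close to the resolution limit of precision transducers, while at the Rio Puerco's highest concentrations it is two orders of magnitude larger.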

  2. Frequency domain system identification of helicopter rotor dynamics incorporating models with time periodic coefficients

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan

    1997-08-01

    One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response characteristics of such a linear time periodic system exhibit sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because the linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions of the identified harmonic transfer function were then formulated using the spectral density functions both with and without additive noise processes at input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results. 
    Also, the effects of noise processes and of the initial parameter guess on the identification procedure were investigated. To study the effect of elastic modes, a rigid blade with a trailing edge flap excited by a smart actuator was selected, and the system parameters were successfully identified, though at some expense of computational storage and time. In conclusion, the linear time periodic technique substantially improved the accuracy of the identified parameters compared to the linear time invariant technique, and it was robust to noise and to the initial parameter guess. However, an elastic mode of high frequency relative to the system pumping frequency tends to increase the computer storage requirement and computing time.
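    In the time-invariant limit, the spectral-density step of the method reduces to the familiar estimate H(f) = S_xy(f)/S_xx(f); the LTP formulation generalizes exactly this using modulated Fourier series. A NumPy sketch of the LTI version on a synthetic first-order system (not the rotor model):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200000
x = rng.normal(size=n)

# First-order low-pass "system" to identify: y[k] = 0.9*y[k-1] + 0.1*x[k].
y = np.empty(n)
acc = 0.0
for k in range(n):
    acc = 0.9 * acc + 0.1 * x[k]
    y[k] = acc

# Welch-style averaged auto- and cross-spectra with a Hann window.
nseg = 1024
win = np.hanning(nseg)
Pxx = np.zeros(nseg // 2 + 1)
Pxy = np.zeros(nseg // 2 + 1, dtype=complex)
for i in range(n // nseg):
    X = np.fft.rfft(win * x[i * nseg:(i + 1) * nseg])
    Y = np.fft.rfft(win * y[i * nseg:(i + 1) * nseg])
    Pxx += (X.conj() * X).real
    Pxy += X.conj() * Y

# H1 estimator of the frequency response.
H_est = Pxy / Pxx

# Exact response of the recursion for comparison.
w = np.linspace(0.0, np.pi, nseg // 2 + 1)
H_true = 0.1 / (1.0 - 0.9 * np.exp(-1j * w))
err = np.max(np.abs(np.abs(H_est) - np.abs(H_true)))
print(f"max magnitude error = {err:.4f}")
```

    The LTP case replaces x and y here with exponentially modulated versions of the signals, so the single transfer function becomes the harmonic transfer function matrix that couples the sidebands.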

  3. 2D imaging X-ray diagnostic for measuring the current density distribution in a wide-area electron beam produced in a multiaperture diode with plasma cathode

    NASA Astrophysics Data System (ADS)

    Kurkuchekov, V.; Kandaurov, I.; Trunev, Y.

    2018-05-01

    A simple and inexpensive X-ray diagnostic tool was designed for measuring the cross-sectional current density distribution in a low-relativistic pulsed electron beam produced in a source based on an arc-discharge plasma cathode and multiaperture diode-type electron optical system. The beam parameters were as follows: Uacc = 50–110 kV, Ibeam = 20–100 A, τbeam = 0.1–0.3 ms. The beam effective diameter was ca. 7 cm. Based on a pinhole camera, the diagnostic allows one to obtain a 2D profile of electron beam flux distribution on a flat metal target in a single shot. The linearity of the diagnostic system response to the electron flux density was established experimentally. Spatial resolution of the diagnostic was also estimated in special test experiments. The optimal choice of the main components of the diagnostic technique is discussed.

  4. Estimation of the probability of success in petroleum exploration

    USGS Publications Warehouse

    Davis, J.C.

    1977-01-01

    A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A. © 1977 Plenum Publishing Corp.
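    The first probabilistic component, P(discovery | geologic condition), comes from simple counting over the training-area wells. A sketch with invented counts (the well outcomes below are hypothetical, not from the mid-Continent case histories):

```python
# Hypothetical training-area outcomes: each well is (closure_present, producer).
wells = [(True, True)] * 18 + [(True, False)] * 22 \
      + [(False, True)] * 4 + [(False, False)] * 56

def p_success_given(condition, data):
    """Empirical P(producer | condition state) from training wells."""
    subset = [producer for cond, producer in data if cond == condition]
    return sum(subset) / len(subset)

p_given_closure = p_success_given(True, wells)      # 18 / 40
p_given_no_closure = p_success_given(False, wells)  # 4 / 60
print(f"P(success | closure) = {p_given_closure:.2f}, "
      f"P(success | no closure) = {p_given_no_closure:.2f}")
```

    The second component would then discount these probabilities for uncertainty in perceiving the condition away from control wells, for example by mixing the two conditional values with weights derived from the distance-dependent error function or the semivariogram.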

  5. Investigation of a tubular dual-stator flux-switching permanent-magnet linear generator for free-piston energy converter

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Tong, Chengde; Yu, Bin; Zhu, Shaohong; Zhu, Jianguo

    2015-05-01

    This paper describes a tubular dual-stator flux-switching permanent-magnet (PM) linear generator for a free-piston energy converter. The operating principle, topology, and design considerations of the machine are investigated. Taking into account the motion characteristics of the free-piston Stirling engine, a tubular dual-stator PM linear generator is designed by the finite element method. Some major structural parameters, such as the outer and inner radii of the mover, PM thickness, mover tooth width, and the tooth widths of the outer and inner stators, are optimized to improve machine performance, such as thrust capability and power density. In comparison with conventional single-stator PM machines, such as the moving-magnet linear machine and the flux-switching linear machine, the proposed dual-stator flux-switching PM machine offers higher gravimetric and volumetric power density and a lighter mover.

  6. Primordial black holes in linear and non-linear regimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allahyari, Alireza; Abolhasani, Ali Akbar; Firouzjaee, Javad T., E-mail: allahyari@physics.sharif.edu, E-mail: j.taghizadeh.f@ipm.ir

    We revisit the formation of primordial black holes (PBHs) in the radiation-dominated era for both linear and non-linear regimes, elaborating on the concept of an apparent horizon. Contrary to the expectation from vacuum models, we argue that in a cosmological setting a high-density fluctuation does not always collapse to a black hole. To this end, we first elaborate on the perturbation theory for spherically symmetric spacetimes in the linear regime, introducing two gauges. This allows us to define a gauge-invariant quantity for the expansion of null geodesics. Using this quantity, we argue that PBHs do not form in the linear regime irrespective of the density of the background. Finally, we consider the formation of PBHs in non-linear regimes, adopting the spherical collapse picture. In this picture, over-densities are modeled by closed FRW models in the radiation-dominated era. The difference in our approach is that we start by finding an exact solution for a closed radiation-dominated universe. This yields exact results for the turn-around time and radius. Importantly, we take the initial conditions from linear perturbation theory. Additionally, instead of using the uniform Hubble gauge condition, both density and velocity perturbations are admitted in this approach. Thereby, the matching condition imposes an important constraint on the initial velocity perturbations, δ_0^h = −δ_0/2, which can be extended to higher orders. Using this constraint, we find that the apparent horizon of a PBH forms when δ > 3 at the turn-around time. Corrections appear from the third order onward. Moreover, a PBH forms when its apparent horizon is outside the sound horizon at the re-entry time. Applying this condition, we infer that the threshold value of the density perturbations at horizon re-entry should satisfy δ_th > 0.7.

  7. Stochastic modeling of macrodispersion in unsaturated heterogeneous porous media. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, T.C.J.

    1995-02-01

    Spatial heterogeneity of geologic media leads to uncertainty in predicting both flow and transport in the vadose zone. In this work an efficient and flexible, combined analytical-numerical Monte Carlo approach is developed for the analysis of steady-state flow and transient transport processes in highly heterogeneous, variably saturated porous media. The approach is also used for the investigation of the validity of linear, first order analytical stochastic models. With the Monte Carlo analysis accurate estimates of the ensemble conductivity, head, velocity, and concentration mean and covariance are obtained; the statistical moments describing displacement of solute plumes, solute breakthrough at a compliance surface, and time of first exceedance of a given solute flux level are analyzed; and the cumulative probability density functions for solute flux across a compliance surface are investigated. The results of the Monte Carlo analysis show that for very heterogeneous flow fields, and particularly in anisotropic soils, the linearized, analytical predictions of soil water tension and soil moisture flux become erroneous. Analytical, linearized Lagrangian transport models also overestimate both the longitudinal and the transverse spreading of the mean solute plume in very heterogeneous soils and in dry soils. A combined analytical-numerical conditional simulation algorithm is also developed to estimate the impact of in-situ soil hydraulic measurements on reducing the uncertainty of concentration and solute flux predictions.
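    The Monte Carlo side of such an approach can be sketched in miniature: draw many realizations of a spatially correlated log-conductivity field, propagate each through a flow calculation, and form ensemble moments. This toy replaces the variably saturated physics with steady 1-D flow through cells in series, whose effective conductivity is simply the harmonic mean; everything here is illustrative, not the report's model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_real = 64, 2000

# Low-pass filter in Fourier space -> spatially correlated fields.
k = np.fft.rfftfreq(n_cells)
filt = np.exp(-0.5 * (2 * np.pi * k * 8) ** 2)  # correlation length ~8 cells
filt[0] = 0.0                                   # enforce zero-mean fields

def log_k_field():
    """One realization of a correlated, unit-variance log-conductivity field."""
    white = rng.normal(size=n_cells)
    f = np.fft.irfft(np.fft.rfft(white) * filt, n=n_cells)
    return f / f.std()

# For steady 1-D flow through cells in series, the effective conductivity
# is the harmonic mean of the cell conductivities K = exp(logK).
eff = np.empty(n_real)
for i in range(n_real):
    K = np.exp(log_k_field())
    eff[i] = n_cells / np.sum(1.0 / K)

print(f"ensemble mean K_eff = {eff.mean():.3f}, std = {eff.std():.3f}")
```

    The ensemble mean falls below the geometric mean of the cells (which is 1 here by construction), the kind of systematic effect of heterogeneity that linearized analytical models can misestimate when the variance is large.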

  8. Alcohol Availability and Intimate Partner Violence Among US Couples

    PubMed Central

    McKinney, Christy M.; Caetano, Raul; Harris, Theodore Robert; Ebama, Malembe S.

    2008-01-01

    Objectives We examined the relation between alcohol outlet density (the number of alcohol outlets per capita by zip code) and male-to-female partner violence (MFPV) or female-to-male partner violence (FMPV). We also investigated whether binge drinking or the presence of alcohol-related problems altered the relationship between alcohol outlet density and MFPV or FMPV. Methods We linked individual and couple sociodemographic and behavioral data from a 1995 national population-based sample of 1,597 couples to alcohol outlet data and 1990 US Census sociodemographic information. We used logistic regression for survey data to estimate unadjusted and adjusted odds ratios between alcohol outlet density and MFPV or FMPV along with 95% confidence intervals (CIs) and p-values. We used a design-based Wald test to derive a p-value for multiplicative interaction to assess the role of binge drinking and alcohol-related problems. Results In adjusted analysis, an increase of one alcohol outlet per 10,000 persons was associated with a 1.03-fold increased risk of MFPV (p-value for linear trend = 0.01) and a 1.011-fold increased risk of FMPV (p-value for linear trend = 0.48). An increase of 10 alcohol outlets per 10,000 persons was associated with 34% and 12% increased risk of MFPV and FMPV respectively, though the CI for the association with FMPV was compatible with no increased risk. The relationship between alcohol outlet density and MFPV was stronger among couples reporting alcohol-related problems than those reporting no problems (p-value for multiplicative interaction = 0.01). Conclusions We found that as alcohol outlet density increases so does the risk of MFPV and that this relationship may differ for couples who do and do not report alcohol-related problems. 
Given that MFPV accounts for the majority of injuries related to intimate partner violence, policy makers may wish to carefully consider the potential benefit of limiting alcohol outlet density to reduce MFPV and its adverse consequences. PMID:18976345
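    The reported per-outlet odds ratios compound multiplicatively, which is where the 34% and 12% figures for 10 additional outlets come from; the arithmetic can be checked directly:

```python
# Adjusted per-outlet odds ratios reported for MFPV and FMPV.
or_mfpv_per_outlet = 1.03
or_fmpv_per_outlet = 1.011

# Effect of 10 additional outlets per 10,000 persons: exponentiate the
# per-unit odds ratio, since log-odds are linear in outlet density.
or_mfpv_10 = or_mfpv_per_outlet ** 10
or_fmpv_10 = or_fmpv_per_outlet ** 10
print(f"MFPV: {or_mfpv_10:.2f} (~{(or_mfpv_10 - 1) * 100:.0f}% increase)")
print(f"FMPV: {or_fmpv_10:.2f} (~{(or_fmpv_10 - 1) * 100:.0f}% increase)")
```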

  9. Alcohol availability and intimate partner violence among US couples.

    PubMed

    McKinney, Christy M; Caetano, Raul; Harris, Theodore Robert; Ebama, Malembe S

    2009-01-01

    We examined the relation between alcohol outlet density (the number of alcohol outlets per capita by zip code) and male-to-female partner violence (MFPV) or female-to-male partner violence (FMPV). We also investigated whether binge drinking or the presence of alcohol-related problems altered the relationship between alcohol outlet density and MFPV or FMPV. We linked individual and couple sociodemographic and behavioral data from a 1995 national population-based sample of 1,597 couples to alcohol outlet data and 1990 US Census sociodemographic information. We used logistic regression for survey data to estimate unadjusted and adjusted odds ratios between alcohol outlet density and MFPV or FMPV along with 95% confidence intervals (CIs) and p-values. We used a design-based Wald test to derive a p-value for multiplicative interaction to assess the role of binge drinking and alcohol-related problems. In adjusted analysis, an increase of one alcohol outlet per 10,000 persons was associated with a 1.03-fold increased risk of MFPV (p-value for linear trend = 0.01) and a 1.011-fold increased risk of FMPV (p-value for linear trend = 0.48). An increase of 10 alcohol outlets per 10,000 persons was associated with 34% and 12% increased risk of MFPV and FMPV respectively, though the CI for the association with FMPV was compatible with no increased risk. The relationship between alcohol outlet density and MFPV was stronger among couples reporting alcohol-related problems than those reporting no problems (p-value for multiplicative interaction = 0.01). We found that as alcohol outlet density increases so does the risk of MFPV and that this relationship may differ for couples who do and do not report alcohol-related problems. Given that MFPV accounts for the majority of injuries related to intimate partner violence, policy makers may wish to carefully consider the potential benefit of limiting alcohol outlet density to reduce MFPV and its adverse consequences.

  10. Non-linear feeding functional responses in the Greater Flamingo (Phoenicopterus roseus) predict immediate negative impact of wetland degradation on this flagship species

    PubMed Central

    Deville, Anne-Sophie; Grémillet, David; Gauthier-Clerc, Michel; Guillemain, Matthieu; Von Houwald, Friederike; Gardelli, Bruno; Béchet, Arnaud

    2013-01-01

    Accurate knowledge of the functional response of predators to prey density is essential for understanding food web dynamics, to parameterize mechanistic models of animal responses to environmental change, and for designing appropriate conservation measures. Greater flamingos (Phoenicopterus roseus), a flagship species of Mediterranean wetlands, primarily feed on Artemias (Artemia spp.) in commercial salt pans, an industry which may collapse for economic reasons. Flamingos also feed on alternative prey such as Chironomid larvae (e.g., Chironomus spp.) and rice seeds (Oryza sativa). However, the profitability of these food items for flamingos remains unknown. We determined the functional responses of flamingos feeding on Artemias, Chironomids, or rice. Experiments were conducted on 11 captive flamingos. For each food item, we offered different ranges of food densities, up to 13 times natural abundance. Video footage allowed us to estimate intake rates. Contrary to theoretical predictions for filter feeders, intake rates did not increase linearly with increasing food density (type I). Intake rates rather increased asymptotically with increasing food density (type II) or followed a sigmoid shape (type III). Hence, flamingos were not able to ingest food in direct proportion to its abundance, possibly because of their unique bill structure resulting in limited filtering capabilities. Overall, flamingos foraged more efficiently on Artemias. When feeding on Chironomids, birds had lower instantaneous rates of food discovery and required more time to extract food from the sediment and ingest it than when filtering Artemias from the water column. However, feeding on rice was energetically more profitable for flamingos than feeding on Artemias or Chironomids, explaining their attraction to rice fields. Crucially, we found that the food densities required for flamingos to reach asymptotic intake rates are rarely met under natural conditions. This allows us to predict an immediate negative effect of any decrease in prey density upon flamingo foraging performance. PMID:23762525

  11. Large Impact of Eurasian Lynx Predation on Roe Deer Population Dynamics

    PubMed Central

    Andrén, Henrik; Liberg, Olof

    2015-01-01

    The effects of predation on ungulate populations depend on several factors. One of the most important factors is the proportion of predation that is additive, as opposed to compensatory, to other mortality in the prey, i.e., the relative effect of top-down and bottom-up processes. We estimated Eurasian lynx (Lynx lynx) kill rate on roe deer (Capreolus capreolus) using radio-collared lynx. Kill rate was strongly affected by lynx social status. For males it was 4.85 ± 1.30 S.E. roe deer per 30 days, for females with kittens 6.23 ± 0.83 S.E., and for solitary females 2.71 ± 0.47 S.E. We found very weak support for effects of prey density (both for Type I (linear) and Type II (non-linear) functional responses) and of season (winter, summer) on lynx kill rate. Additionally, we analysed the growth rate of a roe deer population from 1985 to 2005 in an area that lynx naturally re-colonized in 1996. The annual roe deer growth rate was lower after lynx re-colonized the study area, but it was also negatively influenced by roe deer density. Before lynx colonized the area, the roe deer growth rate was λ = 1.079 (± 0.061 S.E.), while after lynx re-colonization it was λ = 0.94 (± 0.051 S.E.). Thus, the growth rate in the roe deer population decreased by Δλ = 0.14 (± 0.080 S.E.) after lynx re-colonized the study area, which corresponded to the estimated lynx predation rate on roe deer (0.11 ± 0.042 S.E.), suggesting that lynx predation was mainly additive to other mortality in roe deer. To conclude, this study suggests that lynx predation and density-dependent factors both influence roe deer population dynamics. Thus, both top-down and bottom-up processes operated at the same time in this predator-prey system. PMID:25806949
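    The additivity argument in this record rests on a comparison that can be checked directly (central values from the abstract; standard errors omitted):

```python
# Drop in annual roe deer growth rate after lynx re-colonization
lam_before = 1.079   # growth rate without lynx
lam_after = 0.94     # growth rate with lynx
delta_lam = lam_before - lam_after

predation_rate = 0.11  # estimated lynx predation rate on roe deer
# The drop (~0.14) is close to the predation rate (~0.11), which is what
# one expects if predation is largely additive to other mortality.
print(round(delta_lam, 2))
```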

  12. Cosmic clocks: a tight radius-velocity relationship for H I-selected galaxies

    NASA Astrophysics Data System (ADS)

    Meurer, Gerhardt R.; Obreschkow, Danail; Wong, O. Ivy; Zheng, Zheng; Audcent-Ross, Fiona M.; Hanish, D. J.

    2018-05-01

    H I-selected galaxies obey a linear relationship between their maximum detected radius Rmax and rotational velocity. This result covers measurements in the optical, ultraviolet, and H I emission in galaxies spanning a factor of 30 in size and velocity, from small dwarf irregulars to the largest spirals. Hence, galaxies behave as clocks, rotating once a Gyr at the very outskirts of their discs. Observations of a large optically selected sample are consistent, implying this relationship is generic to disc galaxies in the low redshift Universe. A linear radius-velocity relationship is expected from simple models of galaxy formation and evolution. The total mass within Rmax has collapsed by a factor of 37 compared to the present mean density of the Universe. Adopting standard assumptions, we find a mean halo spin parameter λ in the range 0.020-0.035. The dispersion in λ, 0.16 dex, is smaller than expected from simulations. This may be due to the biases in our selection of disc galaxies rather than all haloes. The estimated mass densities of stars and atomic gas at Rmax are similar (˜0.5 M⊙ pc-2), indicating outer discs are highly evolved. The gas consumption and stellar population build time-scales are hundreds of Gyr, hence star formation is not driving the current evolution of outer discs. The estimated ratio between Rmax and disc scalelength is consistent with long-standing predictions from monolithic collapse models. Hence, it remains unclear whether disc extent results from continual accretion, a rapid initial collapse, secular evolution, or a combination thereof.
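    The "cosmic clock" statement follows from the period formula T = 2πR/V: a linear radius-velocity relation means R/V, and hence the orbital period at Rmax, is roughly constant. A minimal sketch with illustrative numbers (not taken from the paper):

```python
import math

KM_PER_KPC = 3.0857e16   # kilometres in one kiloparsec
S_PER_GYR = 3.1557e16    # seconds in one gigayear

def orbital_period_gyr(r_kpc, v_kms):
    """Orbital period T = 2*pi*R / V at radius R, converted to Gyr."""
    return 2 * math.pi * r_kpc * KM_PER_KPC / v_kms / S_PER_GYR

# Hypothetical example: a disc edge at 30 kpc rotating at ~185 km/s
# completes roughly one turn per Gyr; doubling both R and V (the linear
# relation) leaves the period unchanged.
print(round(orbital_period_gyr(30, 185), 2))
```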

  13. Improved Phase Corrections for Transoceanic Tsunami Data in Spatial and Temporal Source Estimation: Application to the 2011 Tohoku Earthquake

    NASA Astrophysics Data System (ADS)

    Ho, Tung-Cheng; Satake, Kenji; Watada, Shingo

    2017-12-01

    Systematic travel time delays of up to 15 min relative to the linear long waves for transoceanic tsunamis have been reported. A phase correction method, which converts the linear long waves into dispersive waves, was previously proposed to consider seawater compressibility, the elasticity of the Earth, and gravitational potential change associated with tsunami motion. In the present study, we improved this method by incorporating the effects of ocean density stratification, actual tsunami raypath, and actual bathymetry. The previously considered effects accounted for approximately 74% of the travel time delay correction, while the ocean density stratification, actual raypath, and actual bathymetry contributed approximately 13%, 4%, and 9% on average, respectively. The improved phase correction method accounted for almost all the travel time delay at far-field stations. We performed single and multiple time window inversions for the 2011 Tohoku tsunami using the far-field data (>3 h travel time) to investigate the initial sea surface displacement. The inversion result from only far-field data was similar to but smoother than that from near-field data and all stations, including a large sea surface rise increasing toward the trench followed by a migration northward along the trench. For the forward simulation, our results showed good agreement between the observed and computed waveforms at both near-field and far-field tsunami gauges, as well as with satellite altimeter data. The present study demonstrates that the improved method provides a more accurate estimate for the waveform inversion and forward prediction of far-field data.

  14. Stabilization of electron-scale turbulence by electron density gradient in national spherical torus experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz Ruiz, J.; White, A. E.; Ren, Y.

    2015-12-15

    Theory and experiments have shown that electron temperature gradient (ETG) turbulence on the electron gyro-scale, k⊥ρe ≲ 1, can be responsible for anomalous electron thermal transport in NSTX. Electron-scale (high-k) turbulence is diagnosed in NSTX with a high-k microwave scattering system [D. R. Smith et al., Rev. Sci. Instrum. 79, 123501 (2008)]. Here we report on stabilization effects of the electron density gradient on electron-scale density fluctuations in a set of neutral beam injection heated H-mode plasmas. We found that the absence of high-k density fluctuations from measurements is correlated with large equilibrium density gradient, which is shown to be consistent with linear stabilization of ETG modes due to the density gradient using the analytical ETG linear threshold in F. Jenko et al. [Phys. Plasmas 8, 4096 (2001)] and linear gyrokinetic simulations with GS2 [M. Kotschenreuther et al., Comput. Phys. Commun. 88, 128 (1995)]. We also found that the observed power of electron-scale turbulence (when it exists) is anti-correlated with the equilibrium density gradient, suggesting density gradient as a nonlinear stabilizing mechanism. Higher density gradients give rise to lower values of the plasma frame frequency, calculated based on the Doppler shift of the measured density fluctuations. Linear gyrokinetic simulations show that higher values of the electron density gradient reduce the value of the real frequency, in agreement with experimental observation. Nonlinear electron-scale gyrokinetic simulations show that high electron density gradient reduces electron heat flux and stiffness, and increases the ETG nonlinear threshold, consistent with experimental observations.

  15. Weighted linear least squares estimation of diffusion MRI parameters: strengths, limitations, and pitfalls.

    PubMed

    Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben

    2013-11-01

    Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
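    The two weighting strategies compared in this record can be sketched on a toy problem. A minimal mono-exponential decay stands in for the full diffusion tensor model (the weighting question is identical); all numbers are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy signal S = S0*exp(-b*D) with additive Gaussian noise
b = np.linspace(0, 3000, 16)                 # b-values, s/mm^2
S0_true, D_true = 1000.0, 1.0e-3             # hypothetical ground truth
S = S0_true * np.exp(-b * D_true) + rng.normal(0, 5, b.size)

X = np.column_stack([np.ones_like(b), -b])   # design for log(S) = log(S0) - b*D
y = np.log(S)

def wlls(w):
    """Weighted linear least squares: solve (X^T W X) beta = X^T W y."""
    return np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# Strategy (a): weights = squared *noisy* signals (noise leaks into W)
beta_a = wlls(S ** 2)

# Strategy (b): multi-step, re-weighting with squared *predicted* signals
beta_b = wlls(np.ones_like(b))               # start from ordinary LS
for _ in range(3):
    beta_b = wlls(np.exp(X @ beta_b) ** 2)

print(beta_a[1], beta_b[1])                  # both estimates of D, near 1e-3
```

    The weights matter because taking the logarithm makes the noise heteroscedastic: low-signal (high-b) points get noisier in log space, and unweighted or naively weighted fits over-trust them.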

  16. Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Osler, John C

    2010-12-01

    This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.

  17. Cross-sectional study to assess the association of population density with predicted breast cancer risk.

    PubMed

    Lee, Jeannette Y; Klimberg, Suzanne; Bondurant, Kristina L; Phillips, Martha M; Kadlubar, Susan A

    2014-01-01

    The Gail and CARE models estimate breast cancer risk for white and African-American (AA) women, respectively. The aims of this study were to compare metropolitan and nonmetropolitan women with respect to predicted breast cancer risks based on known risk factors, and to determine if population density was an independent risk factor for breast cancer risk. A cross-sectional survey was completed by 15,582 women between 35 and 85 years of age with no history of breast cancer. Metropolitan and nonmetropolitan women were compared with respect to risk factors, and breast cancer risk estimates, using general linear models adjusted for age. For both white and AA women, risk factors used to estimate breast cancer risk included age at menarche, history of breast biopsies, and family history. For white women, age at first childbirth was an additional risk factor. In comparison to their nonmetropolitan counterparts, metropolitan white women were more likely to report having a breast biopsy, have family history of breast cancer, and delay childbirth. Among white metropolitan and nonmetropolitan women, mean estimated 5-year risks were 1.44% and 1.32% (p < 0.001), and lifetime risks of breast cancer were 10.81% and 10.01% (p < 0.001), respectively. AA metropolitan residents were more likely than those from nonmetropolitan areas to have had a breast biopsy. Among AA metropolitan and nonmetropolitan women, mean estimated 5-year risks were 1.16% and 1.12% (p = 0.039) and lifetime risks were 8.94% and 8.85% (p = 0.344). Metropolitan residence was associated with higher predicted breast cancer risks for white women. Among AA women, metropolitan residence was associated with a higher predicted breast cancer risk at 5 years, but not over a lifetime. Population density was not an independent risk factor for breast cancer. © 2014 Wiley Periodicals, Inc.

  18. Nonlinear stability of solar type 3 radio bursts. 1: Theory

    NASA Technical Reports Server (NTRS)

    Smith, R. A.; Goldstein, M. L.; Papadopoulos, K.

    1978-01-01

    A theory of the excitation of solar type 3 bursts is presented. Electrons initially unstable to the linear bump-in-tail instability are shown to rapidly amplify Langmuir waves to energy densities characteristic of strong turbulence. The three-dimensional equations which describe the strong coupling (wave-wave) interactions are derived. For parameters characteristic of the interplanetary medium the equations reduce to one dimension. In this case, the oscillating two stream instability (OTSI) is the dominant nonlinear instability, and is stabilized through the production of nonlinear ion density fluctuations that efficiently scatter Langmuir waves out of resonance with the electron beam. An analytical model of the electron distribution function is also developed which is used to estimate the total energy losses suffered by the electron beam as it propagates from the solar corona to 1 A.U. and beyond.

  19. On the estimation and detection of the Rees-Sciama effect

    NASA Astrophysics Data System (ADS)

    Fullana, M. J.; Arnau, J. V.; Thacker, R. J.; Couchman, H. M. P.; Sáez, D.

    2017-02-01

    Maps of the Rees-Sciama (RS) effect are simulated using the parallel N-body code, HYDRA, and a run-time ray-tracing procedure. A method designed for the analysis of small, square cosmic microwave background (CMB) maps is applied to our RS maps. Each of these techniques has been tested and successfully applied in previous papers. Within a range of angular scales, our estimate of the RS angular power spectrum due to variations in the peculiar gravitational potential on scales smaller than 42 h-1 Mpc is shown to be robust. An exhaustive study of the redshifts and spatial scales relevant for the production of RS anisotropy is developed for the first time. Results from this study demonstrate that (I) to estimate the full integrated RS effect, the initial redshift for the calculations (integration) must be greater than 25, (II) the effect produced by strongly non-linear structures is very small and peaks at angular scales close to 4.3 arcmin, and (III) the RS anisotropy cannot be detected either directly, in temperature CMB maps, or by looking for cross-correlations between these maps and tracers of the dark matter distribution. To estimate the RS effect produced by scales larger than 42 h-1 Mpc, where the density contrast is not strongly non-linear, high accuracy N-body simulations appear unnecessary. Simulations based on approximations such as the Zel'dovich approximation and adhesion prescriptions, for example, may be adequate. These results can be used to guide the design of future RS simulations.

  20. Home range and space use patterns of flathead catfish during the summer-fall period in two Missouri streams

    USGS Publications Warehouse

    Vokoun, Jason C.; Rabeni, Charles F.

    2005-01-01

    Flathead catfish Pylodictis olivaris were radio-tracked in the Grand River and Cuivre River, Missouri, from late July until they moved to overwintering habitats in late October. Fish moved within a definable area, and although occasional long-distance movements occurred, the fish typically returned to the previously occupied area. Seasonal home range was calculated with the use of kernel density estimation, which can be interpreted as a probabilistic utilization distribution that documents the internal structure of the estimate by delineating portions of the range that was used a specified percentage of the time. A traditional linear range also was reported. Most flathead catfish (89%) had one 50% kernel-estimated core area, whereas 11% of the fish split their time between two core areas. Core areas were typically in the middle of the 90% kernel-estimated home range (58%), although several had core areas in upstream (26%) and downstream (16%) portions of the home range. Home-range size did not differ based on river, sex, or size and was highly variable among individuals. The median 95% kernel estimate was 1,085 m (range, 70– 69,090 m) for all fish. The median 50% kernel-estimated core area was 135 m (10–2,260 m). The median linear range was 3,510 m (150–50,400 m). Fish pairs with core areas in the same and neighboring pools had static joint space use values of up to 49% (area of intersection index), indicating substantial overlap and use of the same area. However, all fish pairs had low dynamic joint space use values (<0.07; coefficient of association), indicating that fish pairs were temporally segregated, rarely occurring in the same location at the same time.
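    The kernel-based utilization distribution described above can be sketched in one dimension, which suits a stream habitat: smooth the relocations with a kernel, then find the smallest set of locations containing 50% (core area) and 95% (home range) of the use. Everything below (relocations, bandwidth) is hypothetical, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical relocations of one fish along a stream axis, in metres:
# a main pool plus a second, less-used pool 800 m upstream
locs = np.concatenate([rng.normal(0, 60, 40), rng.normal(800, 40, 10)])

grid = np.linspace(locs.min() - 300, locs.max() + 300, 2000)
h = 50.0  # Gaussian smoothing bandwidth in metres (an assumption)
dens = np.exp(-0.5 * ((grid[:, None] - locs[None, :]) / h) ** 2).sum(axis=1)
dens /= dens.sum()  # discrete utilization distribution over the grid

def kernel_range(p):
    """Total length of the smallest set of grid cells holding fraction p of use."""
    order = np.argsort(dens)[::-1]                 # highest-use cells first
    keep = order[np.cumsum(dens[order]) <= p]
    cell = grid[1] - grid[0]
    return keep.size * cell

print(kernel_range(0.95) > kernel_range(0.50))     # core area is smaller
```

    Because the highest-density cells are taken first, the 50% contour picks out the core area(s), and a bimodal distribution naturally yields two cores, as reported for 11% of the fish.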

  1. 3-D time-domain induced polarization tomography: a new approach based on a source current density formulation

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Revil, A.

    2018-04-01

    Induced polarization (IP) of porous rocks can be associated with a secondary source current density, which is proportional to both the intrinsic chargeability and the primary (applied) current density. This gives the possibility of reformulating the time domain induced polarization (TDIP) problem as a time-dependent self-potential-type problem. This new approach implies a change of strategy regarding data acquisition and inversion, allowing major time savings for both. For inverting TDIP data, we first retrieve the electrical resistivity distribution. Then, we use this electrical resistivity distribution to reconstruct the primary current density during the injection/retrieval of the (primary) current between the current electrodes A and B. The time-lapse secondary source current density distribution is determined given the primary source current density and a distribution of chargeability (forward modelling step). The inverse problem is linear between the secondary voltages (measured at all the electrodes) and the computed secondary source current density. A kernel matrix relating the secondary observed voltages data to the source current density model is computed once (using the electrical conductivity distribution), and then used throughout the inversion process. This recovered source current density model is in turn used to estimate the time-dependent chargeability (normalized voltages) in each cell of the domain of interest. Assuming a Cole-Cole model for simplicity, we can reconstruct the 3-D distributions of the relaxation time τ and the Cole-Cole exponent c by fitting the intrinsic chargeability decay curve to a Cole-Cole relaxation model for each cell. Two simple cases are studied in detail to explain this new approach. In the first case, we estimate the Cole-Cole parameters as well as the source current density field from a synthetic TDIP data set.
Our approach is successfully able to reveal the presence of the anomaly and to invert its Cole-Cole parameters. In the second case, we perform a laboratory sandbox experiment in which we mix a volume of burning coal and sand. The algorithm is able to localize the burning coal both in terms of electrical conductivity and chargeability.

  2. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (x̄ = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (x̄ = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
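    The mark-recapture route to density is a two-step calculation: estimate abundance, then divide by an effective trapped area. A minimal sketch with hypothetical numbers (the abstract's point is that the area term is the weak link):

```python
# Hypothetical sketch: abundance estimate -> density estimate.
# None of these numbers come from the study.
n_hat = 27.0          # estimated abundance from a CAPTURE-style model
se_n = 2.0            # its standard error
trap_area_ha = 12.0   # effective trapped area (grid + boundary strip), ha

density = n_hat / trap_area_ha
# If the area is treated as exactly known (a strong assumption), the
# abundance CV carries over directly to the density estimate; any error
# in the assumed home-range/boundary strip is simply ignored.
cv = se_n / n_hat
print(round(density, 2), round(100 * cv, 1))
```

    This is why the abstract warns that mark-recapture density variances may be underestimated: uncertainty in the effective area never enters the CV.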

  3. Structural and vibrational characteristics of a non-linear optical material 3-(4-nitrophenyl)-1-(pyridine-3-yl) prop-2-en-1-one probed by quantum chemical computation and spectroscopic techniques

    NASA Astrophysics Data System (ADS)

    Kumar, Ram; Karthick, T.; Tandon, Poonam; Agarwal, Parag; Menezes, Anthoni Praveen; Jayarama, A.

    2018-07-01

    Chalcone and its derivatives are well-known for their high non-linear optical behavior and charge transfer characteristics. The effectiveness of charge transfer via the ethylenic group and the increase in NLO response of the chalcone upon substitution are of great interest. The present study focuses on the structural, charge transfer, and non-linear optical properties of a new chalcone derivative "3-(4-nitrophenyl)-1-(pyridine-3-yl) prop-2-en-1-one" (hereafter abbreviated as 4NP3AP). To accomplish this task, we have incorporated experimental FT-IR, FT-Raman, and UV-vis spectroscopic studies along with quantum chemical calculations. The frequency assignments of peaks in IR and Raman have been done on the basis of potential energy distribution, and the results were compared with earlier reports on similar kinds of molecules. For obtaining the electronic transition details of 4NP3AP, the UV-vis spectrum has been simulated in both gaseous and solvent phase using time-dependent density functional theory (TD-DFT). The HOMO-LUMO energy gap, the most important factor for studying the charge transfer properties of the molecule, has been calculated. The electron density surface map corresponding to the net electrostatic point charges has been generated to obtain the electrophilic and nucleophilic sites. The charge transfer originating from the occupied (donor) and unoccupied (acceptor) molecular orbitals has been analyzed with the help of natural bond orbital theory. Moreover, the estimation of the second-hyperpolarizability of the molecule confirms its non-linear optical behavior.

  4. Observation of low magnetic field density peaks in helicon plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barada, Kshitish K.; Chattopadhyay, P. K.; Ghosh, J.

    2013-04-15

    A single density peak has commonly been observed in low magnetic field (<100 G) helicon discharges. In this paper, we report observations of multiple density peaks in low magnetic field (<100 G) helicon discharges produced in the linear helicon plasma device [Barada et al., Rev. Sci. Instrum. 83, 063501 (2012)]. Experiments are carried out using argon gas with an m = +1 right helical antenna operating at 13.56 MHz by varying the magnetic field from 0 G to 100 G. The plasma density varies with the magnetic field at constant input power and gas pressure and reaches its peak value at a magnetic field of ≈25 G. Another peak of smaller magnitude in density has been observed near 50 G. Measurement of the amplitude and phase of the axial component of the wave using magnetic probes for the two magnetic field values corresponding to the observed density peaks indicated the existence of radial modes. The measured parallel wave number together with the estimated perpendicular wave number suggests oblique mode propagation of helicon waves along the resonance cone boundary for these magnetic field values. Further, observations of larger floating potential fluctuations measured with Langmuir probes at those magnetic field values indicate that, near the resonance cone boundary, these electrostatic fluctuations take energy from the helicon wave and deposit power into the plasma, causing the density peaks.

  5. Analytical potential-density pairs for bars

    NASA Astrophysics Data System (ADS)

    Vogt, D.; Letelier, P. S.

    2010-11-01

    An identity that relates multipolar solutions of the Einstein equations to Newtonian potentials of bars with linear densities proportional to Legendre polynomials is used to construct analytical potential-density pairs of infinitesimally thin bars with a given linear density profile. By means of a suitable transformation, softened bars that are free of singularities are also obtained. As an application we study the equilibrium points and stability for the motion of test particles in the gravitational field for three models of rotating bars.

  6. A new approach for the calculation of response spectral density of a linear stationary random multidegree of freedom system

    NASA Astrophysics Data System (ADS)

    Sharan, A. M.; Sankar, S.; Sankar, T. S.

    1982-08-01

    A new approach for the calculation of response spectral density for a linear stationary random multidegree of freedom system is presented. The method is based on modifying the stochastic dynamic equations of the system by using a set of auxiliary variables. The response spectral density matrix obtained by using this new approach contains the spectral densities and the cross-spectral densities of the system generalized displacements and velocities. The new method requires significantly less computation time as compared to the conventional method for calculating response spectral densities. Two numerical examples are presented to compare quantitatively the computation time.
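    The quantity being computed here is the textbook input-output relation for linear stationary random vibration, S_yy(ω) = H(ω) S_ff(ω) H(ω)^H. A minimal single-degree-of-freedom sketch (the paper treats the multi-DOF case; parameters below are hypothetical):

```python
import numpy as np

# Single-DOF oscillator m*x'' + c*x' + k*x = f(t) under white-noise forcing
m, c, k = 1.0, 0.4, 100.0          # mass, damping, stiffness (assumed)
S_ff = 1.0                          # constant (white) force PSD

w = np.linspace(0.1, 30, 1000)      # frequency grid, rad/s
H = 1.0 / (k - m * w**2 + 1j * c * w)   # receptance FRF H(w)
S_xx = np.abs(H) ** 2 * S_ff            # response PSD: |H|^2 * input PSD

# The response PSD peaks near the natural frequency sqrt(k/m) = 10 rad/s.
print(round(w[np.argmax(S_xx)], 1))
```

    In the multi-DOF case H(ω) becomes a matrix and the product yields the full response spectral density matrix, including the cross-spectral densities between generalized coordinates that the abstract mentions.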

  7. Background risk of breast cancer and the association between physical activity and mammographic density.

    PubMed

    Trinh, Thang; Eriksson, Mikael; Darabi, Hatef; Bonn, Stephanie E; Brand, Judith S; Cuzick, Jack; Czene, Kamila; Sjölander, Arvid; Bälter, Katarina; Hall, Per

    2015-04-02

    High physical activity has been shown to decrease the risk of breast cancer, potentially by a mechanism that also reduces mammographic density. We tested the hypothesis that the risk of developing breast cancer in the next 10 years according to the Tyrer-Cuzick prediction model influences the association between physical activity and mammographic density. We conducted a population-based cross-sectional study of 38,913 Swedish women aged 40-74 years. Physical activity was assessed using the validated web-questionnaire Active-Q and mammographic density was measured by the fully automated volumetric Volpara method. The 10-year risk of breast cancer was estimated using the Tyrer-Cuzick (TC) prediction model. Linear regression analyses were performed to assess the association between physical activity and volumetric mammographic density and the potential interaction with the TC breast cancer risk. Overall, high physical activity was associated with lower absolute dense volume. As compared to women with the lowest total activity level (<40 metabolic equivalent hours [MET-h] per day), women with the highest total activity level (≥50 MET-h/day) had an estimated 3.4 cm(3) (95% confidence interval, 2.3-4.7) lower absolute dense volume. The inverse association was seen for any type of physical activity among women with <3.0% TC 10-year risk, but only for total and vigorous activities among women with 3.0-4.9% TC risk, and only for vigorous activity among women with ≥5.0% TC risk. The association between total activity and absolute dense volume was modified by the TC breast cancer risk (P interaction = 0.05). As anticipated, high physical activity was also associated with lower non-dense volume. No consistent association was found between physical activity and percent dense volume. 
Our results suggest that physical activity may decrease breast cancer risk through reducing mammographic density, and that the physical activity needed to reduce mammographic density may depend on background risk of breast cancer.

  8. Effect of object location on the density measurement and Hounsfield conversion in a NewTom 3G cone beam computed tomography unit.

    PubMed

    Lagravère, M O; Carey, J; Ben-Zvi, M; Packota, G V; Major, P W

    2008-09-01

    The purpose of this study was to determine the effect of an object's location in a cone beam CT imaging chamber (CBCT-NewTom 3G) on its apparent density and to develop a linear conversion coefficient for Hounsfield units (HU) to material density (g cm(-3)) for the NewTom 3G Scanner. Three cylindrical models of materials with different densities were constructed and scanned at five different locations in a NewTom 3G Volume Scanner. The average HU value for each model at each location was obtained using two different types of software. Next, five cylinders of different known densities were scanned at the exact centre of a NewTom 3G Scanner. The collected data were analysed using the same two types of software to determine a standard linear relationship between density and HU for each type of software. There is no statistically significant effect of the location of an object within the CBCT scanner on the determination of its density. The linear relationship between the density of an object and the HU of a scan was ρ = 0.001 × HU + 1.19, with an R2 value of 0.893 (where density ρ is measured in g cm(-3)). This equation is to be used over the range 0.4456 g cm(-3) to 1.42 g cm(-3). A linear relationship can be used to determine the density of materials (in the density range of bone) from the HU values of a CBCT scan. This relationship is not affected by the object's location within the scanner itself.
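    The reported calibration is a one-line conversion; a small sketch wrapping it with the study's stated validity range (the range check and function name are our additions, not part of the paper):

```python
def hu_to_density(hu):
    """Linear HU-to-density conversion reported for the NewTom 3G:
    rho = 0.001*HU + 1.19 (g/cm^3), valid roughly 0.4456-1.42 g/cm^3."""
    rho = 0.001 * hu + 1.19
    if not 0.4456 <= rho <= 1.42:
        raise ValueError("outside the calibrated density range")
    return rho

print(round(hu_to_density(0), 2))   # HU = 0 maps to 1.19 g/cm^3
```

    Note the calibration is scanner- and software-specific; the coefficients should not be reused for other CBCT units without re-fitting.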

  9. Metabolic syndrome and mammographic density in Mexican women

    PubMed Central

    Rice, Megan; Biessy, Carine; Lajous, Martin; Bertrand, Kimberly A.; Tamimi, Rulla M.; Torres-Mejía, Gabriela; López-Ridaura, Ruy; Romieu, Isabelle

    2014-01-01

    Background Metabolic syndrome has been associated with an increased risk of breast cancer; however little is known about the association between metabolic syndrome and percent mammographic density, a strong predictor of breast cancer. Methods We analyzed cross-sectional data from 789 premenopausal and 322 postmenopausal women in the Mexican Teacher's Cohort (ESMaestras). Metabolic syndrome was defined according to the harmonized definition. We measured percent density on mammograms using a computer-assisted thresholding method. Multivariable linear regression was used to estimate the association between density and metabolic syndrome, as well as its components by state (Jalisco, Veracruz) and menopausal status (premenopausal, postmenopausal). Results Among premenopausal women in Jalisco, women with metabolic syndrome had higher percent density compared to those without after adjusting for potential confounders including BMI (difference = 4.76, 95%CI: 1.72, 7.81). Among the metabolic syndrome components, only low high-density lipoprotein levels (<50mg/dl) were associated with significantly higher percent density among premenopausal women in Jalisco (difference=4.62, 95%CI: 1.73, 7.52). Metabolic syndrome was not associated with percent density among premenopausal women in Veracruz (difference=-2.91, 95% CI: -7.19, 1.38), nor among postmenopausal women in either state. Conclusion Metabolic syndrome was associated with higher percent density among premenopausal women in Jalisco, Mexico, but was not associated with percent density among premenopausal women in Veracruz, Mexico or among postmenopausal women in either Jalisco or Veracruz. These findings provide some support for a possible role of metabolic syndrome in mammographic density among premenopausal women; however results were inconsistent across states and require further confirmation in larger studies. PMID:23682074

  10. Interpreting the sub-linear Kennicutt-Schmidt relationship: the case for diffuse molecular gas

    NASA Astrophysics Data System (ADS)

    Shetty, Rahul; Clark, Paul C.; Klessen, Ralf S.

    2014-08-01

    Recent statistical analysis of two extragalactic observational surveys strongly indicates a sub-linear Kennicutt-Schmidt (KS) relationship between the star formation rate (ΣSFR) and molecular gas surface density (Σmol). Here, we consider the consequences of these results in the context of common assumptions, as well as observational support for a linear relationship between ΣSFR and the surface density of dense gas. If the CO-traced gas depletion time (τ_dep^CO) is constant, and if CO only traces star-forming giant molecular clouds (GMCs), then the physical properties of each GMC must vary, such as the volume densities or star formation rates. Another possibility is that the conversion between CO luminosity and Σmol, the XCO factor, differs from cloud to cloud. A more straightforward explanation is that CO permeates the hierarchical interstellar medium, including the filaments and lower density regions within which GMCs are embedded. A number of independent observational results support this description, with the diffuse gas comprising at least 30 per cent of the total molecular content. The CO-bright diffuse gas can explain the sub-linear KS relationship, and consequently leads to an increasing τ_dep^CO with Σmol. If ΣSFR correlates linearly with the dense gas surface density, a sub-linear KS relationship indicates that the fraction of diffuse gas fdiff grows with Σmol. In galaxies where Σmol falls towards the outer disc, this description suggests that fdiff also decreases radially.
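
The consequence for the depletion time follows from simple algebra: if ΣSFR = A·Σmol^N with N < 1, then τ_dep = Σmol/ΣSFR grows as Σmol^(1-N). A toy sketch with illustrative values of A and N (not taken from the paper):

```python
# Illustrative sketch: a sub-linear KS law Sigma_SFR = A * Sigma_mol**N
# (N < 1) implies a depletion time tau_dep = Sigma_mol / Sigma_SFR that
# grows as Sigma_mol**(1 - N). The exponent N = 0.8 and normalization
# A = 1e-3 are illustrative values, not taken from the paper.
A, N = 1e-3, 0.8

def tau_dep(sigma_mol):
    sigma_sfr = A * sigma_mol ** N
    return sigma_mol / sigma_sfr   # = sigma_mol**(1 - N) / A

# Doubling the molecular surface density increases tau_dep by 2**(1 - N):
ratio = tau_dep(20.0) / tau_dep(10.0)
print(round(ratio, 3))  # 2**0.2, about 1.149
```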

  11. Estimating linear effects in ANOVA designs: the easy way.

    PubMed

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
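
The contrast-weight idea the note describes can be sketched numerically: applying centered linear contrast weights to each participant's condition means yields per-participant slopes, which equal the OLS slopes and can then be tested within a standard repeated measures ANOVA. The data below are illustrative:

```python
# Minimal sketch of estimating a linear effect with contrast weights,
# as an alternative to per-participant regressions. The data are
# illustrative (3 participants x 4 ordered condition levels).
levels = [1, 2, 3, 4]
data = [
    [400, 420, 445, 460],   # participant 1 condition means (e.g. RTs)
    [380, 395, 410, 430],   # participant 2
    [410, 425, 450, 470],   # participant 3
]

mean_l = sum(levels) / len(levels)
weights = [l - mean_l for l in levels]   # centered: [-1.5, -0.5, 0.5, 1.5]
ssw = sum(w * w for w in weights)        # sum of squared weights = 5.0

# Per-participant slope = contrast score / sum of squared weights;
# this equals the OLS slope of the participant's means on the levels.
slopes = [sum(w * y for w, y in zip(weights, row)) / ssw for row in data]
print(slopes)  # [20.5, 16.5, 20.5]
```

Testing the mean of these slopes against zero is equivalent to testing the linear trend component in the ANOVA.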

  12. Local Volume Hi Survey: the far-infrared radio correlation

    NASA Astrophysics Data System (ADS)

    Shao, Li; Koribalski, Bärbel S.; Wang, Jing; Ho, Luis C.; Staveley-Smith, Lister

    2018-06-01

    In this paper we measure the far-infrared (FIR) and radio flux densities of a sample of 82 local gas-rich galaxies, including 70 "dwarf" galaxies (M* < 10⁹ M⊙), from the Local Volume HI Survey (LVHIS), which is close to volume limited. LVHIS galaxies are found to follow a tight linear FIR-radio correlation (FRC) over four orders of magnitude (F_1.4GHz ∝ F_FIR^{1.00± 0.08}). However, for detected galaxies only, a trend of larger FIR-to-radio ratio with decreasing flux density is observed. We estimate the star formation rate by combining UV and mid-IR data using empirical calibration. Both FIR and radio emission are confirmed to be strongly connected with star formation, but with significant non-linearity. Dwarf galaxies are found to be deficient in both bands when normalized by star formation rate, which requires a "conspiracy" to keep the FIR-to-radio ratio generally constant. Using partial correlation coefficients (in the Pearson definition), we identify the key galaxy properties associated with the FIR and radio deficiency. Some major factors, such as stellar mass surface density, cancel out when taking the ratio between FIR and radio fluxes. The remaining factors, such as the HI-to-stellar mass ratio and galaxy size, are expected to cancel each other due to the distribution of galaxies in the parameter space. Such cancellation is probably responsible for the "conspiracy" that keeps the FRC alive.
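
The partial correlation analysis mentioned above rests on the standard first-order formula for Pearson partial correlation, which removes the part of a pairwise correlation explained by a third variable. A minimal sketch with illustrative coefficients:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order Pearson partial correlation of x and y controlling
    for z, computed from the three pairwise correlation coefficients."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# Illustrative values: a strong raw correlation that largely disappears
# once a shared driver (e.g. star formation rate) is controlled for.
print(round(partial_corr(0.80, 0.85, 0.90), 3))  # 0.152
```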

  13. Brain Tissue Compartment Density Estimated Using Diffusion-Weighted MRI Yields Tissue Parameters Consistent With Histology

    PubMed Central

    Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi

    2015-01-01

    We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (Diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639

  14. Comparison and continuous estimates of fecal coliform and Escherichia coli bacteria in selected Kansas streams, May 1999 through April 2002

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Ziegler, Andrew C.

    2003-01-01

    The sanitary quality of water and its use as a public-water supply and for recreational activities, such as swimming, wading, boating, and fishing, can be evaluated on the basis of fecal coliform and Escherichia coli (E. coli) bacteria densities. This report describes the overall sanitary quality of surface water in selected Kansas streams, the relation between fecal coliform and E. coli, the relation between turbidity and bacteria densities, and how continuous bacteria estimates can be used to evaluate the water-quality conditions in selected Kansas streams. Samples for fecal coliform and E. coli were collected at 28 surface-water sites in Kansas. Of the 318 samples collected, 18 percent exceeded the current Kansas Department of Health and Environment (KDHE) secondary contact recreational, single-sample criterion for fecal coliform (2,000 colonies per 100 milliliters of water). Of the 219 samples collected during the recreation months (April 1 through October 31), 21 percent exceeded the current (2003) KDHE single-sample fecal coliform criterion for secondary contact recreation (2,000 colonies per 100 milliliters of water) and 36 percent exceeded the U.S. Environmental Protection Agency (USEPA) recommended single-sample primary contact recreational criterion for E. coli (576 colonies per 100 milliliters of water). Comparisons of fecal coliform and E. coli criteria indicated that more than one-half of the streams sampled could exceed USEPA recommended E. coli criteria more frequently than the current KDHE fecal coliform criteria. In addition, the ratios of E. coli to fecal coliform (EC/FC) were smallest for sites with slightly saline water (specific conductance greater than 1,000 microsiemens per centimeter at 25 degrees Celsius), indicating that E. coli may not be a good indicator of sanitary quality for those streams. Enterococci bacteria may provide a more accurate assessment of the potential for swimming-related illnesses in these streams.
Ratios of EC/FC and linear regression models were developed for estimating E. coli densities on the basis of measured fecal coliform densities for six individual and six groups of surface-water sites. Regression models developed for the six individual surface-water sites and six groups of sites explain at least 89 percent of the variability in E. coli densities. The EC/FC ratios and regression models are site specific and make it possible to convert historic fecal coliform bacteria data to estimated E. coli densities for the selected sites. The EC/FC ratios can be used to estimate E. coli for any range of historical fecal coliform densities, and in some cases with less error than the regression models. The basin- and statewide regression models explained at least 93 percent of the variance and best represent the sites where a majority of the data used to develop the models were collected (Kansas and Little Arkansas Basins). Comparison of the current (2003) KDHE geometric-mean primary contact criterion for fecal coliform bacteria of 200 col/100 mL to the 2002 USEPA recommended geometric-mean criterion of 126 col/100 mL for E. coli results in an EC/FC ratio of 0.63. The geometric-mean EC/FC ratio for all sites except Rattlesnake Creek (site 21) is 0.77, indicating that considerably more than 63 percent of the fecal coliform is E. coli. This potentially could lead to more exceedances of the recommended E. coli criterion where the water now meets the current (2003) 200-col/100 mL fecal coliform criterion. In this report, turbidity was found to be a reliable estimator of bacteria densities. Regression models are provided for estimating fecal coliform and E. coli bacteria densities using continuous turbidity measurements. Prediction intervals also are provided to show the uncertainty associated with using the regression models. Eighty percent of all measured sample densities and individual turbidity-based estimates from the regression models were in agreement as to exceeding or not exceeding the criteria.
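
The criterion comparison above reduces to simple ratio arithmetic. A sketch computing the criterion ratio (126/200 = 0.63) and a geometric-mean EC/FC ratio from paired densities; the sample values are illustrative, not the report's data:

```python
import math

# The criterion ratio discussed above: the USEPA E. coli geometric-mean
# criterion (126 col/100 mL) over the KDHE fecal coliform criterion
# (200 col/100 mL).
criterion_ratio = 126 / 200   # 0.63

def geo_mean(values):
    """Geometric mean of positive values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Illustrative paired densities (col/100 mL) -- not the report's data.
fc = [250, 900, 120, 3000, 640]
ec = [210, 700, 100, 2400, 500]
ecfc = geo_mean([e / f for e, f in zip(ec, fc)])

# An EC/FC ratio above 0.63 means the E. coli criterion would be
# exceeded more often than the fecal coliform criterion.
print(round(ecfc, 3), ecfc > criterion_ratio)  # 0.806 True
```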

  15. Cryogen spray cooling: Effects of droplet size and spray density on heat removal.

    PubMed

    Pikkula, B M; Torres, J H; Tunnell, J W; Anvari, B

    2001-01-01

    Cryogen spray cooling (CSC) is an effective method to reduce or eliminate non-specific injury to the epidermis during laser treatment of various dermatological disorders. In previous CSC investigations, fuel injectors have been used to deliver the cryogen onto the skin surface. The objective of this study was to examine the cryogen atomization and heat removal characteristics of various cryogen delivery devices. Various cryogen delivery device types, including fuel injectors, atomizers, and a device currently used in clinical settings, were investigated. Cryogen mass was measured at the delivery device output orifice. Droplet size profiles for the various delivery devices were estimated by optically imaging the droplets in flight. Heat removal for the various delivery devices was estimated over a range of spraying distances by temperature measurements in a skin phantom used in conjunction with an inverse heat conduction model. A substantial range of mass outputs was measured for the cryogen delivery devices, while heat removal varied by less than a factor of two. Droplet profiling demonstrated differences in droplet size and spray density. Results of this study show that the variation in heat removal by different cryogen delivery devices is modest despite the relatively large differences in cryogen mass output and droplet size. A non-linear relationship between heat removal by the various devices and droplet size and spray density was observed. Copyright 2001 Wiley-Liss, Inc.

  16. Theoretical prediction of the impact of Auger recombination on charge collection from an ion track

    NASA Technical Reports Server (NTRS)

    Edmonds, Larry D.

    1991-01-01

    A recombination mechanism that significantly reduces charge collection from very dense ion tracks in silicon devices was postulated by Zoutendyk et al. The theoretical analysis presented here concludes that Auger recombination is such a mechanism: it is of marginal importance at the track densities produced by 270-MeV krypton, but of major importance for higher density tracks. The analysis shows that recombination loss is profoundly affected by track diffusion. As the track diffuses, the density and recombination rate decrease so fast that the linear density (number of electron-hole pairs per unit length) approaches a non-zero limiting value as t approaches infinity. Furthermore, the linear density is very nearly equal to this limiting value within a few picoseconds or less. When Auger recombination accompanies charge transport processes that have much longer time scales, it can be simulated by assigning a reduced linear energy transfer to the ion.

  17. 360-degrees profilometry using strip-light projection coupled to Fourier phase-demodulation.

    PubMed

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-11

    360-degree (360°) digitalization of three-dimensional (3D) solids using a projected light-strip is a well-established technique in academic and commercial profilometers. These profilometers project a light-strip over the solid being digitized while the solid is rotated a full revolution, or 360 degrees. A computer program then typically extracts the centroid of this light-strip, and one obtains the shape of the solid by triangulation. Here, instead of using intensity-based light-strip centroid estimation, we propose to use Fourier phase-demodulation for 360° solid digitalization. The advantage of Fourier demodulation over strip-centroid estimation is that the accuracy of phase-demodulation increases linearly with the fringe density, while in strip-light projection the centroid-estimation errors are independent of fringe density. We propose first to construct a carrier-frequency fringe-pattern by closely adding the individual light-strip images recorded while the solid is being rotated. Next, this high-density fringe-pattern is phase-demodulated using the standard Fourier technique. To test the feasibility of this Fourier demodulation approach, we have digitized two solids with increasing topographic complexity: a Rubik's cube and a plastic model of a human skull. According to our results, phase demodulation based on the Fourier technique is less noisy than triangulation based on centroid light-strip estimation. Moreover, Fourier demodulation also provides the amplitude of the analytic signal, which is valuable information for the visualization of surface details.
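
The phase-demodulation step can be illustrated with a toy synchronous-demodulation sketch (a close relative of the Fourier-transform technique the paper uses): mix the fringe signal down by the carrier frequency, low-pass over one carrier period, and take the angle. All signal parameters are illustrative:

```python
import cmath
import math

# Toy sketch of phase demodulation of a carrier-frequency fringe signal
# s(x) = cos(2*pi*f0*x + phi(x)). All parameters here are illustrative.
N, f0 = 512, 0.125                        # samples; carrier (cycles/sample)
phi = [0.002 * x for x in range(N)]       # slowly varying "true" phase ramp
s = [math.cos(2 * math.pi * f0 * x + phi[x]) for x in range(N)]

# Mix down: multiply by exp(-i*2*pi*f0*x), then low-pass by averaging over
# one carrier period (8 samples) to suppress the 2*f0 term.
mixed = [s[x] * cmath.exp(-2j * math.pi * f0 * x) for x in range(N)]
P = round(1 / f0)
lp = [sum(mixed[x:x + P]) / P for x in range(N - P)]
phase = [cmath.phase(z) for z in lp]

# The recovered phase tracks the true ramp, up to small ripple and a
# small offset from the averaging filter's delay.
err = max(abs(phase[x] - phi[x]) for x in range(50, N - P - 50))
print(err < 0.05)  # True
```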

  18. Effects of buffer size and shape on associations between the built environment and energy balance.

    PubMed

    James, Peter; Berrigan, David; Hart, Jaime E; Hipp, J Aaron; Hoehner, Christine M; Kerr, Jacqueline; Major, Jacqueline M; Oka, Masayoshi; Laden, Francine

    2014-05-01

    Uncertainty in the relevant spatial context may drive heterogeneity in findings on the built environment and energy balance. To estimate the effect of this uncertainty, we conducted a sensitivity analysis defining intersection and business densities and counts within different buffer sizes and shapes on associations with self-reported walking and body mass index. Linear regression results indicated that the scale and shape of buffers influenced study results and may partly explain the inconsistent findings in the built environment and energy balance literature. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Passive acoustic measurement of bedload grain size distribution using self-generated noise

    NASA Astrophysics Data System (ADS)

    Petrut, Teodor; Geay, Thomas; Gervaise, Cédric; Belleudy, Philippe; Zanker, Sebastien

    2018-01-01

    Monitoring sediment transport processes in rivers is of particular interest to engineers and scientists who assess the stability of rivers and hydraulic structures. Various methods for describing sediment transport processes have been proposed using conventional or surrogate measurement techniques. This paper addresses the passive acoustic monitoring of bedload transport in rivers, and especially the estimation of the bedload grain size distribution from self-generated noise. It discusses the feasibility of linking the shape of the acoustic signal spectrum to the bedload grain sizes involved in elastic impacts with the river bed, treated as a massive slab. The bedload grain size distribution is estimated by a regularized algebraic inversion scheme fed with the power spectral density of river noise estimated from one hydrophone. The inversion methodology relies upon a physical model that predicts the acoustic field generated by the collision between rigid bodies. Here we propose an analytic model of the acoustic energy spectrum generated by the impacts between a sphere and a slab. The model computes the power spectral density of bedload noise as a linear combination of analytic energy spectra weighted by the grain size distribution. The algebraic system of equations is then solved by least squares optimization with solution regularization. The inversion leads directly to an estimate of the bedload grain size distribution. The method was applied to real acoustic data from passive acoustic experiments performed on the Isère River in France. The inversion of in situ measured spectra yields good estimates of the grain size distribution, fairly close to those obtained with physical sampling instruments. These results illustrate the potential of the hydrophone technique as a standalone method that could provide high spatial and temporal resolution measurements of sediment transport in rivers.
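
The regularized algebraic inversion step can be sketched as Tikhonov-damped least squares: solve (AᵀA + λI)m = Aᵀd, where the columns of A are the per-grain-size energy spectra and m is the grain size distribution. The matrices below are illustrative toys, not the paper's acoustic model:

```python
# Toy sketch of regularized algebraic inversion: recover the weights m
# (a grain-size-distribution surrogate) from a "measured" spectrum
# d = A m_true by solving the Tikhonov-damped normal equations
# (A^T A + lam*I) m = A^T d. Matrices are illustrative.

def solve(M, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

A = [[1.0, 0.5, 0.2],     # each column: energy spectrum of one grain size
     [0.3, 1.0, 0.5],     # (rows: frequency bins)
     [0.1, 0.4, 1.0],
     [0.05, 0.2, 0.8]]
m_true = [0.2, 0.5, 0.3]
d = [sum(A[i][j] * m_true[j] for j in range(3)) for i in range(4)]

lam = 1e-6   # small damping; larger values trade data fit for stability
AtA = [[sum(A[k][i] * A[k][j] for k in range(4)) + (lam if i == j else 0.0)
        for j in range(3)] for i in range(3)]
Atd = [sum(A[k][i] * d[k] for k in range(4)) for i in range(3)]
m = solve(AtA, Atd)
print([round(v, 3) for v in m])  # approximately [0.2, 0.5, 0.3]
```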

  20. Density Estimation for New Solid and Liquid Explosives

    DTIC Science & Technology

    1977-02-17

    The group additivity approach was shown to be applicable to density estimation. The densities of approximately 180 explosives and related compounds of very diverse compositions were estimated, and almost all the estimates were quite reasonable. Of the 168 compounds for which direct comparisons could be made (see Table 6), 36.9% of the estimated densities were within 1% of the measured densities, 33.3% were within 1-2%, and 11.9% were within 2-3%.

  1. A comparison of battery testing protocols: Those used by the U.S. advanced battery consortium and those used in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, David C.; Christophersen, Jon P.; Bennett, Taylor

    Two testing protocols, QC/T 743 and those used by the U.S. Advanced Battery Consortium (USABC), were compared using cells based on LiFePO4/graphite chemistry. Differences in the protocols directly affected the data and the performance decline mechanisms deduced from the data. A change in capacity fade mechanism from linear-with-time to t^(1/2) was observed when the power density measurement was included in the QC/T 743 testing. The rate of resistance increase was linear with time using both protocols. Overall, the testing protocols produced very similar data when the testing conditions and metrics used to define performance were similar. The choice of depth of discharge and pulse width had a direct effect on estimated cell life. At greater percent depth of discharge (%DOD) and longer pulse width, the estimated life was shorter than at lower %DOD and shorter pulse width. This indicates that cells which were at the end of life based on the USABC protocol were not at end of life based on the QC/T 743 protocol by a large margin.
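
The two fade laws mentioned (linear-with-time versus t^(1/2)) can be distinguished by fitting both and comparing residuals. A sketch on synthetic data that follows a square-root law; all numbers are illustrative:

```python
import math

# Distinguishing the two capacity-fade laws mentioned above: fit
# y = a + b*t and y = a + b*sqrt(t) by simple least squares and compare
# residual sums of squares. The data are synthetic sqrt-law fade,
# purely illustrative.

def fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, rss

t = [4, 16, 36, 64, 100, 144, 196]            # weeks on test
cap = [100 - 1.5 * math.sqrt(w) for w in t]   # capacity (%), sqrt-law fade

_, _, rss_lin = fit(t, cap)
_, _, rss_sqrt = fit([math.sqrt(w) for w in t], cap)
print(rss_sqrt < rss_lin)  # True: the t**0.5 model fits this data better
```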

  2. Predicting stem borer density in maize using RapidEye data and generalized linear models

    NASA Astrophysics Data System (ADS)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha-1 as compared to the global average of 6.06 t ha-1 due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated between 12% and 21% of the total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) test site on the 9th of December 2014 and on the 27th of January 2015, and for Machakos (eastern Kenya) a RapidEye image was acquired on the 3rd of January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid-growing season in December and early January at the two study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
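
The two validation statistics used above are straightforward to compute from leave-one-out predictions: RMSE, and RPD, the ratio of the observations' standard deviation to the RMSE. A sketch with illustrative data:

```python
import math

# The two validation statistics used above, for cross-validated
# predictions: RMSE, and RPD = standard deviation of observations / RMSE
# (larger RPD indicates a more useful model). Data are illustrative.

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rpd(obs, pred):
    mean = sum(obs) / len(obs)
    sd = math.sqrt(sum((o - mean) ** 2 for o in obs) / (len(obs) - 1))
    return sd / rmse(obs, pred)

obs  = [2.0, 5.0, 9.0, 4.0, 7.0]   # observed larva densities per field
pred = [2.3, 4.6, 8.8, 4.4, 6.9]   # leave-one-out predictions

print(round(rmse(obs, pred), 3), round(rpd(obs, pred), 2))  # 0.303 8.91
```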

  3. Study on magnetic circuit of moving magnet linear compressor

    NASA Astrophysics Data System (ADS)

    Xia, Ming; Chen, Xiaoping; Chen, Jun

    2015-05-01

    Moving magnet linear compressors are widely used in tactical miniature Stirling cryocoolers. The magnetic circuit of the LFC3600 moving magnet linear compressor, manufactured by the Kunming Institute of Physics, was studied here using three approaches: theoretical analysis, numerical calculation, and experimental study. Formulas for the magnetic reluctance and magnetomotive force were derived in the theoretical analysis model. The magnetic flux density and magnetic flux lines were analyzed in the numerical model, and a testing method was designed to measure the magnetic flux density of the linear compressor. When the piston of the motor was in the equilibrium position, the magnetic flux density reached its maximum value of 0.27 T. The measured results agreed closely with those from the numerical analysis.

  4. Charge density on thin straight wire, revisited

    NASA Astrophysics Data System (ADS)

    Jackson, J. D.

    2000-09-01

    The question of the equilibrium linear charge density on a charged straight conducting "wire" of finite length as its cross-sectional dimension becomes vanishingly small relative to the length is revisited in our didactic presentation. We first consider the wire as the limit of a prolate spheroidal conductor with semi-minor axis a and semi-major axis c when a/c<<1. We then treat an azimuthally symmetric straight conductor of length 2c and variable radius r(z) whose scale is defined by a parameter a. A procedure is developed to find the linear charge density λ(z) as an expansion in powers of 1/Λ, where Λ≡ln(4c²/a²), beginning with a uniform line charge density λ0. We show, for this rather general wire, that in the limit Λ>>1 the linear charge density becomes essentially uniform, but that the tiny nonuniformity (of order 1/Λ) is sufficient to produce a tangential electric field (of order Λ⁰) that cancels the zeroth-order field that naively seems to belie equilibrium. We specialize to a right circular cylinder and obtain the linear charge density explicitly, correct to order 1/Λ² inclusive, and also the capacitance of a long isolated charged cylinder, a result anticipated in the published literature 37 years ago. The results for the cylinder are compared with published numerical computations. The second-order correction to the charge density is calculated numerically for a sampling of other shapes to show that the details of the distribution for finite 1/Λ vary with the shape, even though density becomes constant in the limit Λ→∞. We give a second method of finding the charge distribution on the cylinder, one that approximates the charge density by a finite polynomial in z² and requires the solution of a coupled set of linear algebraic equations. Perhaps the most striking general observation is that the approach to uniformity as a/c→0 is extremely slow.

  5. Accelerated Microstructure Imaging via Convex Optimization (AMICO) from diffusion MRI data.

    PubMed

    Daducci, Alessandro; Canales-Rodríguez, Erick J; Zhang, Hui; Dyrby, Tim B; Alexander, Daniel C; Thiran, Jean-Philippe

    2015-01-15

    Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide a biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data which, in practice, demand the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which, then, can be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; however, the AMICO framework is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Structural study of gold clusters.

    PubMed

    Xiao, Li; Tollberg, Bethany; Hu, Xiankui; Wang, Lichang

    2006-03-21

    Density functional theory (DFT) calculations were carried out to study gold clusters of up to 55 atoms. Between the linear and zigzag monoatomic Au nanowires, the zigzag nanowires were found to be more stable. Furthermore, linear Au nanowires of up to 2 nm are formed by slightly stretched Au dimers. These findings suggest that a substantial Peierls distortion exists in those structures. Planar geometries of Au clusters were found to be the global minima up to a cluster size of 13 atoms. A quantitative correlation is provided between various properties of Au clusters and their structure and size. The relative stability of selected clusters was also estimated by the Sutton-Chen potential, and the result disagrees with that obtained from the DFT calculations. This suggests that a modification of the Sutton-Chen potential, such as obtaining new parameters, is needed before it can be used to search for the global minima of larger Au clusters.

  7. Estimation of group means when adjusting for covariates in generalized linear models.

    PubMed

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator of the group means with the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
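
The bias described above is an instance of Jensen's inequality: with a non-linear inverse link, the response at the mean covariate differs from the mean response across the covariate distribution. A sketch for a logistic model with illustrative coefficients:

```python
import math

# The issue described above, sketched for a logistic model: the response
# at the mean covariate (what most software reports) differs from the
# mean response over the covariate distribution. Coefficients and
# covariate values are illustrative.

def expit(u):
    return 1.0 / (1.0 + math.exp(-u))

b0, b1 = -2.0, 0.8                 # illustrative fitted coefficients
x = [0.0, 1.0, 2.0, 3.0, 6.0]      # baseline covariate values in the group

at_mean_x = expit(b0 + b1 * (sum(x) / len(x)))                  # "model-based" mean
mean_response = sum(expit(b0 + b1 * xi) for xi in x) / len(x)   # consistent estimate

print(round(at_mean_x, 3), round(mean_response, 3))  # 0.48 0.459: they differ
```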

  8. Estimating monotonic rates from biological data using local linear regression.

    PubMed

    Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R

    2017-03-01

    Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
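
The core idea can be sketched in a few lines (here in Python; the LoLinR package itself is in R and uses more refined weighting and window-selection criteria): fit an OLS slope in every fixed-width window and keep the window whose fit is most linear:

```python
# Simplified sketch of the local linear regression idea: fit an OLS
# slope in every contiguous window of fixed width and keep the window
# with the highest R^2. (LoLinR itself offers more refined criteria.)

def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sxx, sxy * sxy / (sxx * syy)   # slope, R^2

def best_local_slope(t, y, width):
    fits = [(ols(t[i:i + width], y[i:i + width]), i)
            for i in range(len(t) - width + 1)]
    (slope, r2), start = max(fits, key=lambda f: f[0][1])
    return slope, r2, start

# Illustrative trace: a non-linear early transient, then a clean linear
# decline of slope -2 from t = 5 onward.
t = list(range(15))
y = [30 - 0.2 * (ti - 5) ** 2 if ti < 5 else 30 - 2 * (ti - 5) for ti in t]
slope, r2, start = best_local_slope(t, y, width=6)
print(round(slope, 2), start)  # -2.0 5
```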

  9. Resolvability of regional density structure and the road to direct density inversion - a principal-component approach to resolution analysis

    NASA Astrophysics Data System (ADS)

    Płonka, Agnieszka; Fichtner, Andreas

    2017-04-01

    Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown, since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities which may not necessarily be correct. Our goal is to assess whether 3-D density structure is resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms, concluding that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes; while this can significantly bias velocity and Q estimates, it also suggests that seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. We apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations. The objective is to find a principal kernel which maximizes the sensitivity to density, potentially allowing density to be resolved as independently as possible. We find that surface (mostly Rayleigh) waves have significant sensitivity to density, and that the trade-off with velocity is negligible. We also show the preliminary results of the inversion.
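
    A toy sketch of the principal-component idea (the kernels below are invented random vectors with a shared component; real sensitivity kernels come from adjoint simulations): the eigenvalues of the kernels' correlation matrix indicate how linearly independent the parameters are.

```python
import numpy as np

rng = np.random.default_rng(0)
npts = 500  # grid points of the (flattened) sensitivity kernels

# Toy kernels: vp and vs kernels strongly correlated through a shared
# component, the density kernel carrying a larger independent part
base = rng.normal(size=npts)
K_vp = base + 0.3 * rng.normal(size=npts)
K_vs = base + 0.3 * rng.normal(size=npts)
K_rho = 0.5 * base + rng.normal(size=npts)

K = np.vstack([K_vp, K_vs, K_rho])
C = np.corrcoef(K)                 # 3x3 correlation matrix of the kernels
evals, evecs = np.linalg.eigh(C)   # principal components

# A near-zero eigenvalue would flag linearly dependent kernels; clearly
# positive eigenvalues mean the parameters are independently resolvable
# to some degree.
print(evals)
```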

  10. Nonlinear problems in data-assimilation : Can synchronization help?

    NASA Astrophysics Data System (ADS)

    Tribbia, J. J.; Duane, G. S.

    2009-12-01

    Over the past several years, operational weather centers have initiated ensemble prediction and assimilation techniques to estimate the error covariance of forecasts in the short and the medium range. The ensemble techniques used are based on linear methods. This technique has been shown to be a useful indicator of skill in the linear range, where forecast errors are small relative to climatological variance. While this advance has been impressive, there are still ad hoc aspects of its use in practice, such as the need for covariance inflation, which are troubling. Furthermore, to be of utility in the nonlinear range, an ensemble assimilation and prediction method must be capable of giving probabilistic information for situations where the probability density forecast becomes multi-modal. A prototypical, simplest example of such a situation is the planetary-wave regime transition, where the pdf is bimodal. Our recent research shows how the inconsistencies and extensions of linear methodology can be consistently treated using the paradigm of synchronization, which views the problems of assimilation and forecasting as that of optimizing the forecast model state with respect to the future evolution of the atmosphere.

  11. Decentralization, stabilization, and estimation of large-scale linear systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Vukcevic, M. B.

    1976-01-01

    In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding a design of a single estimator for the overall system.
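
    A two-subsystem scalar toy example (all numbers invented, not from the paper) illustrates the decentralized stabilization idea: local feedback places each decoupled pole, and weak interconnection leaves the overall closed loop stable.

```python
import math

# Two scalar subsystems  x_i' = a_i x_i + u_i  coupled through e_ij x_j.
# Both subsystems are unstable on their own (positive a_i).
a = [1.0, 1.5]          # local (unstable) dynamics
e12, e21 = 0.1, 0.2     # weak interconnection terms
desired = [-2.0, -2.0]  # target decoupled poles

# Local feedback u_i = -k_i x_i places each *decoupled* pole: a_i - k_i = desired_i
k = [ai - di for ai, di in zip(a, desired)]

# Closed-loop interconnected system matrix
A_cl = [[a[0] - k[0], e12],
        [e21, a[1] - k[1]]]

# Eigenvalues of the 2x2 matrix via the characteristic polynomial
tr = A_cl[0][0] + A_cl[1][1]
det = A_cl[0][0] * A_cl[1][1] - A_cl[0][1] * A_cl[1][0]
disc = tr * tr - 4.0 * det
eigs = [(tr + math.sqrt(disc)) / 2.0, (tr - math.sqrt(disc)) / 2.0]
print(eigs)  # both negative: coupling too weak to destabilize
```

    This is the tolerance property the abstract describes: the decentralized design is carried out per subsystem, and stability survives a range of coupling perturbations.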

  12. Prediction of changes in important physical parameters during composting of separated animal slurry solid fractions.

    PubMed

    Chowdhury, Md Albarune; de Neergaard, Andreas; Jensen, Lars Stoumann

    2014-01-01

    Solid-liquid separation of animal slurry, with solid fractions used for composting, has gained interest recently. However, efficient composting of separated animal slurry solid fractions (SSFs) requires a better understanding of the process dynamics in terms of important physical parameters and their interacting physical relationships in the composting matrix. Here we monitored moisture content, bulk density, particle density and air-filled porosity (AFP) during composting of SSF collected from four commercially available solid-liquid separators. Composting was performed in laboratory-scale reactors for 30 days (d) under forced aeration and measurements were conducted on the solid samples at the beginning of composting and at 10-d intervals during composting. The results suggest that differences in initial physical properties of SSF influence the development of compost maximum temperatures (40-70 °C). Depending on SSF, total wet mass and volume losses (expressed as % of initial value) were up to 37% and 34%, respectively. After 30 d of composting, relative losses of total solids varied from 17.9% to 21.7% and of volatile solids (VS) from 21.3% to 27.5%, depending on SSF. VS losses in all composts showed different dynamics as described by the first-order kinetic equation. The estimated component particle density of 1441 kg m⁻³ for VS and 2625 kg m⁻³ for fixed solids can be used to improve estimates of AFP for SSF within the range tested. The linear relationship between wet bulk density and AFP reported by previous researchers held true for SSF.
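
    The component particle densities imply a simple mixing rule; the AFP balance below is standard compost porosity bookkeeping (the abstract does not give the exact formula used, so treat both the formula choice and the example inputs as illustrative assumptions):

```python
def particle_density(vs_frac, rho_vs=1441.0, rho_fs=2625.0):
    """Particle density of the solids (kg/m^3) from the volatile-solids
    mass fraction, by harmonic (mass-weighted) mixing of components."""
    return 1.0 / (vs_frac / rho_vs + (1.0 - vs_frac) / rho_fs)

def air_filled_porosity(wet_bulk_density, moisture_frac, rho_particle,
                        rho_water=1000.0):
    """AFP = 1 - volume fraction of solids - volume fraction of water."""
    solids = wet_bulk_density * (1.0 - moisture_frac) / rho_particle
    water = wet_bulk_density * moisture_frac / rho_water
    return 1.0 - solids - water

# example: 80% VS solids, wet bulk density 500 kg/m^3, 70% moisture
rho_p = particle_density(0.8)
afp = air_filled_porosity(500.0, 0.7, rho_p)
```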

  13. Cities, traffic, and CO2: A multidecadal assessment of trends, drivers, and scaling relationships

    PubMed Central

    Gately, Conor K.; Hutyra, Lucy R.; Sue Wing, Ian

    2015-01-01

    Emissions of CO2 from road vehicles were 1.57 billion metric tons in 2012, accounting for 28% of US fossil fuel CO2 emissions, but the spatial distributions of these emissions are highly uncertain. We develop a new emissions inventory, the Database of Road Transportation Emissions (DARTE), which estimates CO2 emitted by US road transport at a resolution of 1 km annually for 1980–2012. DARTE reveals that urban areas are responsible for 80% of on-road emissions growth since 1980 and for 63% of total 2012 emissions. We observe nonlinearities between CO2 emissions and population density at broad spatial/temporal scales, with total on-road CO2 increasing nonlinearly with population density, rapidly up to 1,650 persons per square kilometer and slowly thereafter. Per capita emissions decline as density rises, but at markedly varying rates depending on existing densities. We make use of DARTE’s bottom-up construction to highlight the biases associated with the common practice of using population as a linear proxy for disaggregating national- or state-scale emissions. Comparing DARTE with existing downscaled inventories, we find biases of 100% or more in the spatial distribution of urban and rural emissions, largely driven by mismatches between inventory downscaling proxies and the actual spatial patterns of vehicle activity at urban scales. Given cities’ dual importance as sources of CO2 and an emerging nexus of climate mitigation initiatives, high-resolution estimates such as DARTE are critical both for accurately quantifying surface carbon fluxes and for verifying the effectiveness of emissions mitigation efforts at urban scales. PMID:25847992

  14. Estimation of ΔR/R values by a benchmark study of the Mössbauer isomer shifts for Ru and Os complexes using relativistic DFT calculations

    NASA Astrophysics Data System (ADS)

    Kaneko, Masashi; Yasuhara, Hiroki; Miyashita, Sunao; Nakashima, Satoru

    2017-11-01

    The present study applies all-electron relativistic DFT calculations with the Douglas-Kroll-Hess (DKH) Hamiltonian to ten sets each of Ru and Os compounds. We perform a benchmark investigation of three density functionals (BP86, B3LYP and B2PLYP) using the segmented all-electron relativistically contracted (SARC) basis set against the experimental Mössbauer isomer shifts for the 99Ru and 189Os nuclides. Geometry optimizations at the BP86 level of theory locate each structure in a local minimum. We calculate the contact density from the wavefunction obtained by a single-point calculation. All functionals show a good linear correlation with the experimental isomer shifts for both 99Ru and 189Os; the B3LYP functional in particular gives a stronger correlation than BP86 and B2PLYP. A comparison of contact densities between the SARC and well-tempered (WTBS) basis sets indicates that numerical convergence of the contact density cannot be obtained, but that the reproducibility is not very sensitive to the choice of basis set. We also estimate the values of ΔR/R, an important nuclear constant, for the 99Ru and 189Os nuclides from the benchmark results. The sign of the calculated ΔR/R values is consistent with the predicted data for 99Ru and 189Os. At the B3LYP level with the SARC basis set, we obtain ΔR/R values of 2.35×10⁻⁴ for 99Ru and -0.20×10⁻⁴ for 189Os (36.2 keV).

  15. Monitoring diesel particulate matter and calculating diesel particulate densities using Grimm model 1.109 real-time aerosol monitors in underground mines.

    PubMed

    Kimbal, Kyle C; Pahler, Leon; Larson, Rodney; VanDerslice, Jim

    2012-01-01

    Currently, there is no Mine Safety and Health Administration (MSHA)-approved sampling method that provides real-time results for ambient concentrations of diesel particulates. This study investigated whether a commercially available aerosol spectrometer, the Grimm Portable Aerosol Spectrometer Model 1.109, could be used during underground mine operations to provide accurate real-time diesel particulate data relative to MSHA-approved cassette-based sampling methods. A secondary objective was to estimate size-specific diesel particle densities to potentially improve the diesel particulate concentration estimates obtained with the aerosol monitor. Concurrent sampling was conducted during underground metal mine operations using six duplicate diesel particulate cassettes, according to the MSHA-approved method, and two identical Grimm Model 1.109 instruments. Linear regression was used to develop adjustment factors relating the Grimm results to the average of the cassette results. Statistical models using the Grimm data produced predicted diesel particulate concentrations that correlated highly with the time-weighted average cassette results (R² = 0.86, 0.88). Size-specific diesel particulate densities were not constant over the range of particle diameters observed. The variance of the calculated diesel particulate densities by particle diameter size supports the current understanding that diesel emissions are a mixture of particulate aerosols and a complex host of gases and vapors not limited to elemental and organic carbon. Finally, diesel particulate concentrations measured by the Grimm Model 1.109 can be adjusted to provide sufficiently accurate real-time air monitoring data for an underground mining environment.
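
    Deriving a linear adjustment factor of this kind is ordinary least-squares calibration; the paired values below are invented for illustration, not the study's data:

```python
# Paired observations (made-up values, ug/m^3): real-time monitor vs TWA cassette
grimm =    [120.0, 180.0, 240.0, 310.0, 400.0, 470.0]
cassette = [100.0, 150.0, 210.0, 260.0, 330.0, 400.0]

n = len(grimm)
mx = sum(grimm) / n
my = sum(cassette) / n
sxx = sum((x - mx) ** 2 for x in grimm)
sxy = sum((x - mx) * (y - my) for x, y in zip(grimm, cassette))
slope = sxy / sxx
intercept = my - slope * mx

def adjust(reading):
    """Map a raw monitor reading to a cassette-equivalent concentration."""
    return slope * reading + intercept

# R^2 of the calibration
ss_res = sum((y - adjust(x)) ** 2 for x, y in zip(grimm, cassette))
ss_tot = sum((y - my) ** 2 for y in cassette)
r2 = 1.0 - ss_res / ss_tot
```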

  16. Towards time-dependent current-density-functional theory in the non-linear regime

    NASA Astrophysics Data System (ADS)

    Escartín, J. M.; Vincendon, M.; Romaniello, P.; Dinh, P. M.; Reinhard, P.-G.; Suraud, E.

    2015-02-01

    Time-Dependent Density-Functional Theory (TDDFT) is a well-established theoretical approach to describe and understand irradiation processes in clusters and molecules. However, within the so-called adiabatic local density approximation (ALDA) to the exchange-correlation (xc) potential, TDDFT can show insufficiencies, particularly in violently dynamical processes. This is because within ALDA the xc potential is instantaneous and is a local functional of the density, which means that this approximation neglects memory effects and long-range effects. A way to go beyond ALDA is to use Time-Dependent Current-Density-Functional Theory (TDCDFT), in which the basic quantity is the current density rather than the density as in TDDFT. This has been shown to offer an adequate account of dissipation in the linear domain when the Vignale-Kohn (VK) functional is used. Here, we go beyond the linear regime and explore this formulation in the time domain. In this case, the equations become very involved, making the computation out of reach; we hence propose an approximation to the VK functional which allows us to calculate the dynamics in real time while keeping most of the physics described by the VK functional. We apply this formulation to the calculation of the time-dependent dipole moment of Ca, Mg and Na2. Our results show trends similar to what was previously observed in model systems or within linear response. In the non-linear domain, our results show that relaxation times do not decrease with increasing deposited excitation energy, which sets some limitations to the practical use of TDCDFT in such a domain of excitations.

  17. Towards time-dependent current-density-functional theory in the non-linear regime.

    PubMed

    Escartín, J M; Vincendon, M; Romaniello, P; Dinh, P M; Reinhard, P-G; Suraud, E

    2015-02-28

    Time-Dependent Density-Functional Theory (TDDFT) is a well-established theoretical approach to describe and understand irradiation processes in clusters and molecules. However, within the so-called adiabatic local density approximation (ALDA) to the exchange-correlation (xc) potential, TDDFT can show insufficiencies, particularly in violently dynamical processes. This is because within ALDA the xc potential is instantaneous and is a local functional of the density, which means that this approximation neglects memory effects and long-range effects. A way to go beyond ALDA is to use Time-Dependent Current-Density-Functional Theory (TDCDFT), in which the basic quantity is the current density rather than the density as in TDDFT. This has been shown to offer an adequate account of dissipation in the linear domain when the Vignale-Kohn (VK) functional is used. Here, we go beyond the linear regime and explore this formulation in the time domain. In this case, the equations become very involved, making the computation out of reach; we hence propose an approximation to the VK functional which allows us to calculate the dynamics in real time while keeping most of the physics described by the VK functional. We apply this formulation to the calculation of the time-dependent dipole moment of Ca, Mg and Na2. Our results show trends similar to what was previously observed in model systems or within linear response. In the non-linear domain, our results show that relaxation times do not decrease with increasing deposited excitation energy, which sets some limitations to the practical use of TDCDFT in such a domain of excitations.

  18. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model for growth of the microalgae Botryococcus Braunii sp by the least-squares method. The Monod equation is non-linear, but it can be transformed into a linear form and solved by least-squares linear regression. Alternatively, the Gauss-Newton method solves the non-linear least-squares problem directly, obtaining the Monod parameter values by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for microalgae Botryococcus Braunii sp can be estimated by either approach; however, the parameter values obtained by the non-linear least-squares method are more accurate than those from the linear least-squares method, since the non-linear method achieves a smaller SSE.
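
    The two fits described above can be sketched as follows (synthetic data generated from invented "true" parameters; for the two-parameter case the Gauss-Newton normal equations can be solved directly):

```python
# Synthetic growth-rate data: mu = mu_max*S/(Ks+S) + small errors,
# generated with invented true values mu_max=1.2, Ks=2.5
S  = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]                        # substrate conc.
mu = [0.21, 0.322857, 0.548333, 0.728462, 0.934286, 1.022838]

def sse(mu_max, Ks):
    """Sum of squared errors in the original (untransformed) space."""
    return sum((m - mu_max * s / (Ks + s)) ** 2 for s, m in zip(S, mu))

# --- Linearized fit (Lineweaver-Burk): 1/mu = (Ks/mu_max)(1/S) + 1/mu_max
x = [1.0 / s for s in S]
y = [1.0 / m for m in mu]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
mu_max_lin, Ks_lin = 1.0 / a, b / a

# --- Gauss-Newton refinement of the non-linear least-squares problem,
#     started from the linearized estimates
mu_max, Ks = mu_max_lin, Ks_lin
for _ in range(50):
    r = [m - mu_max * s / (Ks + s) for s, m in zip(S, mu)]       # residuals
    J = [(s / (Ks + s), -mu_max * s / (Ks + s) ** 2) for s in S]  # Jacobian
    # solve the 2x2 normal equations  (J^T J) d = J^T r
    a11 = sum(j0 * j0 for j0, _ in J)
    a12 = sum(j0 * j1 for j0, j1 in J)
    a22 = sum(j1 * j1 for _, j1 in J)
    g1 = sum(j0 * ri for (j0, _), ri in zip(J, r))
    g2 = sum(j1 * ri for (_, j1), ri in zip(J, r))
    det = a11 * a22 - a12 * a12
    mu_max += (a22 * g1 - a12 * g2) / det
    Ks += (a11 * g2 - a12 * g1) / det
```

    The linearized fit weights observations unevenly after the 1/mu transform, which is why the non-linear fit reaches a smaller SSE in the original space.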

  19. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    NASA Astrophysics Data System (ADS)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

    Earth's normal-mode spectra are crucial to studying the long-wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for the big events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle- and core-sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core-sensitive mode (13S2). This approach explores the parameter space efficiently without any need for regularization and finds the structure coefficients which best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onwards. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.

  20. CONSEQUENCES OF NON-LINEAR DENSITY EFFECTS ON BUOYANCY AND PLUME BEHAVIOR

    EPA Science Inventory

    Aquatic plumes, as turbulent streams, grow by entraining ambient water. Buoyant plumes rise and dense ones sink, but non-linear kinetic effects can reverse the buoyant force in mid-phenomenon. The class of nascent-density plumes begins as buoyant, upwardly accelerating plumes tha...

  1. A fast estimator for the bispectrum and beyond - a practical method for measuring non-Gaussianity in 21-cm maps

    NASA Astrophysics Data System (ADS)

    Watkinson, Catherine A.; Majumdar, Suman; Pritchard, Jonathan R.; Mondal, Rajesh

    2017-12-01

    In this paper, we establish the accuracy and robustness of a fast estimator for the bispectrum - the 'FFT-bispectrum estimator'. The implementation of the estimator presented here offers speed and simplicity benefits over a direct-measurement approach. We also generalize the derivation so that it may easily be applied to polyspectra of any order, such as the trispectrum, at the cost of only a handful of fast Fourier transforms (FFTs). All lower-order statistics can also be calculated simultaneously for little extra cost. To test the estimator, we make use of a non-linear density field, and for a more strongly non-Gaussian test case, we use a toy model of reionization in which the ionized bubbles at a given redshift are all of equal size and are randomly distributed. Our tests find that the FFT-estimator remains accurate over a wide range of k, and so should be extremely useful for the analysis of 21-cm observations. The speed of the FFT-bispectrum estimator makes it suitable for sampling applications, such as Bayesian inference. The algorithm we describe should prove valuable in the analysis of simulations and observations, and while we apply it within the field of cosmology, the estimator is useful in any field that deals with non-Gaussian data.
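
    A 1-D sketch of the FFT trick (the paper's estimator works on 3-D fields; the same idea applies with fftn and |k|-shell masks, and the bin choices here are invented): summing the real-space product of bin-filtered fields picks out exactly the closed triangles k1+k2+k3=0.

```python
import numpy as np

def shell_mask(N, kmin, kmax):
    """1 for integer wavenumbers with kmin <= |k| <= kmax, else 0."""
    k = np.abs(np.fft.fftfreq(N, d=1.0 / N))
    return ((k >= kmin) & (k <= kmax)).astype(float)

def fft_bispectrum(delta, m1, m2, m3):
    """Sum of D(k1)D(k2)D(k3) over closed triangles (k1+k2+k3=0), each k_i
    restricted to its bin, plus the number of such triangles.
    Cost: a handful of FFTs instead of a loop over all mode pairs."""
    N = len(delta)
    D = np.fft.fft(delta)
    I1, I2, I3 = (np.fft.ifft(D * m) for m in (m1, m2, m3))  # filtered fields
    C1, C2, C3 = (np.fft.ifft(m) for m in (m1, m2, m3))      # triangle counting
    signal = (N ** 2 * np.sum(I1 * I2 * I3)).real
    ntri = round((N ** 2 * np.sum(C1 * C2 * C3)).real)
    return signal, ntri

# usage: bin-averaged bispectrum estimate for one triple of |k| bins
rng = np.random.default_rng(1)
delta = rng.normal(size=32)
m1, m2, m3 = shell_mask(32, 2, 4), shell_mask(32, 2, 4), shell_mask(32, 3, 6)
signal, ntri = fft_bispectrum(delta, m1, m2, m3)
B_hat = signal / ntri
```

    The closure constraint is enforced automatically because the sum over x of exp(i(k1+k2+k3)x) vanishes unless the triangle closes.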

  2. The behaviour of platelets in natural diamonds and the development of a new mantle thermometer

    NASA Astrophysics Data System (ADS)

    Speich, L.; Kohn, S. C.; Bulanova, G. P.; Smith, C. B.

    2018-05-01

    Platelets are one of the most common defects occurring in natural diamonds, but their behaviour has not previously been well understood. Recent technical advances, and a much improved understanding of the correct interpretation of the main infrared (IR) feature associated with platelets (Speich et al. 2017), facilitated a systematic study of platelets in 40 natural diamonds. Three different types of platelet behaviour were identified. Regular diamonds show linear correlations between B-centre concentration and platelet density, and between platelet size and platelet density. Irregular diamonds display reduced platelet density due to platelet breakdown, anomalously large or small platelets and a larger platelet size distribution; these features are indicative of high mantle storage temperatures. Finally, a previously unreported category of subregular diamonds is defined. These diamonds experienced low mantle residence temperatures and show smaller than expected platelets. Combining the systematic variation in platelet density with temperatures of mantle storage, determined by nitrogen aggregation, we can demonstrate that platelet degradation proceeds at a predictable rate. Thus, in platelet-bearing diamonds where N aggregation is complete, an estimate of annealing temperature can now be made for the first time.

  3. Pairs of galaxies in low density regions of a combined redshift catalog

    NASA Technical Reports Server (NTRS)

    Charlton, Jane C.; Salpeter, Edwin E.

    1990-01-01

    The distributions of projected separations and radial velocity differences of pairs of galaxies in the CfA and Southern Sky Redshift Survey (SSRS) redshift catalogs are examined. The authors focus on pairs that fall in low density environments rather than in clusters or large groups. The projected separation distribution is nearly flat, while uncorrelated galaxies would have given a distribution rising linearly with r_p. There is no break in this curve even below 50 kpc, the minimum halo size consistent with measured galaxy rotation curves. The significant number of pairs at small separations is inconsistent with the N-body result that galaxies with overlapping halos will rapidly merge, unless there are significant amounts of matter distributed out to a few hundred kpc of the galaxies. This dark matter may either be in distinct halos or more loosely distributed. Large halos would allow pairs at initially large separations to head toward merger, replenishing the distribution at small separations. In the context of this model, the authors estimate that roughly 10 to 25 percent of these low density galaxies are the product of a merger, compared with the elliptical/S0 fraction of 18 percent observed in low density regions of the sample.

  4. Breast density estimation from high spectral and spatial resolution MRI

    PubMed Central

    Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.

    2016-01-01

    Abstract. A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists’ breast imaging-reporting and data system (BI-RADS) density ratings. A correlation coefficient of 0.91 (p<0.0001) was obtained between left and right breast density estimations. An interclass correlation coefficient of 0.99 (p<0.0001) indicated high reliability for the inter-user variability of the HiSS-based breast density estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists’ BI-RADS ratings. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be beneficial in programs requiring breast density, such as breast cancer risk assessment and monitoring the effects of therapy. PMID:28042590

  5. Pore cross-section area on predicting elastic properties of trabecular bovine bone for human implants.

    PubMed

    Maciel, Alfredo; Presbítero, Gerardo; Piña, Cristina; del Pilar Gutiérrez, María; Guzmán, José; Munguía, Nadia

    2015-01-01

    A clear understanding of what determines the mechanical properties of bone has not yet been achieved. To estimate the mechanical properties of bone for implants, pore cross-section area, calcium content, and apparent density were measured in trabecular bone samples intended for human implants. Samples of fresh and defatted bone tissue, extracted from one-year-old bovines, were cut in the longitudinal and transversal orientations of the trabeculae. Pore cross-section area was measured with an image analyzer, and compression tests were conducted on rectangular prisms. Elastic modulus shows a linear trend as a function of pore cross-section area, calcium content and apparent density regardless of trabecular orientation. The best variable for estimating the elastic modulus of trabecular bone for implants was pore cross-section area, and on the basis of the measured mechanical properties the Nukbone process is considered appropriate for marrow extraction in trabecular bone for implantation purposes. The stress-strain curves show that defatted bone is stiffer than fresh bone. The number of pores as a function of pore cross-section area shows an exponential decay consistent across all samples; these curves are also useful for predicting the elastic properties of trabecular samples from young bovines for implants.

  6. Humpback whale-generated ambient noise levels provide insight into singers' spatial densities.

    PubMed

    Seger, Kerri D; Thode, Aaron M; Urbán-R, Jorge; Martínez-Loustalot, Pamela; Jiménez-López, M Esther; López-Arzate, Diana

    2016-09-01

    Baleen whale vocal activity can be the dominant underwater ambient noise source for certain locations and seasons. Previous wind-driven ambient-noise formulations have been adjusted to model ambient noise levels generated by random distributions of singing humpback whales in ocean waveguides and have been combined into a single model. This theoretical model predicts that changes in ambient noise levels with respect to fractional changes in singer population (defined as the noise "sensitivity") are relatively unaffected by the source level distributions and song spectra of individual humpback whales (Megaptera novaeangliae). However, the noise "sensitivity" does depend on frequency and on how the singers' spatial density changes with population size. The theoretical model was tested by comparing visual line transect surveys with bottom-mounted passive acoustic data collected during the 2013 and 2014 humpback whale breeding seasons off Los Cabos, Mexico. A generalized linear model (GLM) estimated the noise "sensitivity" across multiple frequency bands. Comparing the GLM estimates with the theoretical predictions suggests that humpback whales tend to maintain relatively constant spacing between one another while singing, but that individual singers either slightly increase their source levels or song duration, or cluster more tightly as the singing population increases.

  7. The Effect of Petrographic Characteristics on Engineering Properties of Conglomerates from Famenin Region, Northeast of Hamedan, Iran

    NASA Astrophysics Data System (ADS)

    Khanlari, G. R.; Heidari, M.; Noori, M.; Momeni, A.

    2016-07-01

    To assess the relationship between engineering characteristics and petrographic features, conglomerate samples of the Qom formation from the Famenin region in the northeast of Hamedan province were studied. Samples were tested in the laboratory to determine the uniaxial compressive strength, point load strength index, modulus of elasticity, porosity, and dry and saturated densities. To determine petrographic features and textural and mineralogical parameters, thin sections of the samples were prepared and studied. The results show that the effect of textural characteristics on the engineering properties of the conglomerates appears to be more important than that of mineralogical composition. Packing proximity, packing density, grain shape, mean grain size, and cement and matrix frequency are the textural features with a significant effect on the physical and mechanical properties of the studied conglomerates. In this study, predictive statistical relationships were developed to estimate the physical and mechanical properties of the rocks from the petrographic features. Furthermore, multivariate linear regression was used in four different steps comprising various combinations of petrographic characteristics for each engineering parameter. Finally, the best equations were suggested for estimating the engineering properties of the Qom formation conglomerates.

  8. Demolition waste generation for development of a regional management chain model.

    PubMed

    Bernardo, Miguel; Gomes, Marta Castilho; de Brito, Jorge

    2016-03-01

    Even though construction and demolition waste (CDW) is the bulkiest waste stream, its estimation and characterization in specific regions still face major difficulties. Therefore new methods are required, especially when it comes to making predictions limited to small areas, such as counties. This paper proposes one such method, which makes use of data collected from real demolition works and statistical information on the geographical area under study. Based on a correlation analysis between the demolition waste estimates and indicators such as population density, buildings ageing index, buildings density and land occupation type, relationships are established that can be used to determine demolition waste outputs in a given area. The derived models are presented and explained. This methodology is independent from the specific region with which it is exemplified (the Lisbon Metropolitan Area) and can therefore be applied to any region of the world, from the country to the county level. Generation of demolition waste data at the county level is the basis of the design of a systemic model for CDW management in a region. Future developments proposed include a mixed-integer linear programming formulation of such a recycling network. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.

    PubMed

    Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon

    2017-05-01

    Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
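A local linear estimator of the kind named here can be sketched as a kernel-weighted straight-line fit at each point. The degradation signal below is simulated for illustration, not bearing data from the paper:

```python
import numpy as np

def local_linear_smooth(x, y, bandwidth):
    """Local linear estimator: at each point x0, fit a straight line by
    Gaussian-kernel-weighted least squares and keep the fitted value at x0."""
    smoothed = np.empty_like(y, dtype=float)
    for i, x0 in enumerate(x):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
        X = np.column_stack([np.ones_like(x), x - x0])
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        smoothed[i] = beta[0]                  # intercept = fitted value at x0
    return smoothed

# Hypothetical monotone degradation signal (e.g. vibration RMS) with noise
t = np.linspace(0.0, 10.0, 200)
signal = 0.02 * np.exp(0.3 * t)
noisy = signal + np.random.default_rng(1).normal(0.0, 0.02, t.size)
smooth = local_linear_smooth(t, noisy, bandwidth=0.5)
```

The smoothed history is what a model (here, the paper's neural network) would be trained on before extrapolating to a failure threshold.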

  10. Estimating linear temporal trends from aggregated environmental monitoring data

    USGS Publications Warehouse

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an autoregressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program, specifically for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that the simple linear regression performed best of all the models considered because it was best able to recover parameters and converged consistently. Conversely, the simple linear regression did the worst job of estimating population size in a given year. The state-space models did not estimate trends well, but estimated population sizes best when they converged. Overall, a simple linear regression performed better than the more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
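The simplest of the compared models, a linear trend fitted by ordinary least squares, can be sketched on a simulated series. The years, trend, and noise level are invented; this is not the Upper Mississippi data:

```python
import numpy as np

# Minimal sketch: OLS linear trend on an aggregated (site-averaged) series.
rng = np.random.default_rng(42)
years = np.arange(2000, 2015)
true_trend = 0.05                              # per-year change (log scale)
log_abundance = (2.0 + true_trend * (years - years[0])
                 + rng.normal(0.0, 0.1, years.size))

# polyfit returns coefficients highest degree first: [slope, intercept]
slope, intercept = np.polyfit(years - years[0], log_abundance, 1)
```

A state-space alternative would add an explicit observation-error term on top of this process model, which is exactly the extra machinery the study found unnecessary for trend estimation from aggregated data.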

  11. Z-Scan Analysis: a New Method to Determine the Oxidative State of Low-Density Lipoprotein and Its Association with Multiple Cardiometabolic Biomarkers

    NASA Astrophysics Data System (ADS)

    de Freitas, Maria Camila Pruper; Figueiredo Neto, Antonio Martins; Giampaoli, Viviane; da Conceição Quintaneiro Aubin, Elisete; de Araújo Lima Barbosa, Milena Maria; Damasceno, Nágila Raquel Teixeira

    2016-04-01

    The great atherogenic potential of oxidized low-density lipoprotein has been widely described in the literature. The objective of this study was to investigate whether the oxidative state of low-density lipoprotein in human plasma, measured by the Z-scan technique, is associated with different cardiometabolic biomarkers. Total cholesterol, high-density lipoprotein cholesterol, triacylglycerols, apolipoprotein A-I and apolipoprotein B, paraoxonase-1, and glucose were analyzed using standard commercial kits, and low-density lipoprotein cholesterol was estimated using the Friedewald equation. A sandwich enzyme-linked immunosorbent assay was used to detect electronegative low-density lipoprotein. Low-density lipoprotein and high-density lipoprotein sizes were determined by the Lipoprint® system. The Z-scan technique was used to measure the non-linear optical response of the low-density lipoprotein solution. Principal component analysis was used to reduce the dimensionality of the sample data, and correlations were used to test the association between the θ parameter, measured with the Z-scan technique, and the principal components. A total of 63 individuals of both sexes, with mean age 52 (±11) years, who were overweight and had high total cholesterol and low high-density lipoprotein cholesterol, were enrolled in this study. The θ parameter correlated positively with a more anti-atherogenic pattern of cardiometabolic biomarkers and negatively with an atherogenic pattern; in particular, for parameters related to an atherogenic low-density lipoprotein profile, the θ parameter was negatively correlated with a more atherogenic pattern. By using Z-scan measurements, we were able to find an association between the oxidized low-density lipoprotein state and multiple cardiometabolic biomarkers in samples from individuals with different cardiovascular risk factors.

  12. Calibration and LOD/LOQ estimation of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs expressed in E. coli using a four-parameter logistic model.

    PubMed

    Lee, K R; Dipaolo, B; Ji, X

    2000-06-01

    Calibration is the process of fitting a model based on reference data points (x, y), then using the model to estimate an unknown x based on a new measured response, y. In a DNA assay, x is the concentration and y is the measured signal volume. A four-parameter logistic model is frequently used for calibration of immunoassays in which the response is optical density for enzyme-linked immunosorbent assay (ELISA) or adjusted radioactivity count for radioimmunoassay (RIA). Here, it is shown that the same model, or a linearized version of the curve, is equally useful for the calibration of a chemiluminescent hybridization assay for residual DNA in recombinant protein drugs and for calculation of performance measures of the assay.
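The 4PL calibration curve and its inverse (used to back-calculate a concentration from a measured response) can be written down directly. The parameter values here are assumed for illustration, not fitted to the assay:

```python
# Four-parameter logistic (4PL) calibration sketch. Conventionally:
#   a = response at zero concentration, d = response at infinite concentration,
#   c = inflection point (EC50), b = slope factor. Values below are invented.
def fpl(x, a, b, c, d):
    """4PL response for concentration x."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def fpl_inverse(y, a, b, c, d):
    """Back-calculate concentration x from a measured response y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

a, b, c, d = 0.05, 1.2, 10.0, 2.0      # assumed parameter values
x_true = 4.0
y_meas = fpl(x_true, a, b, c, d)
x_est = fpl_inverse(y_meas, a, b, c, d)
```

In practice a, b, c, d are first estimated from the reference standards by non-linear least squares; the inverse is then applied to each new response, and LOD/LOQ follow from the precision of those back-calculated values.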

  13. Correction of scatter in megavoltage cone-beam CT

    NASA Astrophysics Data System (ADS)

    Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.

    2001-03-01

    The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.
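The kernel-superposition idea can be caricatured in one dimension. The profile, kernel shape, and scatter fraction below are invented; they are not the Monte Carlo kernels of the paper:

```python
import numpy as np

# Toy 1-D sketch of scatter correction by kernel superposition: scatter is
# modeled as the primary fluence convolved with a scatter kernel, and the
# primary is recovered from the measurement by fixed-point iteration.
primary = np.zeros(64)
primary[20:44] = 1.0                          # idealized primary fluence profile

x = np.arange(-10, 11)
kernel = 0.05 * np.exp(-np.abs(x) / 4.0)      # assumed scatter point-spread kernel

scatter = np.convolve(primary, kernel, mode="same")
measured = primary + scatter                  # what the detector would record

# Fixed point of  estimate = measured - K(estimate)  is the true primary
estimate = measured.copy()
for _ in range(15):
    estimate = np.copy(measured) - np.convolve(estimate, kernel, mode="same")
```

The iteration converges because the kernel integrates to well below one (the scatter-to-primary ratio is modest), mirroring why the paper's superposition correction can be applied stably per projection.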

  14. Population ecology of the mallard: II. Breeding habitat conditions, size of the breeding populations, and production indices

    USGS Publications Warehouse

    Pospahala, Richard S.; Anderson, David R.; Henny, Charles J.

    1974-01-01

    This report, the second in a series on a comprehensive analysis of mallard population data, provides information on mallard breeding habitat, the size and distribution of breeding populations, and indices to production. The information in this report is primarily the result of large-scale aerial surveys conducted during May and July, 1955-73. The history of the conflict in resource utilization between agriculturalists and wildlife conservation interests in the primary waterfowl breeding grounds is reviewed. The numbers of ponds present during the breeding season and the midsummer period, and the effects of precipitation and temperature on the number of ponds present, are analyzed in detail. No significant cycles in precipitation were detected, and it appears that precipitation is primarily influenced by substantial seasonal and random components. Annual estimates (1955-73) of the number of mallards in surveyed and unsurveyed breeding areas provided estimates of the size and geographic distribution of breeding mallards in North America. The estimated size of the mallard breeding population in North America has ranged from a high of 14.4 million in 1958 to a low of 7.1 million in 1965. Generally, the mallard breeding population declined after the 1958 peak until 1962, and remained below 10 million birds until 1970. The decline and subsequent low level of the mallard population between 1959 and 1969 generally coincided with a period of poor habitat conditions on the major breeding grounds. The density of mallards was highest in the Prairie-Parkland Area, with an average of nearly 19.2 birds per square mile. The proportion of the continental mallard breeding population in the Prairie-Parkland Area ranged from a low of 30% in 1962 to a high of 60% in 1956. The geographic distribution of breeding mallards throughout North America was significantly related to the number of May ponds in the Prairie-Parkland Area.
Estimates of midsummer habitat conditions and indices to production from the July Production Survey were studied in detail. Several indices relating to production showed marked declines from west to east in the Prairie-Parkland Area: (1) density of breeding mallards (per square mile and per May pond), (2) brood density (per square mile and per July pond), (3) average brood size (all species combined), and (4) brood survival from class II to class III. An index to late nesting and renesting efforts was highest during years when midsummer water conditions were good. Production rates of many ducks breeding in North America appear to be regulated by both density-dependent and density-independent factors. Spacing of birds in the Prairie-Parkland Area appeared to be a key factor in the density-dependent regulation of the population. The spacing mechanism, in conjunction with habitat conditions, influenced some birds to overfly the primary breeding grounds into less favorable habitats to the north and northwest, where the production rate may be suppressed. The production rate of waterfowl in the Prairie-Parkland Area seems to be independent of density (after emigration has taken place) because the production index appears to be a linear function of the number of breeding birds in the area. Similarly, the production rate of waterfowl in northern Saskatchewan and northern Manitoba appeared to be independent of density; production indices in these northern areas appear to be a linear function of the size of the breeding population. Thus, the density and distribution of breeding ducks are probably regulated through a spacing mechanism that is at least partially dependent on measurable environmental factors. The result is a density-dependent process operating to ultimately affect the production and production rate of breeding ducks on a continent-wide basis. 
Continental production, and therefore the size of the fall population, is probably partially regulated by the number of birds that are distributed north and northwest into environments less favorable for successful reproduction. Thus, spacing of the birds in the Prairie-Parkland Area and the movement of a fraction of the birds out of the prime breeding areas may be key factors in the density-dependent regulation of the total mallard population.

  15. Linear models: permutation methods

    USGS Publications Warehouse

    Cade, B.S.; Everitt, B.S.; Howell, D.C.

    2005-01-01

    Permutation tests (see Permutation Based Inference) for the linear model have applications in behavioral studies when traditional parametric assumptions about the error term in a linear model are not tenable. Improved validity of Type I error rates can be achieved with properly constructed permutation tests. Perhaps more importantly, increased statistical power, improved robustness to the effects of outliers, and detection of alternative distributional differences can be achieved by coupling permutation inference with alternative linear model estimators. For example, it is well known that estimates of the mean in a linear model are extremely sensitive to even a single outlying value of the dependent variable, compared to estimates of the median [7, 19]. Traditionally, linear modeling focused on estimating changes in the center of distributions (means or medians). However, quantile regression allows distributional changes to be estimated in all or any selected part of a distribution of responses, providing a more complete statistical picture that has relevance to many biological questions [6]...
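A permutation test for a regression slope of the kind discussed can be sketched as follows. The data are simulated, and the slope, sample size, and permutation count are arbitrary:

```python
import numpy as np

# Permute the response to break any x-y association, then compare the
# observed slope against the resulting permutation distribution of slopes.
rng = np.random.default_rng(7)
x = rng.normal(size=40)
y = 0.8 * x + rng.normal(size=40)              # true slope 0.8 (invented)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

observed = slope(x, y)
perm_slopes = np.array([slope(x, rng.permutation(y)) for _ in range(2000)])
p_value = np.mean(np.abs(perm_slopes) >= abs(observed))
```

The same scheme extends to other estimators (e.g. quantile regression slopes), which is where the robustness and power gains mentioned above come from.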

  16. Regional abundance of on-premise outlets and drinking patterns among Swiss young men: district level analyses and geographic adjustments.

    PubMed

    Astudillo, Mariana; Kuendig, Hervé; Centeno-Gil, Adriana; Wicki, Matthias; Gmel, Gerhard

    2014-09-01

    This study investigated the associations of alcohol outlet density with specific alcohol outcomes (consumption and consequences) among young men in Switzerland and assessed possible geographically related variations. Alcohol consumption and drinking consequences were measured in a 2010-2011 study assessing substance use risk factors (Cohort Study on Substance Use Risk Factors) among 5519 young Swiss men. Outlet density was based on the number of on- and off-premise outlets in the district of residence. Linear regression models were run separately for drinking level, heavy episodic drinking (HED) and drinking consequences. Geographically weighted regression models were estimated when variations were recorded at the district level. No consistent association was found between outlet density and drinking consequences. Positive associations of drinking level and HED with on-premise outlet density were found. Geographically weighted regressions were run for drinking level and HED. The predicted values for HED were higher in the southwestern (French-speaking) part of Switzerland. Among Swiss young men, the density of outlets and, in particular, the abundance of bars, clubs and other on-premise outlets was associated with drinking level and HED, even though drinking consequences were not significantly affected. These findings support the idea that outlet density needs to be considered when developing and implementing regional prevention initiatives. © 2014 Australasian Professional Society on Alcohol and other Drugs.

  17. Dynamic characterization of external and internal mass transport in heterotrophic biofilms from microsensors measurements.

    PubMed

    Guimerà, Xavier; Dorado, Antonio David; Bonsfills, Anna; Gabriel, Gemma; Gabriel, David; Gamisans, Xavier

    2016-10-01

    Knowledge of mass transport mechanisms in biofilm-based technologies such as biofilters is essential to improve bioreactor performance by preventing mass transport limitation. External and internal mass transport was characterized in heterotrophic biofilms grown on a flat plate bioreactor. Mass transport resistance through the liquid-biofilm interphase and diffusion within biofilms were quantified by in situ measurements using microsensors with a high spatial resolution (<50 μm). Experimental conditions were selected using a mathematical procedure based on the Fisher Information Matrix to increase the reliability of experimental data and minimize the confidence intervals of estimated mass transport coefficients. The sensitivity of external and internal mass transport resistances to flow conditions within the range of typical fluid velocities over biofilms (Reynolds numbers between 0.5 and 7) was assessed. Estimated external mass transfer coefficients at different liquid phase flow velocities showed discrepancies with studies assuming laminar conditions in the diffusive boundary layer near the liquid-biofilm interphase. The correlation of effective diffusivity with flow velocity showed that the heterogeneous structure of biofilms defines the transport mechanisms inside them. Internal mass transport was driven by diffusion through cell clusters and aggregates at Re below 2.8, and by advection within pores, voids and water channels at Re above 5.6; between these velocities, mass transport occurred by a combination of advection and diffusion. Effective diffusivities estimated at different biofilm densities showed a linear increase of mass transport resistance, due to decreasing porosity, up to biofilm densities of 50 g VSS·L⁻¹. Mass transport was strongly limited at higher biofilm densities. 
Internal mass transport results were used to propose an empirical correlation to assess the effective diffusivity within biofilms considering the influence of hydrodynamics and biofilm density. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A quantum relaxation-time approximation for finite fermion systems

    NASA Astrophysics Data System (ADS)

    Reinhard, P.-G.; Suraud, E.

    2015-03-01

    We propose a relaxation time approximation for the description of the dynamics of strongly excited fermion systems. Our approach is based on time-dependent density functional theory at the level of the local density approximation. This mean-field picture is augmented by collisional correlations handled in a relaxation time approximation inspired by the corresponding semi-classical picture. The method involves an estimate of microscopic relaxation rates/times, which is presently taken from well-established semi-classical experience. The relaxation time approximation implies evaluation of the instantaneous equilibrium state towards which the dynamical state is progressively driven at the pace of the microscopic relaxation time. As a test case, we consider Na clusters of various sizes excited either by a swift ion projectile or by a short and intense laser pulse, driven in various dynamical regimes ranging from linear to strongly non-linear reactions. We observe a strong effect of dissipation on sensitive observables such as net ionization and angular distributions of emitted electrons. The effect is especially large for moderate excitations, where typical relaxation/dissipation time scales compete efficiently with ionization for dissipating the available excitation energy. Technical details on the actual procedure to implement a working recipe of such a quantum relaxation approximation are given in appendices for completeness.

  19. Regional co-location pattern scoping on a street network considering distance decay effects of spatial interaction

    PubMed Central

    Yu, Wenhao

    2017-01-01

    Regional co-location scoping intends to identify local regions where spatial features of interest are frequently located together. Most previous research in this domain has been conducted at a global scale and assumes that spatial objects are embedded in a 2-D space, whereas movement in urban space is actually constrained by the street network. In this paper we refine the scope of co-location patterns to 1-D paths consisting of nodes and segments. Furthermore, since the relations between spatial events are usually inversely proportional to their separation distance, the proposed method introduces “Distance Decay Effects” to improve the result. Specifically, our approach first subdivides the street edges into continuous small linear segments. Then a value representing the local distribution intensity of events is estimated for each linear segment using the distance-decay function. Each kind of geographic feature leads to a tessellated network with a density attribute, and the multiple networks generated for the pattern of interest are finally combined into a composite network by calculating co-location prevalence measure values, which are based on the density variation between different features. Our experiments verify that the proposed approach is effective for urban analysis. PMID:28763496
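The per-segment intensity estimation step can be illustrated in one dimension. The street length, segment size, event positions, and bandwidth are all invented for this sketch:

```python
import numpy as np

# Simplified 1-D illustration: subdivide a street into small linear segments
# and estimate a local intensity at each segment midpoint by summing
# distance-decayed contributions from nearby events.
street_length = 100.0
seg_len = 1.0
midpoints = np.arange(seg_len / 2, street_length, seg_len)
events = np.array([10.0, 12.0, 13.0, 55.0, 56.0, 90.0])   # event positions (made up)

def decay(d, bandwidth=5.0):
    """Exponential distance-decay kernel: nearer events contribute more."""
    return np.exp(-d / bandwidth)

intensity = np.array([decay(np.abs(events - m)).sum() for m in midpoints])
# Segments near event clusters receive higher intensity values.
```

On a real network, the distances would be shortest-path distances along edges rather than absolute differences, and one such intensity surface would be built per feature type before combining them into the composite network.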

  20. Laboratory-Measured and Property-Transfer Modeled Saturated Hydraulic Conductivity of Snake River Plain Aquifer Sediments at the Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Perkins, Kim S.

    2008-01-01

    Sediments are believed to comprise as much as 50 percent of the Snake River Plain aquifer thickness in some locations within the Idaho National Laboratory. However, the hydraulic properties of these deep sediments have not been well characterized and they are not represented explicitly in the current conceptual model of subregional scale ground-water flow. The purpose of this study is to evaluate the nature of the sedimentary material within the aquifer and to test the applicability of a site-specific property-transfer model developed for the sedimentary interbeds of the unsaturated zone. Saturated hydraulic conductivity (Ksat) was measured for 10 core samples from sedimentary interbeds within the Snake River Plain aquifer and also estimated using the property-transfer model. The property-transfer model for predicting Ksat was previously developed using a multiple linear-regression technique with bulk physical-property measurements (bulk density [ρbulk], the median particle diameter, and the uniformity coefficient) as the explanatory variables. The model systematically underestimates Ksat, typically by about a factor of 10, which likely is due to higher bulk-density values for the aquifer samples compared to the samples from the unsaturated zone upon which the model was developed. Linear relations between the logarithm of Ksat and ρbulk also were explored for comparison.

  1. THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)

    EPA Science Inventory

    This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...

  2. International comparisons of the associations between objective measures of the built environment and transport-related walking and cycling: IPEN Adult Study.

    PubMed

    Christiansen, Lars B; Cerin, Ester; Badland, Hannah; Kerr, Jacqueline; Davey, Rachel; Troelsen, Jens; van Dyck, Delfien; Mitáš, Josef; Schofield, Grant; Sugiyama, Takemi; Salvo, Deborah; Sarmiento, Olga L; Reis, Rodrigo; Adams, Marc; Frank, Larry; Sallis, James F

    2016-12-01

    Mounting evidence documents the importance of urban form for active travel, but international studies could strengthen the evidence. The aim of the study was to document the strength, shape, and generalizability of relations of objectively measured built environment variables with transport-related walking and cycling. This cross-sectional study maximized variation of environments and demographics by including multiple countries and by selecting adult participants living in neighborhoods based on higher and lower classifications of objectively measured walkability and socioeconomic status. Analyses were conducted on 12,181 adults aged 18-66 years, drawn from 14 cities across 10 countries worldwide. Frequency of transport-related walking and cycling over the last seven days was assessed by questionnaire, and four objectively measured built environment variables were calculated. Associations of built environment variables with transport-related walking and cycling variables were estimated using generalized additive mixed models and were tested for curvilinearity and study site moderation. We found positive associations of walking for transport with all the environmental attributes, but the relationship was linear only for land use mix, not for residential density, intersection density, or the number of parks. Our findings suggest that there may be optimum values of these attributes, beyond which higher densities or numbers of parks could have a minor or even negative impact. Cycling for transport was associated linearly with residential density, intersection density (only for any cycling), and land use mix, but not with the number of parks. Across 14 diverse cities and countries, living in more densely populated areas, having a well-connected street network, more diverse land uses, and more parks were positively associated with transport-related walking and/or cycling. Except for land use mix, all built environment variables had curvilinear relationships with walking, with a plateau at higher levels of the scales.

  3. International comparisons of the associations between objective measures of the built environment and transport-related walking and cycling: IPEN Adult Study

    PubMed Central

    Christiansen, Lars B.; Cerin, Ester; Badland, Hannah; Kerr, Jacqueline; Davey, Rachel; Troelsen, Jens; van Dyck, Delfien; Mitáš, Josef; Schofield, Grant; Sugiyama, Takemi; Salvo, Deborah; Sarmiento, Olga L.; Reis, Rodrigo; Adams, Marc; Frank, Larry; Sallis, James F.

    2016-01-01

    Introduction Mounting evidence documents the importance of urban form for active travel, but international studies could strengthen the evidence. The aim of the study was to document the strength, shape, and generalizability of relations of objectively measured built environment variables with transport-related walking and cycling. Methods This cross-sectional study maximized variation of environments and demographics by including multiple countries and by selecting adult participants living in neighborhoods based on higher and lower classifications of objectively measured walkability and socioeconomic status. Analyses were conducted on 12,181 adults aged 18–66 years, drawn from 14 cities across 10 countries worldwide. Frequency of transport-related walking and cycling over the last seven days was assessed by questionnaire, and four objectively measured built environment variables were calculated. Associations of built environment variables with transport-related walking and cycling variables were estimated using generalized additive mixed models and were tested for curvilinearity and study site moderation. Results We found positive associations of walking for transport with all the environmental attributes, but the relationship was linear only for land use mix, not for residential density, intersection density, or the number of parks. Our findings suggest that there may be optimum values of these attributes, beyond which higher densities or numbers of parks could have a minor or even negative impact. Cycling for transport was associated linearly with residential density, intersection density (only for any cycling), and land use mix, but not with the number of parks. Conclusion Across 14 diverse cities and countries, living in more densely populated areas, having a well-connected street network, more diverse land uses, and more parks were positively associated with transport-related walking and/or cycling. Except for land use mix, all built environment variables had curvilinear relationships with walking, with a plateau at higher levels of the scales. PMID:28111613

  4. Ant-inspired density estimation via random walks.

    PubMed

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: the higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in a few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
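The encounter-rate mechanism can be simulated directly on a toy torus grid. The grid size, agent count, and step count are arbitrary; this is an illustration of the idea, not the paper's analysis:

```python
import numpy as np

# Anonymous agents random-walk on a torus grid; a focal agent estimates
# density from how often it shares a cell with another agent.
rng = np.random.default_rng(3)
side, n_agents, n_steps = 20, 40, 2000
density = n_agents / side**2                     # true density: 0.1 agents/cell

pos = rng.integers(0, side, size=(n_agents, 2))  # uniform initial placement
moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
encounters = 0
for _ in range(n_steps):
    pos = (pos + moves[rng.integers(0, 4, n_agents)]) % side
    # count how many other agents share agent 0's cell this step
    encounters += int(np.sum(np.all(pos[1:] == pos[0], axis=1)))

estimate = encounters / n_steps                  # focal agent's encounter rate
```

The encounter rate tracks (n_agents - 1) / side**2, the per-step co-location probability with each other agent; the paper's contribution is bounding how the temporal dependence of repeated collisions inflates the estimation error relative to independent sampling.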

  5. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998-3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature of the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related to the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  6. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  7. Charge-regularized swelling kinetics of polyelectrolyte gels: Elasticity and diffusion

    NASA Astrophysics Data System (ADS)

    Sen, Swati; Kundagrami, Arindam

    2017-11-01

    We apply a recently developed method [S. Sen and A. Kundagrami, J. Chem. Phys. 143, 224904 (2015)], using a phenomenological expression of the osmotic stress as a function of polymer and charge densities, hydrophobicity, and network elasticity, to the swelling of spherical polyelectrolyte (PE) gels with fixed and variable charges in a salt-free solvent. This expression of stress is used in the equation of motion of the swelling kinetics of spherical PE gels to numerically calculate the spatial profiles of the polymer and free-ion densities at different time steps and the time evolution of the size of the gel. We compare the profiles of the same variables obtained from the classical linear theory of elasticity and quantitatively estimate the bulk modulus of the PE gel. Further, we obtain an analytical expression for the elastic modulus from the linearized expression of stress (in the small-deformation limit). We find that the estimated bulk modulus of the PE gel decreases with increasing effective charge for a fixed degree of deformation during swelling. Finally, we match the gel-front locations with experimental data from measurements of charged reversible addition-fragmentation chain-transfer gels to show an increase in gel size with charge, and likewise for PNIPAM (uncharged) and imidazolium-based (charged) minigels, which specifically confirms the decrease of the gel modulus with increasing charge. The agreement between experimental and theoretical results confirms generally diffusive behaviour for the swelling of PE gels, with a bulk modulus that decreases with increasing degree of ionization (charge). The new formalism also captures large deformations with a significant variation of the charge content of the gel. It is found that PE gels with large deformation but the same initial size swell faster with higher charge.

  8. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of combining robust, highly accurate machine learning algorithms with linear scalability that renders the application of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.
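    As an illustration of the first algorithm on that list, a one-dimensional kernel density estimate over a catalog feature can be sketched in a few lines. This is a minimal sketch: SciPy's `gaussian_kde` stands in for the Skytree implementation, and the bimodal "magnitude" sample is synthetic, not 2MASS data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Toy stand-in for one catalog feature (e.g. a magnitude column); not 2MASS data.
mags = np.concatenate([rng.normal(12.0, 0.5, 500), rng.normal(15.0, 0.8, 500)])

kde = gaussian_kde(mags)               # Gaussian kernel, rule-of-thumb bandwidth
grid = np.linspace(9.0, 19.0, 500)
density = kde(grid)

# A valid density estimate integrates to ~1 over a wide enough grid.
area = np.sum(density) * (grid[1] - grid[0])
```

    The same call scales poorly to half a billion rows; tree-based KDE of the kind the abstract describes exists precisely to make this operation tractable at that size.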

  9. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Astronomy Data Centre, Canadian

    2014-01-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms: kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors, and the local outlier factor. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex datasets that wishes to extract the full scientific value from its data.

  10. Investigation of the dynamics of ephemeral gully erosion on arable land of the forest-steppe and steppe zone of the East of the Russian Plain from remote sensing data

    NASA Astrophysics Data System (ADS)

    Platoncheva, E. V.

    2018-01-01

    Spatio-temporal estimation of the erosion of arable soils remains an urgent task, in spite of the numerous methods for such assessments. The development of information technologies and the emergence of high and ultra-high resolution imagery allow reliable identification of linear erosion forms and determination of their dynamics on arable land. The study focused on the dynamics of the most active erosion unit, the ephemeral gully. The dynamics were estimated on the basis of satellite images of different dates over the maximum possible period (from 1986 to 2016). The cartographic method was used as the main research method. A belt of ephemeral gully erosion was identified from multi-zone space survey materials using GIS technology for their processing. In the course of work with satellite imagery and subsequent ground verification of the received data, the main signs for deciphering the ephemeral gully network were determined. A methodology for geoinformation mapping of the dynamics of the ephemeral gully erosion belt was developed, and a system of indicators quantitatively characterizing its development on arable slopes was proposed. The evaluation of the current ephemeral gully network based on the interpretation of space images includes such indicators of ephemeral gully erosion as the density of the ephemeral gully net, the density of the ephemeral gullies, and the area and linear dynamics of the ephemeral gully network. Preliminary results of the assessment showed an increase in all quantitative indicators of ephemeral gully erosion over the observed period.
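    The two density indicators named above reduce to simple ratios once the gully network has been digitized. A minimal sketch with purely illustrative numbers (none are from the study):

```python
# Hypothetical digitized ephemeral-gully lengths (km) for one arable catchment.
gully_lengths_km = [0.42, 0.18, 0.95, 0.33, 0.51]
catchment_area_km2 = 3.6

# Density of the ephemeral gully net: total length per unit area (km per km^2).
net_density = sum(gully_lengths_km) / catchment_area_km2
# Density of the ephemeral gullies: count per unit area (gullies per km^2).
count_density = len(gully_lengths_km) / catchment_area_km2
```

    Tracking these ratios over images from different dates gives the temporal dynamics the abstract describes.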

  11. Defining a Contemporary Ischemic Heart Disease Genetic Risk Profile Using Historical Data.

    PubMed

    Mosley, Jonathan D; van Driest, Sara L; Wells, Quinn S; Shaffer, Christian M; Edwards, Todd L; Bastarache, Lisa; McCarty, Catherine A; Thompson, Will; Chute, Christopher G; Jarvik, Gail P; Crosslin, David R; Larson, Eric B; Kullo, Iftikhar J; Pacheco, Jennifer A; Peissig, Peggy L; Brilliant, Murray H; Linneman, James G; Denny, Josh C; Roden, Dan M

    2016-12-01

    Continued reductions in morbidity and mortality attributable to ischemic heart disease (IHD) require an understanding of the changing epidemiology of this disease. We hypothesized that we could use genetic correlations, which quantify the shared genetic architectures of phenotype pairs, together with extant risk factors from a historical prospective study, to define the risk profile of a contemporary IHD phenotype. We used 37 phenotypes measured in the ARIC study (Atherosclerosis Risk in Communities; n=7716, European ancestry subjects) and clinical diagnoses from an electronic health record (EHR) data set (n=19 093). All subjects had genome-wide single-nucleotide polymorphism genotyping. We measured pairwise genetic correlations (rG) between the ARIC and EHR phenotypes using linear mixed models. The genetic correlation estimates between the ARIC risk factors and EHR IHD were modestly linearly correlated with hazard ratio estimates for incident IHD in ARIC (Pearson correlation [r]=0.62), indicating that the 2 IHD phenotypes had differing risk profiles. For comparison, this correlation was 0.80 when comparing EHR and ARIC type 2 diabetes mellitus phenotypes. The EHR IHD phenotype was most strongly correlated with ARIC metabolic phenotypes, including total:high-density lipoprotein cholesterol ratio (rG=-0.44, P=0.005), high-density lipoprotein (rG=-0.48, P=0.005), systolic blood pressure (rG=0.44, P=0.02), and triglycerides (rG=0.38, P=0.02). EHR phenotypes related to type 2 diabetes mellitus, atherosclerotic, and hypertensive diseases were also genetically correlated with these ARIC risk factors. The EHR IHD risk profile differed from ARIC's and indicates that treatment and prevention efforts in this population should target hypertensive and metabolic disease. © 2016 American Heart Association, Inc.
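    The comparison behind the r=0.62 figure is an ordinary Pearson correlation between the genetic correlation estimates and the hazard ratio estimates for the same risk factors. A toy sketch with illustrative values (not the study's data; log-transforming the hazard ratio is our assumption):

```python
import numpy as np

# Illustrative pairs only (not the study's data): genetic correlation (rG) of a
# risk factor with the EHR IHD phenotype vs. the ARIC hazard ratio for the same
# factor. Log-transforming the hazard ratio is our assumption.
rg = np.array([-0.44, -0.48, 0.44, 0.38, 0.10, -0.05])
hr = np.array([0.80, 0.75, 1.60, 1.45, 1.05, 0.95])

r = np.corrcoef(rg, np.log(hr))[0, 1]   # Pearson correlation, as in the paper
```

    A value near 1 would indicate that the contemporary and historical phenotypes share one risk profile; the paper's modest 0.62 is what signals the profiles have drifted apart.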

  12. Characterizing subcritical assemblies with time of flight fixed by energy estimation distributions

    NASA Astrophysics Data System (ADS)

    Monterial, Mateusz; Marleau, Peter; Pozzi, Sara

    2018-04-01

    We present the Time of Flight Fixed by Energy Estimation (TOFFEE) as a measure of the fission chain dynamics in subcritical assemblies. TOFFEE is the time between correlated gamma rays and neutrons, minus the estimated travel time of the incident neutron inferred from its proton recoil. The measured subcritical assembly was the BeRP ball, a 4.482 kg sphere of α-phase weapons-grade plutonium metal, in five configurations: bare, and with close-fitting shell reflectors of 0.5, 1, and 1.5 in. iron and 1 in. nickel. We extend the measurement with MCNPX-PoliMi simulations of shells up to 6 in. thick and two further reflector materials, aluminum and tungsten. We also simulated the BeRP ball at masses ranging from 1 to 8 kg. Two-region and single-region point-kinetics models were used to describe the positive side of the TOFFEE distribution from 0 to 100 ns. The single-region model of the bare cases gave positive linear correlations between estimated and expected neutron decay constants and leakage multiplications. The two-region model provided a way to estimate neutron multiplication for the reflected cases, which correlated positively with expected multiplication, but the nature of the correlation (sub- or superlinear) changed between material types. Finally, we found that the areal density of the reflector shells had a linear correlation with the integral of the two-region model fit. Therefore, we expect that with knowledge of the reflector composition, one could determine the shell thickness, or vice versa. Furthermore, up to a certain reflector amount and thickness, the two-region model provides a way of distinguishing bare from reflected plutonium assemblies.
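    A single-region point-kinetics model implies a roughly exponential die-away on the positive side of the TOFFEE distribution, so the neutron decay constant can be extracted with a nonlinear fit. A sketch on synthetic counts (SciPy `curve_fit`; the decay constant, amplitude, and noise level are all illustrative, not measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
# Synthetic stand-in for the positive (0-100 ns) side of a TOFFEE distribution.
true_lambda = 0.05                      # neutron decay constant, 1/ns (illustrative)
t = np.linspace(0.0, 100.0, 101)
counts = 1000.0 * np.exp(-true_lambda * t) + rng.normal(0.0, 5.0, t.size)

def die_away(t, a, lam):
    # Single-region point kinetics: exponential die-away of correlated counts.
    return a * np.exp(-lam * t)

(a_fit, lam_fit), _ = curve_fit(die_away, t, counts, p0=(800.0, 0.02))
```

    The fitted decay constant is the quantity the abstract correlates with expected values and leakage multiplication for the bare cases.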

  13. Linear relationship between water wetting behavior and microscopic interactions of super-hydrophilic surfaces.

    PubMed

    Liu, Jian; Wang, Chunlei; Guo, Pan; Shi, Guosheng; Fang, Haiping

    2013-12-21

    Using molecular dynamics simulations, we show a fine linear relationship between surface energies and microscopic Lennard-Jones parameters of super-hydrophilic surfaces. The linear slope of the super-hydrophilic surfaces is consistent with the linear slope of the super-hydrophobic, hydrophobic, and hydrophilic surfaces where stable water droplets can stand, indicating that there is a universal linear behavior of the surface energies with the water-surface van der Waals interaction that extends from the super-hydrophobic to super-hydrophilic surfaces. Moreover, we find that the linear relationship exists for various substrate types, and the linear slopes of these different types of substrates are dependent on the surface atom density, i.e., higher surface atom densities correspond to larger linear slopes. These results enrich our understanding of water behavior on solid surfaces, especially the water wetting behaviors on uncharged super-hydrophilic metal surfaces.

  14. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
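    The central idea, estimating the full error distribution from a training period rather than assuming Gaussian statistics, can be sketched with a nonparametric kernel density estimate. SciPy's `gaussian_kde` is our stand-in for the authors' method, and the skewed residuals are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Training-period residuals between model simulations and observations;
# deliberately skewed, so a Gaussian assumption would miss the shape.
residuals = rng.gamma(shape=2.0, scale=1.5, size=2000) - 3.0

kde = gaussian_kde(residuals)          # full distributional form, no Gaussian assumption
grid = np.linspace(-5.0, 15.0, 400)
pdf = kde(grid)

dx = grid[1] - grid[0]
area = np.sum(pdf) * dx                # should be ~1 over a wide enough grid
mean_kde = np.sum(grid * pdf) * dx     # tracks the skewed sample mean
```

    A moment-based (first and second moment only) summary of the same residuals would discard exactly the asymmetry that this density retains.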

  15. Construction of multiple linear regression models using blood biomarkers for selecting against abdominal fat traits in broilers.

    PubMed

    Dong, J Q; Zhang, X Y; Wang, S Z; Jiang, X F; Zhang, K; Ma, G W; Wu, M Q; Li, H; Zhang, H

    2018-01-01

    Plasma very low-density lipoprotein (VLDL) can be used to select for low body fat or abdominal fat (AF) in broilers, but its correlation with AF is limited. We investigated whether any other biochemical indicator can be used in combination with VLDL for a better selective effect. Nineteen plasma biochemical indicators were measured in male chickens from the Northeast Agricultural University broiler lines divergently selected for AF content (NEAUHLF) in the fed state at 46 and 48 d of age. The average concentration of each parameter over the 2 d was used for statistical analysis. Levels of these 19 plasma biochemical parameters were compared between the lean and fat lines. The phenotypic correlations between these plasma biochemical indicators and AF traits were analyzed. Then, multiple linear regression models were constructed to select the best model for selecting against AF content, and the heritabilities of the plasma indicators contained in the best models were estimated. The results showed that 11 plasma biochemical indicators (triglycerides, total bile acid, total protein, globulin, albumin/globulin, aspartate transaminase, alanine transaminase, gamma-glutamyl transpeptidase, uric acid, creatinine, and VLDL) differed significantly between the lean and fat lines (P < 0.01), and correlated significantly with AF traits (P < 0.05). The best multiple linear regression models, based on albumin/globulin, VLDL, triglycerides, globulin, total bile acid, and uric acid, had a higher R2 (0.73) than the model based only on VLDL (0.21). The plasma parameters included in the best models had moderate heritability estimates (0.21 ≤ h2 ≤ 0.43). These results indicate that these multiple linear regression models can be used to select for lean broiler chickens. © 2017 Poultry Science Association Inc.
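    The gain from combining biomarkers can be illustrated by comparing R² for a single-predictor fit against a multiple linear regression. A sketch on simulated data (variable names follow the paper, but the values and effect sizes are invented):

```python
import numpy as np

def r_squared(X, y):
    # R^2 of an ordinary least-squares fit with an intercept column.
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

rng = np.random.default_rng(3)
n = 200
# Hypothetical standardized biomarkers; names follow the paper, values do not.
vldl = rng.normal(size=n)
trig = rng.normal(size=n)
alb_glob = rng.normal(size=n)
# Simulated abdominal-fat trait driven by several biomarkers at once.
af = 0.4 * vldl + 0.5 * trig - 0.5 * alb_glob + rng.normal(0.0, 0.5, n)

r2_vldl_only = r_squared(vldl.reshape(-1, 1), af)
r2_multi = r_squared(np.column_stack([vldl, trig, alb_glob]), af)
```

    When the trait truly depends on several indicators, the multi-predictor R² exceeds the single-predictor one, which is the pattern the paper reports (0.73 vs. 0.21).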

  16. Modelling population distribution using remote sensing imagery and location-based data

    NASA Astrophysics Data System (ADS)

    Song, J.; Prishchepov, A. V.

    2017-12-01

    Detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution, and city emergency response, and even for estimating pressure on the environment and human exposure and risks to health. However, most studies have relied on census data, as detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional zone level. Firstly, urban functional zones within a city were mapped from high-resolution remote sensing images and POIs. The workflow of functional zone extraction includes five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy by validation points. The result is shown in Fig. 1. Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between light digital number (DN) and population density at sampling points. The two methods were employed to predict the population distribution over the research area. The R² of the GWR model was on the order of 0.7, and the model typically showed significant variation over the region compared with the traditional OLS model. The result is shown in Fig. 2. Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with the light value. The result is shown in Fig. 3. Results showed: (1) population density is not linearly correlated with light brightness under a global model; (2) VIIRS night-time light data can be used to estimate population density by integrating functional zones at the city level; (3) GWR is a robust model for mapping population distribution: the adjusted R² of the corresponding GWR models was higher than that of the optimal OLS models, confirming better prediction accuracy. This method thus provides detailed population density information for microscale citizen studies.
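    GWR fits a separate weighted least-squares regression at each location, with weights decaying with distance from that location. A minimal sketch under our own assumptions (Gaussian distance kernel, synthetic coordinates, light values, and population figures):

```python
import numpy as np

def gwr_slopes(coords, x, y, bandwidth):
    """Minimal GWR sketch: one weighted least-squares fit per location,
    with a Gaussian distance kernel supplying the weights."""
    A = np.column_stack([np.ones_like(x), x])
    betas = []
    for c in coords:
        d2 = np.sum((coords - c) ** 2, axis=1)
        w = np.sqrt(np.exp(-d2 / (2.0 * bandwidth ** 2)))
        beta, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        betas.append(beta)
    return np.array(betas)

rng = np.random.default_rng(4)
n = 150
coords = rng.uniform(0.0, 10.0, size=(n, 2))     # synthetic map coordinates
light = rng.uniform(0.0, 1.0, n)                 # rescaled night-light DN
true_slope = 1.0 + 0.3 * coords[:, 0]            # relationship drifts eastward
pop = true_slope * light + rng.normal(0.0, 0.05, n)

betas = gwr_slopes(coords, light, pop, bandwidth=1.5)
# Local slopes should rise from west to east; one global OLS slope cannot show this.
west = betas[coords[:, 0] < 3.0, 1].mean()
east = betas[coords[:, 0] > 7.0, 1].mean()
```

    The spatially varying slope is exactly what lets GWR outperform a single global OLS fit when the light-population relationship is nonstationary.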

  17. TH-CD-202-06: A Method for Characterizing and Validating Dynamic Lung Density Change During Quiet Respiration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, T; Ruan, D; Heinrich, M

    2016-06-15

    Purpose: To obtain a functional relationship that calibrates the lung tissue density change under free-breathing conditions by correlating Jacobian values with Hounsfield units. Methods: Free-breathing lung computed tomography images were acquired using a fast helical CT protocol, with 25 scans acquired per patient. Using a state-of-the-art deformable registration algorithm, a set of deformation vector fields (DVF) was generated to provide spatial mapping from the reference image geometry to the other free-breathing scans. These DVFs were used to generate Jacobian maps, which estimate voxelwise volume change. Subsequently, the set of 25 corresponding Jacobian values and voxel intensities in Hounsfield units (HU) was collected, and linear regression was performed, based on the mass conservation relationship, to correlate the volume change to the density change. Based on the resulting fitting coefficients, the tissues were classified into parenchymal (Type I), vascular (Type II), and soft tissue (Type III) types. These coefficients modeled the voxelwise density variation during quiet breathing. The accuracy of the proposed method was assessed using the mean absolute difference in HU between the CT scan intensities and the model-predicted values. In addition, validation experiments employing a leave-five-out method were performed to evaluate the model accuracy. Results: The computed mean model errors were 23.30±9.54 HU, 29.31±10.67 HU, and 35.56±20.56 HU for tissue types I, II, and III, respectively. The cross-validation experiments averaged over 100 trials had mean errors of 30.02±1.67 HU over the entire lung. These mean values were comparable with the estimated CT image background noise. Conclusion: The reported validation experiment statistics confirmed the lung density modeling during free breathing. The proposed technique is general and could be applied to a wide range of problem scenarios where accurate dynamic lung density information is needed. This work was supported in part by NIH R01 CA0096679.
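    Mass conservation suggests that voxel intensity in HU is approximately linear in the inverse Jacobian, so the calibration amounts to a voxelwise linear regression across the 25 scans. A sketch on synthetic values (the per-voxel "mass" term, Jacobian range, and noise level are all illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(5)
n_scans, n_vox = 25, 100
# Synthetic voxelwise Jacobians (volume-change factors) over 25 free-breathing scans.
J = rng.uniform(0.8, 1.2, size=(n_scans, n_vox))
# Mass conservation suggests (HU + 1000) * J is roughly constant per voxel,
# so HU is approximately linear in 1/J with the per-voxel "mass" as slope.
mass = rng.uniform(150.0, 900.0, n_vox)
hu = mass / J - 1000.0 + rng.normal(0.0, 5.0, size=(n_scans, n_vox))

# Closed-form least-squares slope of HU against 1/J for every voxel at once.
x = 1.0 / J
xc = x - x.mean(axis=0)
yc = hu - hu.mean(axis=0)
slope = (xc * yc).sum(axis=0) / (xc ** 2).sum(axis=0)

mean_abs_err = np.mean(np.abs(slope - mass))   # recovery error of the fit
```

    The fitted per-voxel coefficients play the role of the paper's calibration constants, which are then thresholded to separate the tissue types.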

  18. Linear-response time-dependent density-functional theory with pairing fields.

    PubMed

    Peng, Degao; van Aggelen, Helen; Yang, Yang; Yang, Weitao

    2014-05-14

    Recent development in particle-particle random phase approximation (pp-RPA) broadens the perspective on ground state correlation energies [H. van Aggelen, Y. Yang, and W. Yang, Phys. Rev. A 88, 030501 (2013); Y. Yang, H. van Aggelen, S. N. Steinmann, D. Peng, and W. Yang, J. Chem. Phys. 139, 174110 (2013); D. Peng, S. N. Steinmann, H. van Aggelen, and W. Yang, J. Chem. Phys. 139, 104112 (2013)] and N ± 2 excitation energies [Y. Yang, H. van Aggelen, and W. Yang, J. Chem. Phys. 139, 224105 (2013)]. So far, Hartree-Fock and approximate density-functional orbitals have been utilized to evaluate the pp-RPA equation. In this paper, to further explore the fundamentals and the potential use of pairing-matrix-dependent functionals, we present linear-response time-dependent density-functional theory with pairing fields, with both adiabatic and frequency-dependent kernels. This theory is related to the density-functional theory and time-dependent density-functional theory for superconductors, but is applied here to normal non-superconducting systems. Due to the lack of a proof of the one-to-one mapping between the pairing matrix and the pairing field for time-dependent systems, the linear-response theory is established based on the representability assumption of the pairing matrix. The linear-response theory justifies the use of approximate density functionals in the pp-RPA equation. This work sets the foundations for future density-functional development to enhance the description of ground state correlation energies and N ± 2 excitation energies.

  19. The effectiveness of tape playbacks in estimating Black Rail densities

    USGS Publications Warehouse

    Legare, M.; Eddleman, W.R.; Buckley, P.A.; Kelly, C.

    1999-01-01

    Tape playback is often the only efficient technique to survey for secretive birds. We measured the vocal responses and movements of radio-tagged black rails (Laterallus jamaicensis; 26 M, 17 F) to playback of vocalizations at 2 sites in Florida during the breeding seasons of 1992-95. We used coefficients from logistic regression equations to model probability of a response conditional on the bird's sex, nesting status, distance to the playback source, and time of survey. With a probability of 0.811, nonnesting male black rails were most likely to respond to playback, while nesting females were the least likely to respond (probability = 0.189). We used linear regression to determine daily, monthly, and annual variation in response from weekly playback surveys along a fixed route during the breeding seasons of 1993-95. Significant sources of variation in the regression model were month (F(3,48) = 3.89, P = 0.014), year (F(2,48) = 9.37, P < 0.001), temperature (F(1,48) = 5.44, P = 0.024), and month × year (F(5,48) = 2.69, P = 0.031). The model was highly significant (P < 0.001) and explained 54% of the variation in mean response per survey period (r² = 0.54). We combined response probability data from radio-tagged black rails with playback survey route data to provide a density estimate of 0.25 birds/ha for the St. Johns National Wildlife Refuge. The relation between the number of black rails heard during playback surveys and the actual number present was influenced by a number of variables. We recommend caution when making density estimates from tape playback surveys.
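    The reported probabilities (0.811 vs. 0.189) come from evaluating a fitted logistic regression at particular covariate values. A sketch with hypothetical coefficients (not the paper's; the predictors and their scaling are our own illustration):

```python
import math

def response_prob(intercept, coefs, x):
    # Logistic-regression probability: p = 1 / (1 + exp(-(b0 + b.x))).
    z = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients (NOT the paper's): predictors are
# [male, nesting, distance_to_playback_m / 100].
b0, b = -0.5, [1.6, -1.9, -0.3]

p_nonnesting_male = response_prob(b0, b, [1, 0, 0.5])   # high response probability
p_nesting_female = response_prob(b0, b, [0, 1, 0.5])    # low response probability
```

    Dividing the count of birds heard by such a response probability (and by the area surveyed) is the basic correction that turns playback counts into a density estimate like the 0.25 birds/ha figure.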

  20. Ionospheric irregularity characteristics from quasiperiodic structure in the radio wave scintillation

    NASA Astrophysics Data System (ADS)

    Chen, K. Y.; Su, S. Y.; Liu, C. H.; Basu, S.

    2005-06-01

    Quasiperiodic (QP) diffraction patterns in scintillation patches are known to correlate strongly with the edge structures of a plasma bubble (Franke et al., 1984). A new time-frequency analysis method, the Hilbert-Huang transform (HHT), has been applied to scintillation data taken at Ascension Island to understand the characteristics of the corresponding ionospheric irregularities. The HHT method enables us to extract the quasiperiodic diffraction signals embedded in the scintillation data and to obtain the characteristics of such diffraction signals. The cross correlation of the two sets of diffraction signals received by two stations at each end of Ascension Island indicates that the density irregularity pattern causing the diffraction should have an eastward drift velocity of ~130 m/s. The HHT analysis of the instantaneous frequency in the QP diffraction patterns also reveals shifts in their peak frequencies. For the QP diffraction pattern caused by the leading edge of the large density gradient at the east wall of a structured bubble, an ascending note in the peak frequency is observed, and for the trailing edge a descending note is observed. The linear change in the transient of the peak frequency in the QP diffraction pattern is consistent with the theory and the simulation results of Franke et al. An estimate of the slope of the transient frequency allows us to identify the locations of the plasma walls, and the east-west scale of the irregularity can be estimated; in our case we obtain about 24 km. Furthermore, the height of the density irregularities that cause the diffraction pattern is estimated to be between 310 and 330 km, that is, around the F peak during the observation.
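    The eastward drift velocity follows from the baseline between the two receivers divided by the lag of the peak cross-correlation between their signals. A sketch on synthetic band-limited signals (the 260 m baseline, sample rate, and smoothing are all illustrative assumptions, chosen so the recovered speed is ~130 m/s):

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 50.0                                  # sample rate, Hz (illustrative)
t = np.arange(0.0, 60.0, 1.0 / fs)
baseline_m = 260.0                         # hypothetical east-west receiver separation
drift_mps = 130.0                          # eastward drift speed to recover
lag_true = baseline_m / drift_mps          # 2.0 s

# Band-limited stand-in for a scintillation intensity pattern.
s = np.convolve(rng.normal(size=t.size), np.ones(25) / 25, mode="same")

shift = int(round(lag_true * fs))
west = s[shift:]                           # west station sees the pattern first
east = s[: s.size - shift]                 # east station sees it lag_true later

xc = np.correlate(east - east.mean(), west - west.mean(), mode="full")
lag_est = (np.argmax(xc) - (west.size - 1)) / fs
v_est = baseline_m / lag_est
```

    The sign of the recovered lag gives the drift direction; its magnitude, combined with the known baseline, gives the speed.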

  1. Linear relationship between water wetting behavior and microscopic interactions of super-hydrophilic surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jian; Guo, Pan; University of Chinese Academy of Sciences, Beijing 100049

    Using molecular dynamics simulations, we show a fine linear relationship between surface energies and microscopic Lennard-Jones parameters of super-hydrophilic surfaces. The linear slope of the super-hydrophilic surfaces is consistent with the linear slope of the super-hydrophobic, hydrophobic, and hydrophilic surfaces where stable water droplets can stand, indicating that there is a universal linear behavior of the surface energies with the water-surface van der Waals interaction that extends from the super-hydrophobic to super-hydrophilic surfaces. Moreover, we find that the linear relationship exists for various substrate types, and the linear slopes of these different types of substrates are dependent on the surface atom density, i.e., higher surface atom densities correspond to larger linear slopes. These results enrich our understanding of water behavior on solid surfaces, especially the water wetting behaviors on uncharged super-hydrophilic metal surfaces.

  2. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

    There is growing interest in solving linear L1 estimation problems because of the sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network that can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. The proposed neural network is then efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the non-unique solutions of linear L1 estimation problems but also needs much less computational time than related algorithms in solving both linear L1 estimation and image restoration problems.
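    Independent of the neural-network approach, the underlying problem min ||Ax - b||_1 can be written as a linear program, which makes the robustness to gross non-Gaussian outliers easy to demonstrate. A sketch using SciPy's `linprog` (this is a reference formulation, not the paper's method; data are synthetic):

```python
import numpy as np
from scipy.optimize import linprog

def l1_estimate(A, b):
    """Solve min_x ||Ax - b||_1 as a linear program: minimize sum(t)
    subject to -t <= Ax - b <= t, with stacked variables [x, t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
    return res.x[:n]

rng = np.random.default_rng(7)
A = rng.normal(size=(60, 2))
x_true = np.array([1.5, -2.0])
b = A @ x_true
b[:5] += 20.0          # gross non-Gaussian outliers that would wreck least squares

x_l1 = l1_estimate(A, b)   # recovers x_true despite the outliers
```

    A least-squares fit on the same data would be dragged toward the outliers; the L1 objective ignores them as long as the clean equations form a majority, which is the robustness property the abstract highlights.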

  3. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence the accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess the effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate the density of a globally widespread species. We find that animal scale of movement had the greatest impact on the accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that the area covered and the spacing of detectors (e.g. cameras, traps, etc.) must reflect the movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  4. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence the accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess the effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate the density of a globally widespread species. We find that animal scale of movement had the greatest impact on the accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that the area covered and the spacing of detectors (e.g. cameras, traps, etc.) must reflect the movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  5. Regional Rates of Young US Forest Growth Estimated From Annual Landsat Disturbance History and IKONOS Stereo Imagery

    NASA Technical Reports Server (NTRS)

    Neigh, Christopher S. R.; Masek, Jeffrey G.; Bourget, Paul; Rishmawi, Khaldoun; Zhao, Feng; Huang, Chengquan; Cook, Bruce D.; Nelson, Ross

    2015-01-01

    Forests of the Contiguous United States (CONUS) have been found to be a large contributor to the global atmospheric carbon sink. The magnitude and nature of this sink is still uncertain, and recent studies have sought to define the dynamics that control its strength and longevity. The Landsat series of satellites has been a vital resource for understanding the long-term changes in land cover that can impact ecosystem function and terrestrial carbon stock. We combine annual Landsat forest disturbance history from 1985 to 2011 with single-date IKONOS stereo imagery to estimate the change in young forest canopy height and above ground live dry biomass accumulation for selected sites in the CONUS. Our approach assumes an approximately linear growth rate following clearing over short intervals and does not capture the distinctly non-linear growth that occurs over longer intervals. We produced canopy height models by differencing digital surface models estimated from IKONOS stereo pairs with national elevation data (NED). Correlations between height and biomass were established independently using airborne LiDAR, and then applied to the IKONOS-estimated canopy height models. Graphing current biomass against time since disturbance provided biomass accumulation rates. For 20 study sites distributed across five regions of the CONUS, 19 showed statistically significant recovery trends (p < 0.001) with canopy growth from 0.26 m yr-1 to 0.73 m yr-1. Above ground live dry biomass (AGB) density accumulation ranged from 1.31 t/ha yr-1 to 12.47 t/ha yr-1. Mean forest AGB accumulation was 6.31 t/ha yr-1 among all sites with significant growth trends. We evaluated the accuracy of our estimates by comparison with field-estimated site index curves of growth, airborne LiDAR data, and independent model predictions of C accumulation. Growth estimates found with this approach are consistent with site index curves, and total biomass estimates fall within the range of field estimates. This is a viable approach to estimating forest biomass accumulation in regions with clear-cut harvest disturbances.
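A minimal sketch of the linear growth-rate step described above: fit height against time since disturbance, then apply a height-to-biomass relation. All data values and the `biomass_per_m` coefficient below are illustrative placeholders, not the study's Landsat/IKONOS/LiDAR numbers.

```python
import numpy as np

# Hypothetical recovery data: years since detected clearing vs. canopy height (m).
years_since_disturbance = np.array([2, 5, 8, 12, 15, 20, 24])
canopy_height_m = np.array([0.9, 2.4, 3.9, 5.8, 7.2, 9.6, 11.5])

# Approximately linear growth over short intervals: height = rate * t + h0.
rate, h0 = np.polyfit(years_since_disturbance, canopy_height_m, 1)

# Convert height growth to biomass accumulation via an assumed (hypothetical)
# LiDAR-calibrated linear height-to-biomass coefficient.
biomass_per_m = 13.0                      # t/ha of AGB per metre of canopy height
agb_accumulation = rate * biomass_per_m   # t/ha per year
```

With these synthetic points the fitted growth rate is about 0.5 m yr-1, inside the study's reported 0.26 to 0.73 m yr-1 range.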

  6. On the use of a physically-based baseflow timescale in land surface models.

    NASA Astrophysics Data System (ADS)

    Jost, A.; Schneider, A. C.; Oudin, L.; Ducharne, A.

    2017-12-01

    Groundwater discharge is an important component of streamflow, and estimating its spatio-temporal variation in response to changes in recharge is of great value to water resource planning and essential for accurate large-scale water balance modelling in land surface models (LSMs). A first-order representation of groundwater as a single linear storage element is frequently used in LSMs for the sake of simplicity, but it requires a suitable parametrization of the aquifer hydraulic behaviour in the form of the baseflow characteristic timescale (τ). Such a modelling approach can be hampered by the lack of available calibration data at global scale. Hydraulic groundwater theory provides an analytical framework to relate the baseflow characteristics to catchment descriptors. In this study, we use the long-time solution of the linearized Boussinesq equation to estimate τ at global scale, as a function of groundwater flow length and aquifer hydraulic diffusivity. Our goal is to evaluate the use of this spatially variable and physically-based τ in the ORCHIDEE land surface model in terms of simulated river discharges across large catchments. Aquifer transmissivity and drainable porosity stem from the high-resolution GLHYMPS datasets, whereas flow length is derived from an estimate of drainage density using the GRIN global river network. ORCHIDEE is run in offline mode and its results are compared to a reference simulation using an almost spatially constant, topography-dependent τ. We discuss the limits of our approach in terms of both the relevance and accuracy of global estimates of aquifer hydraulic properties and the extent to which the underlying assumptions in the analytical method are valid.
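A hedged sketch of one common form of this calculation, assuming the first-mode (long-time) solution of the linearized Boussinesq equation for an aquifer of flow length L bounded by a fixed-head stream and a no-flow divide, τ = 4 f L² / (π² T) = 4 L² / (π² D) with D = T / f, and flow length taken as half the inverse drainage density. All input values are illustrative, not GLHYMPS/GRIN data, and the exact prefactor depends on the boundary-condition convention used.

```python
import math

T = 1.0e-3                 # aquifer transmissivity, m^2/s (assumed)
f = 0.05                   # drainable porosity, dimensionless (assumed)
drainage_density = 1.0e-3  # channel length per unit area, 1/m (assumed)

L = 1.0 / (2.0 * drainage_density)  # flow length from drainage density, m
D = T / f                           # hydraulic diffusivity, m^2/s

# First-mode decay timescale of the linearized Boussinesq equation.
tau_seconds = 4.0 * L**2 / (math.pi**2 * D)
tau_days = tau_seconds / 86400.0
```

For these placeholder values (L = 500 m, D = 0.02 m²/s) the baseflow timescale comes out near two months, a plausible order of magnitude for a slowly draining aquifer.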

  7. Comparison of turbulence estimation for four- and five-beam ADCP configurations

    NASA Astrophysics Data System (ADS)

    Togneri, Michael; Masters, Ian; Jones, Dale

    2017-04-01

    Turbulence is a vital consideration for tidal power generation, as the resulting fluctuating loads greatly impact the fatigue life of tidal turbines and their components. Acoustic Doppler current profilers (ADCPs) are one of the most common tools for measurement of currents in tidal power applications, and although most often used for assessment of mean current properties they are also capable of measuring turbulence parameters. Conventional ADCPs use four diverging beams in a so-called 'Janus' configuration, but more recent models employ an additional vertical beam. In this paper we explore the improvements to turbulence measurements that are made possible by the addition of the fifth beam, with a focus on estimation of turbulent kinetic energy (TKE) density. The standard approach for estimating TKE density from ADCP measurements is the variance method. As each of the diverging beams measures a single velocity component at spatially-separated points, it is not possible to find the TKE density by a straightforward combination of beam measurements. Instead, we must assume that the statistical properties of the turbulence are uniform across the spatial extent of the beams; it is then possible to express the TKE density as a linear combination of the velocity variance measured by each beam. In the four-beam configuration, an additional assumption about the magnitude of the turbulent anisotropy is required: a parameter ξ is introduced that characterises the proportion of TKE in the vertical fluctuations. With the five-beam configuration, direct measurements of the vertical component are available and this assumption is no longer required. In this paper, turbulence measurements from a five-beam ADCP deployed off the coast of Anglesey in 2014 are analysed. We compare turbulence estimates using all five beams to estimates obtained using only the conventional four-beam setup by discarding the vertical beam data.
This allows us to quantify the error in the standard value of ξ. We find that it is on average within 3.4% of the real value, although there are times for which it is much greater. We also discuss the Doppler noise correction in the five-beam case, which is more complex than the four-beam case due to the different noise properties of the vertical beam.
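A sketch of the variance method for a Janus configuration, assuming beam geometry b1,b2 = ±u sinθ + w cosθ and b3,b4 = ±v sinθ + w cosθ, so that the sum of the four along-beam variances is S = 2 sin²θ (σu² + σv²) + 4 cos²θ σw². Here the anisotropy parameter is taken as ξ = σw² / (σu² + σv²); the paper's exact definition of ξ may differ, and all variance values are hypothetical.

```python
import numpy as np

theta = np.deg2rad(25.0)   # beam inclination from vertical (assumed)
rho = 1025.0               # seawater density, kg/m^3

# Hypothetical along-beam velocity variances, (m/s)^2.
s1, s2, s3, s4 = 4.1e-3, 3.9e-3, 3.2e-3, 3.4e-3
S = s1 + s2 + s3 + s4

# Four-beam estimate: the system only closes if we ASSUME a value of xi.
xi_assumed = 0.17
horiz_var_4 = S / (2*np.sin(theta)**2 + 4*xi_assumed*np.cos(theta)**2)
tke_4beam = 0.5 * rho * (1 + xi_assumed) * horiz_var_4

# Five-beam estimate: the vertical beam measures var(w') directly,
# so no anisotropy assumption is needed.
w_var = 2.8e-3  # hypothetical fifth-beam variance, (m/s)^2
horiz_var_5 = (S - 4*np.cos(theta)**2 * w_var) / (2*np.sin(theta)**2)
tke_5beam = 0.5 * rho * (horiz_var_5 + w_var)
```

When the assumed ξ is close to the value implied by the fifth beam, the two TKE density estimates agree to within a few per cent, which is the kind of comparison the paper quantifies.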

  8. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    DTIC Science & Technology

    2015-09-30

    Large Scale Density Estimation of Blue and Fin Whales ...Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density Len Thomas & Danielle Harris Centre...to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope

  9. Carbon - Bulk Density Relationships for Highly Weathered Soils of the Americas

    NASA Astrophysics Data System (ADS)

    Nave, L. E.

    2014-12-01

    Soils are dynamic natural bodies composed of mineral and organic materials. As a result of this mixed composition, essential properties of soils such as their apparent density, organic and mineral contents are typically correlated. Negative relationships between bulk density (Db) and organic matter concentration provide well-known examples across a broad range of soils, and such quantitative relationships among soil properties are useful for a variety of applications. First, gap-filling or data interpolation often are necessary to develop large soil carbon (C) datasets; furthermore, limitations of access to analytical instruments may preclude C determinations for every soil sample. In such cases, equations to derive soil C concentrations from basic measures of soil mass, volume, and density offer significant potential for purposes of soil C stock estimation. To facilitate estimation of soil C stocks on highly weathered soils of the Americas, I used observations from the International Soil Carbon Network (ISCN) database to develop carbon - bulk density prediction equations for Oxisols and Ultisols. Within a small sample set of georeferenced Oxisols (n=89), 29% of the variation in A horizon C concentrations can be predicted from Db. Including the A-horizon sand content improves predictive capacity to 35%. B horizon C concentrations (n=285) were best predicted by Db and clay content, but were more variable than A-horizons (only 10% of variation explained by linear regression). Among Ultisols, a larger sample set allowed investigation of specific horizons of interest. For example, C concentrations of plowed A (Ap) horizons are predictable based on Db, sand and silt contents (n=804, r2=0.38); gleyed argillic (Btg) horizon concentrations are predictable from Db, sand and clay contents (n=190, r2=0.23). 
Because soil C stock estimates are more sensitive to variation in soil mass and volume determinations than to variation in C concentration, prediction equations such as these may be used on carefully collected samples to constrain soil C stocks. The geo-referenced ISCN database allows users the opportunity to derive similar predictive relationships among measured soil parameters; continued input of new datasets from highly weathered soils of the Americas will improve the precision of these prediction equations.
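A minimal sketch of fitting a carbon - bulk density prediction equation of the form C = b0 + b1·Db + b2·sand by ordinary least squares. The data points below are synthetic placeholders, not ISCN observations, so the fitted coefficients and r2 are purely illustrative.

```python
import numpy as np

# Synthetic A-horizon-style samples (placeholders, not ISCN data).
Db   = np.array([0.9, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6])  # bulk density, g/cm3
sand = np.array([55., 60., 48., 52., 65., 58., 70.])   # sand content, %
C    = np.array([3.2, 2.5, 2.4, 1.9, 1.6, 1.3, 1.0])   # organic C, %

# Ordinary least squares: C = b0 + b1*Db + b2*sand.
X = np.column_stack([np.ones_like(Db), Db, sand])
coef, *_ = np.linalg.lstsq(X, C, rcond=None)
C_hat = X @ coef
r2 = 1 - np.sum((C - C_hat)**2) / np.sum((C - C.mean())**2)
```

The negative coefficient on Db reproduces the well-known inverse relationship between bulk density and organic matter that the abstract describes.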

  10. Modelisations et inversions tri-dimensionnelles en prospections gravimetrique et electrique

    NASA Astrophysics Data System (ADS)

    Boulanger, Olivier

    The aim of this thesis is the application of gravity and resistivity methods for mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of the subsurface resistivity from electrical data. The first technique concerns the elaboration of a three dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as the minimum distance, the flatness, the smoothness and the compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve separately large and short gravity wavelengths. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks. The problem is solved by calculating the model parameters, i.e. the densities of each block. Weights are given to each block depending on depth, a priori information on density, and density range allowed for the region under investigation. The present code is tested on synthetic data. Advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of geometry (depth, size) and density distribution of the original model is dependent on the set of constraints used. The best combination of constraints experimented for multiple bodies seems to be flatness and minimum volume for multiple bodies. The inversion method is tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. 
Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh allowing an exact estimation of the potential. Modelling generates a huge matrix made of Green's functions which is stored by using the method of pyramidal compression. The third method consists to interpret electrical potential measurements from a non-linear geostatistical approach including new constraints. This method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)

  11. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.

    PubMed

    Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
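A dependency-light sketch of the feature-augmentation idea: estimate univariate marginal densities per class with a Gaussian kernel and form per-feature log density-ratio features. The full FANS procedure then feeds these into penalized logistic regression; here the ratios are simply summed (a nonparametric naive-Bayes score) to keep the illustration self-contained. Data, bandwidth, and class shift are all synthetic choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_logpdf(train, query, bw=0.4):
    """Univariate Gaussian kernel density estimate, evaluated in log space."""
    d = (query[:, None] - train[None, :]) / bw
    k = np.exp(-0.5 * d**2) / (bw * np.sqrt(2 * np.pi))
    return np.log(k.mean(axis=1) + 1e-300)

# Synthetic two-class training data: class 1 shifted in both features.
n = 200
X0 = rng.normal(0.0, 1.0, size=(n, 2))
X1 = rng.normal(1.2, 1.0, size=(n, 2))

def score(X):
    """Sum of per-feature log marginal density ratios log f1_j - log f0_j."""
    s = np.zeros(len(X))
    for j in range(X.shape[1]):
        s += kde_logpdf(X1[:, j], X[:, j])
        s -= kde_logpdf(X0[:, j], X[:, j])
    return s

# Classify held-out points by the sign of the score.
Xte0 = rng.normal(0.0, 1.0, size=(500, 2))
Xte1 = rng.normal(1.2, 1.0, size=(500, 2))
acc = 0.5 * ((score(Xte0) < 0).mean() + (score(Xte1) > 0).mean())
```

On this toy problem the density-ratio score classifies well above chance; FANS improves on this plain sum by letting a penalized logistic regression weight and select the augmented features.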

  12. Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification

    PubMed Central

    Feng, Yang; Jiang, Jiancheng; Tong, Xin

    2015-01-01

    We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing. PMID:27185970

  13. The role of predictive uncertainty in the operational management of reservoirs

    NASA Astrophysics Data System (ADS)

    Todini, E.

    2014-09-01

    The present work deals with the operational management of multi-purpose reservoirs, whose optimisation-based rules are derived, in the planning phase, via deterministic (linear and nonlinear programming, dynamic programming, etc.) or via stochastic (generally stochastic dynamic programming) approaches. In operation, the resulting deterministic or stochastic optimised operating rules are then triggered based on inflow predictions. In order to fully benefit from predictions, one must avoid using them as direct inputs to the reservoirs, but rather assess the "predictive knowledge" in terms of a predictive probability density to be operationally used in the decision making process for the estimation of expected benefits and/or expected losses. Using a theoretical and extremely simplified case, it will be shown why directly using model forecasts instead of the full predictive density leads to less robust reservoir management decisions. Moreover, the effectiveness and the tangible benefits for using the entire predictive probability density instead of the model predicted values will be demonstrated on the basis of the Lake Como management system, operational since 1997, as well as on the basis of a case study on the lake of Aswan.
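A toy numerical illustration of the abstract's central point, under assumptions of my own: an asymmetric loss (flooding is ten times costlier per unit than pre-releasing water), a lognormal predictive density represented by samples, and a plug-in decision that trusts only the point forecast. None of the numbers correspond to Lake Como or Aswan.

```python
import numpy as np

rng = np.random.default_rng(1)
capacity = 50.0  # free storage before pre-release, arbitrary units

def loss(r, q):
    """Cost of pre-releasing r when inflow turns out to be q (vectorised)."""
    excess = np.maximum(q - (capacity + r), 0.0)
    return 10.0 * excess + 1.0 * r   # flood damage + lost water

# "Predictive knowledge": the full predictive density, here as samples.
pred_samples = rng.lognormal(mean=3.5, sigma=0.5, size=20000)
point_forecast = pred_samples.mean()

# Decision using the full predictive density: minimise expected loss.
candidates = np.linspace(0.0, 60.0, 241)
expected_losses = [loss(r, pred_samples).mean() for r in candidates]
r_density = candidates[int(np.argmin(expected_losses))]

# Plug-in decision: treat the point forecast as if it were certain.
r_plugin = max(point_forecast - capacity, 0.0)

# Evaluate both decisions against the same predictive density.
el_density = loss(r_density, pred_samples).mean()
el_plugin = loss(r_plugin, pred_samples).mean()
```

Because the mean forecast sits below capacity, the plug-in rule releases nothing and absorbs the full flood risk in the upper tail, while the density-based rule hedges with a positive pre-release and achieves a lower expected loss.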

  14. Threshold foraging behavior of baleen whales

    USGS Publications Warehouse

    Piatt, John F.; Methven, David A.

    1992-01-01

    We conducted hydroacoustic surveys for capelin Mallotus villosus in Witless Bay, Newfoundland, Canada, on 61 days during the summers of 1983 to 1985. On 32 of those days in which capelin surveys were conducted, we observed a total of 129 baleen whales, including 93 humpback Megaptera novaeangliae, 31 minke Balaenoptera acutorostrata and 5 fin whales B. physalus. Although a few whales were observed when capelin schools were scarce, the majority (96%) of whales were observed when mean daily capelin densities exceeded 5 schools per linear km surveyed (range of means over 3 yr: 0.0 to 14.0 schools km-1). Plots of daily whale abundance (no. h-1 surveyed) vs daily capelin school density (mean no. schools km-1 surveyed) in each summer revealed that baleen whales have a threshold foraging response to capelin density. Thresholds were estimated using a simple iterative step-function model. Foraging thresholds of baleen whales (7.3, 5.0, and 5.8 schools km-1) varied between years in relation to the overall abundance of capelin schools in the study area during summer (means of 7.2, 3.3, and 5.3 schools km-1, respectively).
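A minimal version of an iterative step-function threshold fit like the one described above: model whale abundance as one mean below a capelin-density threshold and another above it, and select the threshold that minimises the residual sum of squares. The data are synthetic, generated with a true threshold near 5 schools/km; this is a sketch of the idea, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(2)
schools_per_km = rng.uniform(0, 12, size=80)            # capelin school density
whales_per_hr = np.where(schools_per_km > 5.0, 3.0, 0.2) + rng.normal(0, 0.3, 80)

# Try each observed density as a candidate threshold; keep the best split.
best_sse, best_thr = np.inf, None
for thr in np.sort(schools_per_km)[1:-1]:
    lo = schools_per_km <= thr
    hi = ~lo
    pred = np.where(lo, whales_per_hr[lo].mean(), whales_per_hr[hi].mean())
    sse = float(np.sum((whales_per_hr - pred)**2))
    if sse < best_sse:
        best_sse, best_thr = sse, thr
```

With a clear jump in the response, the recovered threshold lands at the data point just below the true break, which is how a step-function model localises a foraging threshold.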

  15. Quantile regression models of animal habitat relationships

    USGS Publications Warehouse

    Cade, Brian S.

    2003-01-01

    Typically, not all factors that limit an organism are measured and included in the statistical models used to investigate relationships with its environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates the performance of quantile rankscore tests used for hypothesis testing and for constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth ratios. Chapter 3 evaluates a drop-in-dispersion, F-ratio-like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that can occur when the effect of a measured habitat variable on some animal is confounded with the effect of another unmeasured variable (spatially structured or not).
Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20 - 300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
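A sketch of linear quantile regression on heterogeneous data: the τ-th conditional quantile line minimises the summed pinball (check) loss ρ_τ(r) = r·(τ − 1{r < 0}). Production tools (R's quantreg, statsmodels' QuantReg) solve this by linear programming; direct numerical minimisation is enough for an illustration, and the data below are synthetic with variance increasing in x.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 400)
# Heterogeneous (unequal-variance) response, as in the abstract's motivation.
y = 1.0 + 0.5 * x + rng.normal(0, 1.0, 400) * (0.2 + 0.2 * x)

tau = 0.9  # upper conditional quantile

def pinball(params):
    r = y - (params[0] + params[1] * x)
    return np.sum(r * (tau - (r < 0)))

start = np.polyfit(x, y, 1)[::-1]  # OLS line as a starting point
fit = minimize(pinball, start, method="Nelder-Mead")
b0, b1 = fit.x
frac_below = np.mean(y <= b0 + b1 * x)
```

Because the error variance grows with x, the fitted 0.9-quantile slope exceeds the mean-regression slope of 0.5, illustrating how upper quantiles can reveal limiting-factor effects that the conditional mean hides.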

  16. Use of nonlinear programming to optimize performance response to energy density in broiler feed formulation.

    PubMed

    Guevara, V R

    2004-02-01

    A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. The optimum metabolizable energy level and performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function (income over feed cost) in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding programs were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability, and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with the conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming for optimizing the performance response to energy density in broiler feed formulation, because an energy level does not need to be set.
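A toy version of the optimization described above: quadratic response curves for gain and feed intake as functions of energy density E, a feed price that rises with E, and a margin objective maximised over E rather than fixing E in advance (the key difference from linear programming). All coefficients and prices are hypothetical, not the paper's or NRC's data.

```python
import numpy as np

def gain(E):          # kg live weight per bird over the period (hypothetical fit)
    return -0.050 * E**2 + 1.30 * E - 6.0

def feed(E):          # kg feed consumed per bird (hypothetical fit)
    return -0.040 * E**2 + 0.90 * E - 1.5

def cost_per_kg(E):   # feed price rises with energy density, $/kg (assumed)
    return 0.08 + 0.015 * E

price = 1.1           # $/kg live weight (assumed)

# Margin over feed cost, maximised over candidate energy densities (MJ ME/kg).
E_grid = np.linspace(10.0, 14.0, 4001)
margin = price * gain(E_grid) - cost_per_kg(E_grid) * feed(E_grid)
E_opt = float(E_grid[int(np.argmax(margin))])
```

The maximiser sits strictly inside the candidate range, which is exactly what an energy-level constraint in a linear programming formulation would prevent the model from finding on its own.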

  17. Dose uncertainties associated with a set density override of unknown hip prosthetic composition.

    PubMed

    Rijken, James D; Colyer, Christopher J

    2017-09-01

    The dosimetric uncertainties associated with radiotherapy through hip prostheses when overriding the implant to a set density within the TPS have not previously been reported. In this study, the uncertainty in dose within a PTV resulting from this planning choice was investigated. A set of metallic hip prosthetics (stainless steel, titanium, and two different Co-Cr-Mo alloys) were CT scanned in a water bath. Within the TPS, the prosthetic pieces were overridden to densities between 3 and 10 g/cm3 and irradiated on a linear accelerator. Measured dose maps were compared to the TPS to determine which density was most appropriate to override each metal. This was shown to be in disagreement with the reported literature values of density, which was attributed to the TPS dose calculation algorithm and to differences in the total mass attenuation coefficients of water and metal. The dose difference was then calculated for a set density override of 6 g/cm3 in the TPS and used to estimate the dose uncertainty beyond the prosthesis. For beams passing through an implant, the dosimetric uncertainty in regions of the PTV may be as high as 10% if the implant composition remains unknown and a set density override is used. These results highlight the limitations of such assumptions and the need for careful consideration by radiation oncologists, therapists, and physics staff. © 2017 Adelaide Radiotherapy Centre. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  18. Electron density and plasma dynamics of a spherical theta pinch

    NASA Astrophysics Data System (ADS)

    Teske, C.; Liu, Y.; Blaes, S.; Jacoby, J.

    2012-03-01

    A spherical theta pinch for plasma stripper applications has been developed and investigated with regard to the electron density and the plasma confinement during the pinching sequence. The setup consists of a 6 μH induction coil surrounding a 4000 ml spherical discharge vessel and a capacitor bank with interchangeable capacitors leading to an overall capacitance of 34 μF and 50 μF, respectively. A thyristor switch is used for driving the resonant circuit. Pulsed coil currents reached values of up to 26 kA with a maximum induction of 500 mT. Typical gas pressures were 0.7 Pa up to 120 Pa with ArH2 (2.8% H2) gas as a discharge medium. Stark broadening measurements of the Hβ emission line were carried out in order to evaluate the electron density of the discharge. In accordance with the density measurements, the transfer efficiency was estimated and a scaling law between electron density and discharge energy was established for the current setup. The densities reached values of up to 8 × 10²² m⁻³ for an energy of 1.6 kJ transferred into the plasma. Further, the pinching of the discharge plasma was documented and the different stages of the pinching process were analyzed. The experimental evidence suggests that, for the present setup of the spherical theta pinch, a linear scaling law between the transferred energy and the achievable plasma density can be applied for various applications such as plasma strippers and pulsed ion sources.

  19. Demonstration of line transect methodologies to estimate urban gray squirrel density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hein, E.W.

    1997-11-01

    Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.
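A minimal sketch of the distance-sampling calculation behind a DISTANCE-style line transect estimate, assuming a half-normal detection function g(x) = exp(−x²/2σ²): the maximum-likelihood estimate of σ² from perpendicular detection distances is mean(x²), the effective strip half-width is σ√(π/2), and density is n / (2 L × ESW). The simulated distances and transect layout are hypothetical, not the Clemson survey data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated perpendicular detection distances (m) under a half-normal
# detection function with sigma_true = 15 m (hypothetical).
sigma_true = 15.0
perp_dist = np.abs(rng.normal(0, sigma_true, 120))
L_total = 4 * 500.0   # e.g. four 500-m transects (hypothetical layout)

# Half-normal MLE and effective strip half-width.
sigma2_hat = np.mean(perp_dist**2)
esw = np.sqrt(sigma2_hat) * np.sqrt(np.pi / 2.0)  # metres

# Line transect density estimate: n detections over a strip of width 2*ESW.
n = len(perp_dist)
density_per_m2 = n / (2.0 * L_total * esw)
density_per_ha = density_per_m2 * 1.0e4
```

The same machinery underlies DISTANCE's point estimate; the program additionally fits and compares several detection-function families and propagates the variance components into the confidence interval.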

  20. Study of laser-generated debris free x-ray sources produced in a high-density linear Ar, Kr, Xe, Kr/Ar and Xe/Kr/Ar mixtures gas jets by 2 ω, sub-ps LLNL Titan laser

    NASA Astrophysics Data System (ADS)

    Kantsyrev, V. L.; Schultz, K. A.; Shlyaptseva, V. V.; Safronova, A. S.; Cooper, M. C.; Shrestha, I. K.; Petkov, E. E.; Stafford, A.; Moschella, J. J.; Schmidt-Petersen, M. T.; Butcher, C. J.; Kemp, G. E.; Andrews, S. D.; Fournier, K. B.

    2016-10-01

    The study of laser-generated debris-free x-ray sources in an underdense plasma produced in a high-density linear gas-puff jet was carried out at the LLNL Titan laser (2ω, 45 J, sub-ps) with an intensity in the 10 μm focal spot of 7 × 10¹⁹ W/cm². A linear nozzle with a fast valve was used for the generation of a cluster/gas jet. X-ray diagnostics for the spectral region of 0.7–9 keV included two spectrometers, pinhole cameras, and 3 groups of fast filtered detectors. Electron beams were measured with the EPPS magnetic spectrometer (>1 MeV) and Faraday cups (>72 keV). Spectralon/spectrometer devices were also used to measure absorption of laser radiation in the jets. New results were obtained on: anisotropic generation of x-rays (the laser to x-ray conversion coefficient was >1%) and the characteristics of laser-generated electron beams; the evolution of x-ray generation with the location of the laser focus in a cluster-gas jet; and observations of a strong x-ray flash in some focusing regimes. Non-LTE kinetic modeling was used to estimate plasma parameters. UNR work supported by the DTRA Basic Research Award # HDTRA1-13-1-0033. Work at LLNL was performed under the auspices of the U.S. DOE by LLNL under Contract DE-AC52-07NA27344.

  1. Diet density during the first week of life: Effects on energy and nitrogen balance characteristics of broiler chickens.

    PubMed

    Lamot, D M; Sapkota, D; Wijtten, P J A; van den Anker, I; Heetkamp, M J W; Kemp, B; van den Brand, H

    2017-07-01

    This study aimed to determine effects of diet density on growth performance, energy balance, and nitrogen (N) balance characteristics of broiler chickens during the first wk of life. Effects of diet density were studied using a dose-response design consisting of 5 dietary fat levels (3.5, 7.0, 10.5, 14.0, and 17.5%). The relative difference in dietary energy level was used to increase amino acid levels, mineral levels, and the premix inclusion level at the same ratio. Chickens were housed in open-circuit climate respiration chambers from d 0 to 7 after hatch. Body weight was measured on d 0 and 7, whereas feed intake was determined daily. For calculation of energy balances, O2 and CO2 exchange were measured continuously and all excreta from d 0 to 7 was collected and analyzed at d 7. Average daily gain (ADG) and average daily feed intake (ADFI) decreased linearly (P = 0.047 and P < 0.001, respectively), whereas gain to feed ratio increased (P < 0.001) with increasing diet density. Gross energy (GE) intake and metabolizable energy (ME) intake were not affected by diet density, but the ratio between ME and GE intake decreased linearly with increasing diet density (P = 0.006). Fat, N, and GE efficiencies (expressed as gain per unit of nutrient intake), heat production, and respiratory exchange ratio (CO2 to O2 ratio) decreased linearly (P < 0.001) as diet density increased. Energy retention, N intake, and N retention were not affected by diet density. We conclude that a higher diet density in the first wk of life of broiler chickens did not affect protein and fat retention, whereas the ME to GE ratio decreased linearly with increased diet density. This suggests that diet density appears to affect digestibility rather than utilization of nutrients. © 2017 Poultry Science Association Inc.

  2. Using kernel density estimation to understand the influence of neighbourhood destinations on BMI

    PubMed Central

    King, Tania L; Bentley, Rebecca J; Thornton, Lukar E; Kavanagh, Anne M

    2016-01-01

    Objectives: Little is known about how the distribution of destinations in the local neighbourhood is related to body mass index (BMI). Kernel density estimation (KDE) is a spatial analysis technique that accounts for the location of features relative to each other. Using KDE, this study investigated whether individuals living near destinations (shops and service facilities) that are more intensely distributed rather than dispersed have lower BMIs. Study design and setting: A cross-sectional study of 2349 residents of 50 urban areas in metropolitan Melbourne, Australia. Methods: Destinations were geocoded, and kernel density estimates of destination intensity were created using kernels of 400, 800 and 1200 m. Using multilevel linear regression, the association between destination intensity (classified in quintiles Q1 (least) to Q5 (most)) and BMI was estimated in models that adjusted for the following confounders: age, sex, country of birth, education, dominant household occupation, household type, disability/injury and area disadvantage. Separate models included a physical activity variable. Results: For kernels of 800 and 1200 m, there was an inverse relationship between BMI and more intensely distributed destinations (compared to areas with the least destination intensity). Effects were significant at 1200 m: Q4, β −0.86, 95% CI −1.58 to −0.13, p=0.022; Q5, β −1.03, 95% CI −1.65 to −0.41, p=0.001. Inclusion of physical activity in the models attenuated effects, although effects remained marginally significant for Q5 at 1200 m: β −0.77, 95% CI −1.52 to −0.02, p=0.045. Conclusions: This study, conducted within urban Melbourne, Australia, found that participants living in areas of greater destination intensity within 1200 m of home had lower BMIs. Effects were partly explained by physical activity. The results suggest that increasing the intensity of destination distribution could reduce BMI levels by encouraging higher levels of physical activity. PMID:26883235
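A minimal illustration of a kernel density surface of "destination intensity": a Gaussian kernel summed over destination point locations and evaluated at residence locations. Coordinates are in metres on an arbitrary synthetic grid; the study's actual analysis used geocoded Melbourne destinations and kernels of 400, 800 and 1200 m, and its KDE implementation may differ in kernel choice and edge handling.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic destination locations clustered around a local centre (metres).
destinations = rng.normal(loc=[0.0, 0.0], scale=200.0, size=(150, 2))

def intensity(points, dests, bandwidth=800.0):
    """Gaussian-kernel destination intensity at each query point."""
    d2 = ((points[:, None, :] - dests[None, :, :])**2).sum(axis=2)
    return np.exp(-0.5 * d2 / bandwidth**2).sum(axis=1) / (2*np.pi*bandwidth**2)

homes = np.array([[100.0, 0.0],       # residence near the destination cluster
                  [3000.0, 3000.0]])  # remote residence
near_intensity, far_intensity = intensity(homes, destinations)
```

The residence near the cluster receives a much higher intensity value than the remote one; binning such values into quintiles gives the Q1 to Q5 exposure variable used in the regression models.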

  3. Performance of the Cottonscan Instrument for Measuring the Average Fiber Linear Density (Fineness) of Cotton Lint Samples

    USDA-ARS's Scientific Manuscript database

    This paper explores the Cottonscan™ instrument, a new technology designed for routine measurement of the average linear density (fineness) of cotton fiber. A major international inter-laboratory trial of the Cottonscan™ system is presented. This expands the range of cottons and laboratories fro...

  4. Precision of the upgraded cottonscan instrument for measuring the average fiber linear density (fineness) of cotton lint samples

    USDA-ARS's Scientific Manuscript database

    An inter-laboratory trial was conducted to validate the operation of the Cottonscan™ technology as a useful technique for determining the average fiber linear density of cotton. The trial confirmed that the technology is acceptable. For fibers fin...

  5. Analysis of the processes occurring in a submicrosecond discharge with a linear current density of up to 3 MA/cm through a thick-wall stainless-steel electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branitsky, A. V.; Grabovski, E. V.; Dzhangobegov, V. V.

    The state of conductors carrying a megampere current from the generator to the load is studied experimentally. It is found that the plasma produced from cylindrical stainless-steel tubes during the passage of a submicrosecond current pulse with a linear density of 3 MA/cm expands with a velocity of 5.5 km/s. Numerical results on the diffusion of the magnetic field induced by a current with a linear density of 1–3 MA/cm into metal electrodes agree with the experimental data on the penetration time of the magnetic field. For a linear current density of 3.1 MA/cm, the experimentally determined electric field strength on the inner surface of the tube is 4 kV/cm. The calculated electric field strength on the inner surface of the tube turns out to be two times higher, which can be explained by plasma production on the outer and inner surfaces of the electrode.

  6. An analysis of a large dataset on immigrant integration in Spain. The Statistical Mechanics perspective on Social Action

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Contucci, Pierluigi; Sandell, Rickard; Vernia, Cecilia

    2014-02-01

    How does immigrant integration in a country change with immigration density? Guided by a statistical mechanics perspective, we propose a novel approach to this problem. The analysis focuses on classical integration quantifiers such as the percentage of jobs (temporary and permanent) given to immigrants, mixed marriages, and newborns with parents of mixed origin. We find that the average values of different quantifiers may exhibit either linear or non-linear growth with immigrant density, and we suggest that social action, a concept identified by Max Weber, causes the observed non-linearity. Using the statistical mechanics notion of interaction to quantitatively emulate social action, a unified mathematical model for integration is proposed and is shown to explain both growth behaviors observed. The linear theory, by contrast, ignoring the possibility of interaction effects, would underestimate the quantifiers by up to 30% when immigrant densities are low, and overestimate them by as much when densities are high. The capacity to quantitatively isolate different types of integration mechanisms makes our framework a suitable tool in the quest for more efficient integration policies.

  7. Accuracy of Multi-echo Magnitude-based MRI (M-MRI) for Estimation of Hepatic Proton Density Fat Fraction (PDFF) in Children

    PubMed Central

    Zand, Kevin A.; Shah, Amol; Heba, Elhamy; Wolfson, Tanya; Hamilton, Gavin; Lam, Jessica; Chen, Joshua; Hooker, Jonathan C.; Gamst, Anthony C.; Middleton, Michael S.; Schwimmer, Jeffrey B.; Sirlin, Claude B.

    2015-01-01

    Purpose To assess accuracy of magnitude-based magnetic resonance imaging (M-MRI) in children to estimate hepatic proton density fat fraction (PDFF) using two to six echoes, with magnetic resonance spectroscopy (MRS)-measured PDFF as a reference standard. Materials and Methods This was an IRB-approved, HIPAA-compliant, single-center, cross-sectional, retrospective analysis of data collected prospectively between 2008 and 2013 in children with known or suspected non-alcoholic fatty liver disease (NAFLD). Two hundred and eighty-six children (8 – 20 [mean 14.2 ± 2.5] yrs; 182 boys) underwent same-day MRS and M-MRI. Unenhanced two-dimensional axial spoiled gradient-recalled-echo images at six echo times were obtained at 3T after a single low-flip-angle (10°) excitation with ≥ 120-ms recovery time. Hepatic PDFF was estimated using the first two, three, four, five, and all six echoes. For each number of echoes, accuracy of M-MRI to estimate PDFF was assessed by linear regression with MRS-PDFF as reference standard. Accuracy metrics were regression intercept, slope, average bias, and R2. Results MRS-PDFF ranged from 0.2 – 40.4% (mean 13.1 ± 9.8%). Using three to six echoes, regression intercept, slope, and average bias were 0.46 – 0.96%, 0.99 – 1.01, and 0.57 – 0.89%, respectively. Using two echoes, these values were 2.98%, 0.97, and 2.72%, respectively. R2 ranged 0.98 – 0.99 for all methods. Conclusion Using three to six echoes, M-MRI has high accuracy for hepatic PDFF estimation in children. PMID:25847512
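
    The accuracy metrics used in this record (regression intercept, slope, average bias, and R2 of the M-MRI estimate against the MRS reference) can be computed with ordinary least squares. This is a minimal sketch with hypothetical names and synthetic paired values, not the authors' analysis code.

```python
import numpy as np

def accuracy_metrics(mrs_pdff, mri_pdff):
    """Linear-regression accuracy of an M-MRI PDFF estimate against the
    MRS-PDFF reference: returns (intercept, slope, average bias, R^2)."""
    x = np.asarray(mrs_pdff, dtype=float)   # reference standard
    y = np.asarray(mri_pdff, dtype=float)   # estimate under test
    slope, intercept = np.polyfit(x, y, 1)  # coefficients, highest power first
    bias = float(np.mean(y - x))            # mean of (estimate - reference)
    resid = y - (slope * x + intercept)
    r2 = 1.0 - resid.var() / y.var()
    return float(intercept), float(slope), bias, float(r2)
```

    An ideal estimator would give intercept 0, slope 1, bias 0, and R2 of 1, which is the benchmark against which the three-to-six-echo results above are judged.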

  8. Accuracy of multiecho magnitude-based MRI (M-MRI) for estimation of hepatic proton density fat fraction (PDFF) in children.

    PubMed

    Zand, Kevin A; Shah, Amol; Heba, Elhamy; Wolfson, Tanya; Hamilton, Gavin; Lam, Jessica; Chen, Joshua; Hooker, Jonathan C; Gamst, Anthony C; Middleton, Michael S; Schwimmer, Jeffrey B; Sirlin, Claude B

    2015-11-01

    To assess accuracy of magnitude-based magnetic resonance imaging (M-MRI) in children to estimate hepatic proton density fat fraction (PDFF) using two to six echoes, with magnetic resonance spectroscopy (MRS)-measured PDFF as a reference standard. This was an IRB-approved, HIPAA-compliant, single-center, cross-sectional, retrospective analysis of data collected prospectively between 2008 and 2013 in children with known or suspected nonalcoholic fatty liver disease (NAFLD). Two hundred eighty-six children (8-20 [mean 14.2 ± 2.5] years; 182 boys) underwent same-day MRS and M-MRI. Unenhanced two-dimensional axial spoiled gradient-recalled-echo images at six echo times were obtained at 3T after a single low-flip-angle (10°) excitation with ≥ 120-ms recovery time. Hepatic PDFF was estimated using the first two, three, four, five, and all six echoes. For each number of echoes, accuracy of M-MRI to estimate PDFF was assessed by linear regression with MRS-PDFF as reference standard. Accuracy metrics were regression intercept, slope, average bias, and R2. MRS-PDFF ranged from 0.2-40.4% (mean 13.1 ± 9.8%). Using three to six echoes, regression intercept, slope, and average bias were 0.46-0.96%, 0.99-1.01, and 0.57-0.89%, respectively. Using two echoes, these values were 2.98%, 0.97, and 2.72%, respectively. R2 ranged from 0.98 to 0.99 for all methods. Using three to six echoes, M-MRI has high accuracy for hepatic PDFF estimation in children. © 2015 Wiley Periodicals, Inc.

  9. Estimating Volume, Biomass, and Carbon in Hedmark County, Norway Using a Profiling LiDAR

    NASA Technical Reports Server (NTRS)

    Nelson, Ross; Naesset, Erik; Gobakken, T.; Gregoire, T.; Stahl, G.

    2009-01-01

    A profiling airborne LiDAR is used to estimate the forest resources of Hedmark County, Norway, a 27390 square kilometer area in southeastern Norway on the Swedish border. One hundred five profiling flight lines totaling 9166 km were flown east-west over the entire county. The lines, spaced 3 km apart north-south, duplicate the systematic pattern of the Norwegian Forest Inventory (NFI) ground plot arrangement, enabling the profiler to transit 1290 circular, 250 square meter fixed-area NFI ground plots while collecting the systematic LiDAR sample. Seven hundred sixty-three of the 1290 plots were overflown within 17.8 m of plot center. Laser measurements of canopy height and crown density are extracted along fixed-length, 17.8 m segments closest to the center of the ground plot and related to basal area, timber volume, and above- and belowground dry biomass. Linear, nonstratified equations that estimate ground-measured total aboveground dry biomass yield an R² = 0.63, with a regression RMSE = 35.2 t/ha. Nonstratified model results for the other biomass components, volume, and basal area are similar, with R² values for all models ranging from 0.58 (belowground biomass, RMSE = 8.6 t/ha) to 0.63. Consistently, the most useful single profiling LiDAR variable is quadratic mean canopy height, h̄_qa. Two-variable models typically include h̄_qa or mean canopy height, h̄_a, with a canopy density or a canopy height standard deviation measure. Stratification by productivity class did not improve the nonstratified models, nor did stratification by pine/spruce/hardwood. County-wide profiling LiDAR estimates are reported by land cover type and compared to NFI estimates.
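
    The record's best single predictor is quadratic mean canopy height. Conventionally this is the square root of the mean squared canopy height along a segment; the record does not define it, so that convention is an assumption here, as are the function names.

```python
import numpy as np

def mean_canopy_height(heights):
    """Arithmetic mean canopy height over a profiling segment."""
    return float(np.mean(np.asarray(heights, dtype=float)))

def quadratic_mean_canopy_height(heights):
    """Quadratic mean canopy height: the square root of the mean squared
    canopy height, which weights taller returns more heavily."""
    h = np.asarray(heights, dtype=float)
    return float(np.sqrt(np.mean(h ** 2)))
```

    By Jensen's inequality the quadratic mean is never below the arithmetic mean, so it responds more strongly to the tall-canopy returns that carry most of the biomass signal.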

  10. Ant-inspired density estimation via random walks

    PubMed Central

    Musco, Cameron; Su, Hsin-Hao

    2017-01-01

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in a few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks. PMID:28928146
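
    The encounter-rate idea can be illustrated with a toy simulation (a sketch, not the paper's model or proofs): anonymous agents random-walk on a torus grid, and a focal agent estimates density as collisions per step. All names and parameters are hypothetical.

```python
import random

def estimate_density(n_agents, side, steps, seed=0):
    """Agents perform independent random walks on a side x side torus grid;
    the focal agent (index 0) estimates local population density as its
    encounter rate: other agents sharing its cell, averaged per step."""
    rng = random.Random(seed)
    pos = [(rng.randrange(side), rng.randrange(side)) for _ in range(n_agents)]
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    encounters = 0
    for _ in range(steps):
        pos = [((x + dx) % side, (y + dy) % side)
               for (x, y), (dx, dy) in ((p, rng.choice(moves)) for p in pos)]
        encounters += sum(1 for p in pos[1:] if p == pos[0])
    return encounters / steps

# 200 agents on a 32 x 32 torus: true density of others is 199/1024, about 0.19
estimate = estimate_density(200, 32, 5000)
```

    The estimate converges toward the per-cell density of other agents, though, as the record stresses, repeated collisions between nearby agents make successive samples dependent and inflate the variance relative to independent sampling.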

  11. State Estimation for Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    Excerpts from the table of contents and glossary: 2.2.1 Linear Inverted Pendulum Model; 2.2.2 Planar Five-link Model; ... Linear Inverted Pendulum Model. LVDT: Linear Variable Differential Transformers. MEMS: Microelectromechanical Systems. MHE: Moving Horizon Estimator. QP...

  12. Response statistics of rotating shaft with non-linear elastic restoring forces by path integration

    NASA Astrophysics Data System (ADS)

    Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael

    2017-07-01

    Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as elastic and non-linear; a comparison is made with a linearized restoring force to assess the effect of force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is not generally the case for the non-linear system except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied: the fast Fourier transform (FFT) is used to simulate the dynamic system's additive noise, which significantly reduces computational time compared to classical PI. Excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique, and multidirectional Markov noise can be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimate of the joint probability density function (PDF) as the initial input. Symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of PI over MC is that PI offers high accuracy in the tail of the probability distribution, which is of critical importance for, e.g., extreme value statistics, system reliability, and first-passage probability.
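
    The FFT-accelerated PI step can be sketched in one dimension. This is a simplified split-step illustration under assumed additive Gaussian noise, not the authors' 4D rotor implementation: the density is advected along the deterministic drift, then convolved with the Gaussian noise kernel via FFT.

```python
import numpy as np

def pi_step(p, x, drift, sigma, dt):
    """One 1-D path-integration step for dX = drift(X) dt + sigma dW:
    advect the density along the deterministic flow, then convolve with
    the Gaussian noise kernel via FFT (split-step, then renormalise)."""
    dx = x[1] - x[0]
    # deterministic sub-step: mass at y moves to y + drift(y) dt
    # (np.interp needs y increasing, true for this drift and small dt)
    y = x + drift(x) * dt
    p_adv = np.interp(x, y, p, left=0.0, right=0.0)
    # stochastic sub-step: multiply by the Gaussian kernel's transform
    k = np.fft.fftfreq(x.size, d=dx)
    p_hat = np.fft.fft(p_adv) * np.exp(-2.0 * (np.pi * k * sigma) ** 2 * dt)
    p_new = np.real(np.fft.ifft(p_hat))
    return p_new / (p_new.sum() * dx)   # renormalisation absorbs the Jacobian

# Ornstein-Uhlenbeck check: drift -x with sigma 1 has stationary density N(0, 1/2)
x = np.linspace(-6.0, 6.0, 512)
p = np.exp(-x ** 2 / 4.0)
p /= p.sum() * (x[1] - x[0])            # start from N(0, 2)
for _ in range(200):
    p = pi_step(x=x, p=p, drift=lambda z: -z, sigma=1.0, dt=0.05)
```

    Iterating the step drives the density to the stationary Gaussian with variance 1/2, a convenient correctness check; the FFT convolution is what replaces the expensive transition-kernel quadrature of classical PI.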

  13. Linear viscoelasticity and thermorheological simplicity of n-hexadecane fluids under oscillatory shear via non-equilibrium molecular dynamics simulations.

    PubMed

    Tseng, Huan-Chang; Wu, Jiann-Shing; Chang, Rong-Yeu

    2010-04-28

    Small-amplitude oscillatory shear flow, with its classic characteristic of a phase shift, was studied using non-equilibrium molecular dynamics simulations of n-hexadecane fluids. In a suitable range of strain amplitude, the fluid possesses significant linear viscoelastic behavior. Non-linear viscoelastic behavior of strain thinning, in which the dynamic modulus decreases monotonically with increasing strain amplitude, was found at extreme strain amplitudes. Under isobaric conditions, temperature strongly affected the range of linear viscoelasticity and the slope of strain thinning. The fluid's phase states, comprising solid-, liquid-, and gel-like states, can be distinguished through a criterion based on the viscoelastic spectrum. As a result, a particular condition under which the viscoelastic behavior of n-hexadecane molecules approaches that of the Rouse chain was obtained. More importantly, evidence of thermorheological simplicity was presented, in which the relaxation modulus obeys the time-temperature superposition principle. Using shift factors from the time-temperature superposition principle, the estimated Arrhenius flow activation energy was in good agreement with related experimental values. Furthermore, a single relaxation modulus master curve exhibited both the transition and terminal zones. Finally, for non-equilibrium thermodynamic states, variations in the density with respect to frequency were revealed.

  14. Non-linear Analysis of Scalp EEG by Using Bispectra: The Effect of the Reference Choice

    PubMed Central

    Chella, Federico; D'Andrea, Antea; Basti, Alessio; Pizzella, Vittorio; Marzetti, Laura

    2017-01-01

    Bispectral analysis is a signal processing technique that makes it possible to capture the non-linear and non-Gaussian properties of EEG signals. It has found various applications in EEG research and clinical practice, including the assessment of anesthetic depth, the identification of epileptic seizures, and more recently, the evaluation of non-linear cross-frequency brain functional connectivity. However, the validity and reliability of the indices drawn from bispectral analysis of EEG signals are potentially biased by the use of a non-neutral EEG reference. The present study aims at investigating the effects of the reference choice on the analysis of the non-linear features of EEG signals through bicoherence, as well as on the estimation of cross-frequency EEG connectivity through two different non-linear measures, i.e., the cross-bicoherence and the antisymmetric cross-bicoherence. To this end, four commonly used reference schemes were considered: the vertex electrode (Cz), the digitally linked mastoids, the average reference, and the Reference Electrode Standardization Technique (REST). The reference effects were assessed both in simulations and in a real EEG experiment. The simulations allowed us to investigate: (i) the effects of electrode density on the performance of the above references in the estimation of bispectral measures; and (ii) the effects of head model accuracy on the performance of the REST. For real data, the EEG signals recorded from 10 subjects during eyes-open resting state were examined, and the distortions induced by the reference choice in the patterns of alpha-beta bicoherence, cross-bicoherence, and antisymmetric cross-bicoherence were assessed. The results showed significant differences in the findings depending on the chosen reference, with the REST providing performance superior to that of all the other references in approximating the ideal neutral reference. In conclusion, this study highlights the importance of considering the effects of the reference choice in the interpretation and comparison of the results of bispectral analysis of scalp EEG. PMID:28559790
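
    Segment-averaged bicoherence, the core quantity in this record, can be estimated as the normalized average of the FFT triple product X(f1)·X(f2)·X*(f1+f2). Below is a minimal single-channel sketch, assuming integer-bin frequencies and one common normalization; the function name and segment length are hypothetical.

```python
import numpy as np

def bicoherence(x, nseg=64):
    """Segment-averaged bicoherence b(f1, f2) of a 1-D signal: values near 1
    indicate quadratic phase coupling among bins f1, f2 and f1 + f2."""
    x = np.asarray(x, dtype=float)
    seg = x[: (x.size // nseg) * nseg].reshape(-1, nseg)
    seg = seg - seg.mean(axis=1, keepdims=True)
    X = np.fft.rfft(seg * np.hanning(nseg), axis=1)
    n = X.shape[1] // 2                      # keep f1 + f2 inside the spectrum
    f1, f2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    triple = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])
    num = np.abs(triple.mean(axis=0)) ** 2
    den = (np.abs(X[:, f1] * X[:, f2]) ** 2).mean(axis=0) * \
          (np.abs(X[:, f1 + f2]) ** 2).mean(axis=0)
    return np.sqrt(num / (den + 1e-20))
```

    A signal whose component at f1 + f2 carries the phase sum of the f1 and f2 components scores near 1 at (f1, f2), while independent phases average out across segments; the cross- and antisymmetric variants in the record apply the same triple product across two channels.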

  15. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
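
    The record's order-statistics estimator is not reproduced here, but the classical distance-method relation it builds on is easy to state: for a completely random (Poisson) pattern with density λ, the quantity πR² for the point-to-nearest-plant distance R is exponential with mean 1/λ, which yields the unbiased estimator sketched below (valid only under the Poisson assumption; names hypothetical).

```python
import numpy as np

def density_from_distances(r):
    """Classical distance-method estimate of plant density for a completely
    random (Poisson) pattern: with R the distance from a random point to its
    nearest plant, pi*R^2 is exponential with mean 1/lambda, and
    (n - 1) / (pi * sum(r_i^2)) is unbiased for lambda."""
    r = np.asarray(r, dtype=float)
    n = r.size
    return (n - 1) / (np.pi * np.sum(r ** 2))
```

    The nonparametric estimator of the record is motivated precisely by the failure of this Poisson-based formula for the regular and aggregated populations mentioned in the abstract.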

  16. Competitions between Rayleigh-Taylor instability and Kelvin-Helmholtz instability with continuous density and velocity profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, W. H.; He, X. T.; CAPT, Peking University, Beijing 100871

    2011-02-15

    In this research, competitions between Rayleigh-Taylor instability (RTI) and Kelvin-Helmholtz instability (KHI) in two-dimensional incompressible fluids within a linear growth regime are investigated analytically. Normalized linear growth rate formulas are obtained for both the RTI, suitable for arbitrary density ratio with a continuous density profile, and the KHI, suitable for arbitrary density ratio with continuous density and velocity profiles. The linear growth rates of pure RTI (γ_RT), pure KHI (γ_KH), and combined RTI and KHI (γ_total) are investigated, respectively. In the pure RTI, it is found that the finite thickness of the density transition layer (L_ρ) reduces the linear growth of the RTI (stabilizes the RTI). In the pure KHI, it is found that, conversely, the finite thickness of the density transition layer increases the linear growth of the KHI (destabilizes the KHI). The finite thickness of the density transition layer decreases the "effective" or "local" Atwood number (A) for both the RTI and the KHI. However, because γ_RT ∝ √A while γ_KH ∝ √(1 − A²), the finite thickness of the density transition layer has the completely opposite roles on the RTI and the KHI noted above. In addition, it is found that the finite thickness of the velocity shear layer (L_u) stabilizes the KHI, and in most cases the combined finite thicknesses of the density transition layer and the velocity shear layer (L_ρ = L_u) also stabilize the KHI. Regarding the combined RTI and KHI, there is a competition between the RTI and the KHI because of the completely opposite effect of the finite thickness of the density transition layer on these two kinds of instability. The competition between the RTI and the KHI depends, respectively, on the Froude number, the density ratio of the light fluid to the heavy one, and the finite thicknesses of the density transition layer and the velocity shear layer. Furthermore, for a fixed Froude number, the linear growth rate ratio of the RTI to the KHI decreases with both the density ratio and the finite thickness of the density transition layer, but increases with the finite thickness of the velocity shear layer and the combined finite thicknesses of the density transition layer and the velocity shear layer (L_ρ = L_u). In summary, our analytical results show that the finite thickness of the density transition layer stabilizes the RTI, and the overall combined effects of the finite thickness of the density transition layer and the velocity shear layer (L_ρ = L_u) also stabilize the KHI. Thus, this effect should be included in applications where the transition layer plays an important role, such as the formation of large-scale structures (jets) in high energy density physics and astrophysics and turbulent mixing.

  17. A Keystone Ant Species Provides Robust Biological Control of the Coffee Berry Borer Under Varying Pest Densities.

    PubMed

    Morris, Jonathan R; Vandermeer, John; Perfecto, Ivette

    2015-01-01

    Species' functional traits are an important part of the ecological complexity that determines the provisioning of ecosystem services. In biological pest control, predator response to pest density variation is a dynamic trait that impacts the provision of this service in agroecosystems. When pest populations fluctuate, farmers relying on biocontrol services need to know how natural enemies respond to these changes. Here we test the effect of variation in coffee berry borer (CBB) density on the biocontrol efficiency of a keystone ant species (Azteca sericeasur) in a coffee agroecosystem. We performed exclosure experiments to measure the infestation rate of CBB released on coffee branches in the presence and absence of ants at four different CBB density levels. We measured infestation rate as the number of CBB bored into fruits after 24 hours, quantified biocontrol efficiency (BCE) as the proportion of infesting CBB removed by ants, and estimated functional response from ant attack rates, measured as the difference in CBB infestation between branches. Infestation rates of CBB on branches with ants were significantly lower (71%-82%) than on those without ants across all density levels. Additionally, biocontrol efficiency was generally high and did not significantly vary across pest density treatments. Furthermore, ant attack rates increased linearly with increasing CBB density, suggesting a Type I functional response. These results demonstrate that ants can provide robust biological control of CBB, despite variation in pest density, and that the response of predators to pest density variation is an important factor in the provision of biocontrol services. Considering how natural enemies respond to changes in pest densities will allow for more accurate biocontrol predictions and better-informed management of this ecosystem service in agroecosystems.

  18. A Keystone Ant Species Provides Robust Biological Control of the Coffee Berry Borer Under Varying Pest Densities

    PubMed Central

    Morris, Jonathan R.; Vandermeer, John; Perfecto, Ivette

    2015-01-01

    Species’ functional traits are an important part of the ecological complexity that determines the provisioning of ecosystem services. In biological pest control, predator response to pest density variation is a dynamic trait that impacts the provision of this service in agroecosystems. When pest populations fluctuate, farmers relying on biocontrol services need to know how natural enemies respond to these changes. Here we test the effect of variation in coffee berry borer (CBB) density on the biocontrol efficiency of a keystone ant species (Azteca sericeasur) in a coffee agroecosystem. We performed exclosure experiments to measure the infestation rate of CBB released on coffee branches in the presence and absence of ants at four different CBB density levels. We measured infestation rate as the number of CBB bored into fruits after 24 hours, quantified biocontrol efficiency (BCE) as the proportion of infesting CBB removed by ants, and estimated functional response from ant attack rates, measured as the difference in CBB infestation between branches. Infestation rates of CBB on branches with ants were significantly lower (71%-82%) than on those without ants across all density levels. Additionally, biocontrol efficiency was generally high and did not significantly vary across pest density treatments. Furthermore, ant attack rates increased linearly with increasing CBB density, suggesting a Type I functional response. These results demonstrate that ants can provide robust biological control of CBB, despite variation in pest density, and that the response of predators to pest density variation is an important factor in the provision of biocontrol services. Considering how natural enemies respond to changes in pest densities will allow for more accurate biocontrol predictions and better-informed management of this ecosystem service in agroecosystems. PMID:26562676

  19. On the design of classifiers for crop inventories

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.; Takacs, H. C.

    1986-01-01

    Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper, expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.

  20. Cosmological Density and Power Spectrum from Peculiar Velocities: Nonlinear Corrections and Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Silberman, L.; Dekel, A.; Eldar, A.; Zehavi, I.

    2001-08-01

    We allow for nonlinear effects in the likelihood analysis of galaxy peculiar velocities and obtain ~35% lower values for the cosmological density parameter Ωm and for the amplitude of mass density fluctuations σ8Ωm^0.6. This result is obtained under the assumption that the power spectrum in the linear regime is that of the flat ΛCDM model (h=0.65, n=1, COBE normalized) with only Ωm as a free parameter. Since the likelihood is driven by the nonlinear regime, we "break" the power spectrum at kb ~ 0.2 (h^-1 Mpc)^-1 and fit a power law at k > kb. This allows for independent matching of the nonlinear behavior and an unbiased fit in the linear regime. The analysis assumes Gaussian fluctuations and errors and a linear relation between velocity and density. Tests using mock catalogs that properly simulate nonlinear effects demonstrate that this procedure results in a reduced bias and a better fit. We find for the Mark III and SFI data Ωm = 0.32 ± 0.06 and 0.37 ± 0.09, respectively, with σ8Ωm^0.6 = 0.49 ± 0.06 and 0.63 ± 0.08, in agreement with constraints from other data. The quoted 90% errors include distance errors and cosmic variance, for fixed values of the other parameters. The improvement in the likelihood due to the nonlinear correction is very significant for Mark III and moderately significant for SFI. When allowing deviations from ΛCDM, we find an indication of a wiggle in the power spectrum: an excess near k ~ 0.05 (h^-1 Mpc)^-1 and a deficiency at k ~ 0.1 (h^-1 Mpc)^-1, or a "cold flow." This may be related to the wiggle seen in the power spectrum from redshift surveys and the second peak in the cosmic microwave background (CMB) anisotropy. A χ2 test applied to modes of a principal component analysis (PCA) shows that the nonlinear procedure improves the goodness of fit and reduces a spatial gradient that was of concern in the purely linear analysis. The PCA allows us to address spatial features of the data and to evaluate and fine-tune the theoretical and error models. It demonstrates in particular that the models used are appropriate for the cosmological parameter estimation performed. We address the potential for optimal data compression using PCA.

  1. Cost Estimation of Naval Ship Acquisition.

    DTIC Science & Technology

    1983-12-01

    ...one a 9-subsystem model, the other a single total cost model. The models were developed using the linear least squares regression technique with... References include: Introduction to Linear Statistical Models, McGraw-Hill, 1961; Helmer, F. T., Bibliography on Pricing Methodology and Cost Estimating, Dept. of Economics and... Keywords: Cost estimation; Acquisition; Parametric cost estimate; linear...

  2. Coarse-grained description of cosmic structure from Szekeres models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sussman, Roberto A.; Gaspar, I. Delgado; Hidalgo, Juan Carlos, E-mail: sussman@nucleares.unam.mx, E-mail: ismael.delgadog@uaem.edu.mx, E-mail: hidalgo@fis.unam.mx

    2016-03-01

    We show that the full dynamical freedom of the well known Szekeres models allows for the description of elaborated 3-dimensional networks of cold dark matter structures (over-densities and/or density voids) undergoing "pancake" collapse. By reducing Einstein's field equations to a set of evolution equations, which themselves reduce in the linear limit to evolution equations for linear perturbations, we determine the dynamics of such structures, with the spatial comoving location of each structure uniquely specified by standard early Universe initial conditions. By means of a representative example we examine in detail the density contrast, the Hubble flow and peculiar velocities of structures that evolved, from linear initial data at the last scattering surface, to fully non-linear 10-20 Mpc scale configurations today. To motivate further research, we provide a qualitative discussion on the connection of Szekeres models with linear perturbations and the pancake collapse of the Zeldovich approximation. This type of structure modelling provides a coarse-grained, but fully relativistic, non-linear and non-perturbative, description of evolving large scale cosmic structures before their virialisation, and as such it has an enormous potential for applications in cosmological research.

  3. The effect of shear flow and the density gradient on the Weibel instability growth rate in the dense plasma

    NASA Astrophysics Data System (ADS)

    Amininasab, S.; Sadighi-Bonabi, R.; Khodadadi Azadboni, F.

    2018-02-01

    Shear stress effects have often been neglected in calculations of the Weibel instability growth rate in laser-plasma interactions. In the present work, the role of shear stress in the Weibel instability growth rate in a dense plasma with a density gradient is explored. As the density gradient increases, the shear stress threshold increases and the range of propagation angles of growing modes is limited. Therefore, by increasing the density gradient of the plasma near the relativistic electron beam-emitting region, the Weibel instability occurs at a higher stress flow. Calculations show that the minimum value of the stress rate threshold for linear polarization is greater than that for circular polarization, and the Weibel instability growth rate for linear polarization is 18.3 times that for circular polarization. For increasing stress and density gradient effects, the maximal growth rates are smaller for growing modes with propagation angles in the ranges π/2 < θ_min < π and 3π/2 < θ_min < 2π in circularly polarized plasma, and for kc/ω_p < 4 in linearly polarized plasma. Therefore, the shear stress and density gradient tend to stabilize the Weibel instability for kc/ω_p < 4 in linearly polarized plasma, and for propagation angles of growing modes in the ranges π/2 < θ_min < π and 3π/2 < θ_min < 2π in circularly polarized plasma.

  4. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  5. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
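    The L1-median referred to above is the geometric median, which, unlike the sample mean, is barely moved by a single far outlier. As an illustration only (the paper's estimator details may differ), Weiszfeld's classic fixed-point iteration computes it:

```python
import numpy as np

def l1_median(X, tol=1e-8, max_iter=500):
    """Geometric (L1) median of the rows of X via Weiszfeld's iteration."""
    y = X.mean(axis=0)  # start from the (non-robust) sample mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - y, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)  # guard against zero distances
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# A single far outlier pulls the mean strongly but barely moves the L1-median
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [100.0, 100.0]])
print(pts.mean(axis=0))   # pulled toward the outlier
print(l1_median(pts))     # stays near the unit square
```

Substituting such a location estimate (and a robust spread estimate) for the sample mean and covariance is the general strategy the abstract describes.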

  6. An at-site flood estimation method in the context of nonstationarity I. A simulation study

    NASA Astrophysics Data System (ADS)

    Gado, Tamer A.; Nguyen, Van-Thanh-Van

    2016-04-01

    The stationarity of annual flood peak records is the traditional assumption of flood frequency analysis. In some cases, however, as a result of land-use and/or climate change, this assumption is no longer valid. Therefore, new statistical models are needed to capture dynamically the change of probability density functions over time, in order to obtain reliable flood estimates. In this study, an innovative method for nonstationary flood frequency analysis is presented. The new method is based on detrending the flood series and applying the L-moments along with the GEV distribution to the transformed 'stationary' series (hereafter called the LM-NS method). The LM-NS method was assessed through a comparative study with the maximum likelihood (ML) method for the nonstationary GEV model, as well as with the stationary (S) GEV model. The comparative study, based on Monte Carlo simulations, was carried out for three nonstationary GEV models: a linear dependence of the mean on time (GEV1), a quadratic dependence of the mean on time (GEV2), and a linear dependence of both the mean and the log standard deviation on time (GEV11). The simulation results indicated that the LM-NS method performs better than the ML method for most of the cases studied, whereas the stationary method provides the least accurate results. An additional advantage of the LM-NS method is that it avoids the numerical problems (e.g., convergence problems) that may occur with the ML method when estimating parameters from small data samples.
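    The LM-NS idea (detrend, then fit a GEV by L-moments to the roughly stationary residual series) can be sketched as follows, using Hosking's standard L-moment estimators on synthetic data; the paper's exact detrending convention and GEV parameterization may differ:

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First two sample L-moments and L-skewness via probability-weighted moments."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    t3 = (6 * b2 - 6 * b1 + b0) / l2
    return l1, l2, t3

def gev_lmom_fit(x):
    """Hosking's L-moment estimators for the GEV (location, scale, shape k)."""
    l1, l2, t3 = sample_lmoments(x)
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2          # shape (Hosking's kappa)
    sigma = l2 * k / ((1 - 2 ** (-k)) * gamma(1 + k))
    mu = l1 - sigma * (1 - gamma(1 + k)) / k
    return mu, sigma, k

# GEV1-style series: linear trend in the mean plus Gumbel (GEV, k = 0) noise.
rng = np.random.default_rng(1)
t = np.arange(60)
flood = 100 + 0.8 * t + rng.gumbel(0, 15, size=60)
trend = np.polyval(np.polyfit(t, flood, 1), t)
detrended = flood - trend + flood.mean()  # remove trend, keep overall level
print(gev_lmom_fit(detrended))            # shape estimate should be near 0
```

The fitted quantiles can then be mapped back onto the trend to give time-varying flood estimates.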

  7. Long-term variations of the upper atmosphere parameters on Rome ionosonde observations and their interpretation

    NASA Astrophysics Data System (ADS)

    Perrone, Loredana; Mikhailov, Andrey; Cesaroni, Claudio; Alfonsi, Lucilla; Santis, Angelo De; Pezzopane, Michael; Scotto, Carlo

    2017-09-01

    A recently proposed self-consistent approach to the analysis of thermospheric and ionospheric long-term trends has been applied to Rome ionosonde summer noontime observations for the (1957-2015) period. This approach includes: (i) a method to extract ionospheric parameter long-term variations; (ii) a method to retrieve, from observed foF1, the neutral composition (O, O2, N2), the exospheric temperature Tex, and the total solar EUV flux with λ < 1050 Å; and (iii) a combined analysis of the ionospheric and thermospheric parameter long-term variations using the theory of ionospheric F-layer formation. Atomic oxygen [O] and the [O]/[N2] ratio control foF1 and foF2, while the neutral temperature Tex controls hmF2 long-term variations. Noontime foF2 and foF1 long-term variations demonstrate a negative linear trend estimated over the (1962-2010) period, which is mainly due to the atomic oxygen decrease after ˜1990. The linear trend in (δhmF2)11y estimated over the (1962-2010) period is very small and insignificant, reflecting the absence of any significant trend in neutral temperature. The retrieved long-term variations of neutral gas density ρ, atomic oxygen [O], and exospheric temperature Tex are controlled by solar and geomagnetic activity, i.e. they have a natural origin. The residual trends estimated over the period of ˜5 solar cycles (1957-2015) are very small (<0.5% per decade) and statistically insignificant.

  8. New Formulation for the Viscosity of n-Butane

    NASA Astrophysics Data System (ADS)

    Herrmann, Sebastian; Vogel, Eckhard

    2018-03-01

    A new viscosity formulation for n-butane, based on the residual quantity concept, uses the reference equation of state by Bücker and Wagner [J. Phys. Chem. Ref. Data 35, 929 (2006)] and is valid in the fluid region from the triple point to 650 K and to 100 MPa. The contributions for the zero-density viscosity and for the initial-density dependence were separately developed, whereas those for the critical enhancement and for the higher-density terms were pretreated. All contributions were given as a function of the reciprocal reduced temperature τ, while the last two contributions were correlated as a function of τ and of the reduced density δ. The different contributions were based on specific primary data sets, whose evaluation and choice were discussed in detail. The final formulation incorporates 13 coefficients derived employing a state-of-the-art linear optimization algorithm. The viscosity at low pressures p ≤ 0.2 MPa is described with an expanded uncertainty of 0.5% (coverage factor k = 2) for temperatures 293 ≤ T/K ≤ 626. The expanded uncertainty in the vapor phase at subcritical temperatures T ≥ 298 K as well as in the supercritical thermodynamic region T ≤ 448 K at pressures p ≤ 30 MPa is estimated to be 1.5%. It is raised to 4.0% in regions where only less reliable primary data sets are available and to 6.0% in ranges without any primary data, but in which the equation of state is valid. A weakness of the reference equation of state in the near-critical region prevents estimation of the expanded uncertainty in this region. Viscosity tables for the new formulation are presented in Appendix B for the single-phase region, for the vapor-liquid phase boundary, and for the near-critical region.

  9. Cities, traffic, and CO2: A multidecadal assessment of trends, drivers, and scaling relationships

    DOE PAGES

    Gately, Conor K.; Hutyra, Lucy R.; Sue Wing, Ian

    2015-04-06

    Emissions of CO2 from road vehicles were 1.57 billion metric tons in 2012, accounting for 28% of US fossil fuel CO2 emissions, but the spatial distributions of these emissions are highly uncertain. We develop a new emissions inventory, the Database of Road Transportation Emissions (DARTE), which estimates CO2 emitted by US road transport at a resolution of 1 km annually for 1980-2012. DARTE reveals that urban areas are responsible for 80% of on-road emissions growth since 1980 and for 63% of total 2012 emissions. We observe nonlinearities between CO2 emissions and population density at broad spatial/temporal scales, with total on-road CO2 increasing nonlinearly with population density, rapidly up to 1,650 persons per square kilometer and slowly thereafter. Per capita emissions decline as density rises, but at markedly varying rates depending on existing densities. Here, we make use of DARTE's bottom-up construction to highlight the biases associated with the common practice of using population as a linear proxy for disaggregating national- or state-scale emissions. Comparing DARTE with existing downscaled inventories, we find biases of 100% or more in the spatial distribution of urban and rural emissions, largely driven by mismatches between inventory downscaling proxies and the actual spatial patterns of vehicle activity at urban scales. Given cities' dual importance as sources of CO2 and an emerging nexus of climate mitigation initiatives, high-resolution estimates such as DARTE are critical both for accurately quantifying surface carbon fluxes and for verifying the effectiveness of emissions mitigation efforts at urban scales.

  10. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data

    PubMed Central

    Broekhuis, Femke; Gopalaswamy, Arjun M.

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100 km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed ‘hotspots’ of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species. PMID:27135614

  11. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    PubMed

    Broekhuis, Femke; Gopalaswamy, Arjun M

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100 km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.

  12. Does the choice of neighbourhood supermarket access measure influence associations with individual-level fruit and vegetable consumption? A case study from Glasgow.

    PubMed

    Thornton, Lukar E; Pearce, Jamie R; Macdonald, Laura; Lamb, Karen E; Ellaway, Anne

    2012-07-27

    Previous studies have provided mixed evidence with regards to associations between food store access and dietary outcomes. This study examines the most commonly applied measures of locational access to assess whether associations between supermarket access and fruit and vegetable consumption are affected by the choice of access measure and scale. Supermarket location data from Glasgow, UK (n = 119), and fruit and vegetable intake data from the 'Health and Well-Being' Survey (n = 1041) were used to compare various measures of locational access. These exposure variables included proximity estimates (with different points-of-origin used to vary levels of aggregation) and density measures using three approaches (Euclidean and road network buffers and kernel density estimation) at distances ranging from 0.4 km to 5 km. Further analysis was conducted to assess the impact of using smaller buffer sizes for individuals who did not own a car. Associations between these multiple access measures and fruit and vegetable consumption were estimated using linear regression models. Levels of spatial aggregation did not impact on the proximity estimates. Counts of supermarkets within Euclidean buffers were associated with fruit and vegetable consumption at 1 km, 2 km and 3 km, and for our road network buffers at 2 km, 3 km, and 4 km. Kernel density estimates provided the strongest associations and were significant at distances of 2 km, 3 km, 4 km and 5 km. Presence of a supermarket within 0.4 km of road network distance from where people lived was positively associated with fruit consumption amongst those without a car (coef. 0.657; s.e. 0.247; p = 0.008). The associations between locational access to supermarkets and individual-level dietary behaviour are sensitive to the method by which the food environment variable is captured. Care needs to be taken to ensure robust and conceptually appropriate measures of access are used, and these should be grounded in clear a priori reasoning.
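    The buffer-count and kernel-density access measures compared in this study can be sketched as follows; the coordinates, bandwidth, and Gaussian kernel here are illustrative assumptions, not the study's GIS implementation (which also used road network distances):

```python
import numpy as np

rng = np.random.default_rng(0)
stores = rng.uniform(0, 10, size=(119, 2))   # hypothetical store coords (km)
homes = rng.uniform(0, 10, size=(5, 2))      # hypothetical respondent homes

def buffer_count(home, stores, radius_km):
    """Euclidean-buffer measure: number of stores within radius of home."""
    d = np.linalg.norm(stores - home, axis=1)
    return int((d <= radius_km).sum())

def kernel_density(home, stores, bandwidth_km):
    """Gaussian kernel density: distance-weighted store intensity at home."""
    d = np.linalg.norm(stores - home, axis=1)
    w = np.exp(-0.5 * (d / bandwidth_km) ** 2) / (2 * np.pi * bandwidth_km ** 2)
    return float(w.sum())

for h in homes:
    print(buffer_count(h, stores, 2.0), round(kernel_density(h, stores, 2.0), 3))
```

Unlike the hard buffer count, the kernel estimate down-weights stores smoothly with distance, which is one plausible reason it yielded the strongest associations in the study.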

  13. Does the choice of neighbourhood supermarket access measure influence associations with individual-level fruit and vegetable consumption? A case study from Glasgow

    PubMed Central

    2012-01-01

    Background Previous studies have provided mixed evidence with regards to associations between food store access and dietary outcomes. This study examines the most commonly applied measures of locational access to assess whether associations between supermarket access and fruit and vegetable consumption are affected by the choice of access measure and scale. Method Supermarket location data from Glasgow, UK (n = 119), and fruit and vegetable intake data from the ‘Health and Well-Being’ Survey (n = 1041) were used to compare various measures of locational access. These exposure variables included proximity estimates (with different points-of-origin used to vary levels of aggregation) and density measures using three approaches (Euclidean and road network buffers and kernel density estimation) at distances ranging from 0.4 km to 5 km. Further analysis was conducted to assess the impact of using smaller buffer sizes for individuals who did not own a car. Associations between these multiple access measures and fruit and vegetable consumption were estimated using linear regression models. Results Levels of spatial aggregation did not impact on the proximity estimates. Counts of supermarkets within Euclidean buffers were associated with fruit and vegetable consumption at 1 km, 2 km and 3 km, and for our road network buffers at 2 km, 3 km, and 4 km. Kernel density estimates provided the strongest associations and were significant at distances of 2 km, 3 km, 4 km and 5 km. Presence of a supermarket within 0.4 km of road network distance from where people lived was positively associated with fruit consumption amongst those without a car (coef. 0.657; s.e. 0.247; p = 0.008). Conclusions The associations between locational access to supermarkets and individual-level dietary behaviour are sensitive to the method by which the food environment variable is captured. Care needs to be taken to ensure robust and conceptually appropriate measures of access are used, and these should be grounded in clear a priori reasoning. PMID:22839742

  14. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    DTIC Science & Technology

    2015-09-30

    Large Scale Density Estimation of Blue and Fin Whales ...sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over...develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse

  15. Estimating Small-Body Gravity Field from Shape Model and Navigation Data

    NASA Technical Reports Server (NTRS)

    Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam

    2008-01-01

    This paper presents a method to model the external gravity field and to estimate the internal density variation of a small-body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite elements definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction and the level of accuracies are presented. We then discuss the inverse problem where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.

  16. Impact of density information on Rayleigh surface wave inversion results

    NASA Astrophysics Data System (ADS)

    Ivanov, Julian; Tsoflias, Georgios; Miller, Richard D.; Peterie, Shelby; Morton, Sarah; Xia, Jianghai

    2016-12-01

    We assessed the impact of density on the estimation of inverted shear-wave velocity (Vs) using the multi-channel analysis of surface waves (MASW) method. We considered the forward modeling theory, evaluated model sensitivity, and tested the effect of density information on the inversion of seismic data acquired in the Arctic. Theoretical review, numerical modeling and inversion of modeled and real data indicated that the density ratios between layers, not the actual density values, impact the determination of surface-wave phase velocities. Application to real data compared surface-wave inversion results using: a) constant density, the most common approach in practice; b) indirect density estimates derived from refraction compressional-wave velocity observations; and c) direct density measurements in a borehole. The use of indirect density estimates reduced the final shear-wave velocity (Vs) results typically by 6-7% and the use of densities from a borehole reduced the final Vs estimates by 10-11% compared to those from assumed constant density. In addition to the improved absolute Vs accuracy, the resulting overall Vs changes were unevenly distributed laterally when viewed on a 2-D section, leading to an overall Vs model structure that was more representative of the subsurface environment. It was observed that the use of constant density instead of increasing density with depth not only can lead to Vs overestimation but can also create inaccurate model structures, such as a low-velocity layer. Thus, optimal Vs estimations can be best achieved using field estimates of subsurface density ratios.

  17. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  18. Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador

    USGS Publications Warehouse

    Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew

    2017-01-01

    The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km2 within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.

  19. Influence of microarchitecture alterations on ultrasonic backscattering in an experimental simulation of bovine cancellous bone aging.

    PubMed

    Apostolopoulos, K N; Deligianni, D D

    2008-02-01

    An experimental model which can simulate physical changes that occur during aging was developed in order to evaluate the effects of change of mineral content and microstructure on ultrasonic properties of bovine cancellous bone. Timed immersion in hydrochloric acid was used to selectively alter the mineral content. Scanning electron microscopy and histological staining of the acid-treated trabeculae demonstrated a heterogeneous structure consisting of a mineralized core and a demineralized layer. The presence of organic matrix contributed very little to normalized broadband ultrasound attenuation (nBUA) and speed of sound. All three ultrasonic parameters, speed of sound, nBUA and backscatter coefficient, were sensitive to changes in apparent density of bovine cancellous bone. A two-component model utilizing a combination of two autocorrelation functions (a densely populated model and a spherical distribution) was used to approximate the backscatter coefficient. The predicted attenuation due to scattering constituted a significant part of the measured total attenuation (due to both scattering and absorption mechanisms) for bovine cancellous bone. Linear regression between trabecular thickness values and correlation lengths estimated from the model showed significant linear correlation, with R² = 0.81 before and R² = 0.80 after demineralization. The accuracy of estimation was found to increase with trabecular thickness.

  20. Partitioning of Aromatic Constituents into Water from Jet Fuels.

    PubMed

    Tien, Chien-Jung; Shu, Youn-Yuen; Ciou, Shih-Rong; Chen, Colin S

    2015-08-01

    A comprehensive study of the most commonly used jet fuels (i.e., Jet A-1 and JP-8) was performed to properly assess potential contamination of the subsurface environment from a leaking underground storage tank at an airport. The objectives of this study were to evaluate the concentration ranges of the major components in the water-soluble fraction of jet fuels and to estimate the jet fuel-water partition coefficients (Kfw) for target compounds using partitioning experiments and a polyparameter linear free-energy relationship (PP-LFER) approach. The average molecular weight of Jet A-1 and JP-8 was estimated to be 161 and 147 g/mole, respectively. The density of Jet A-1 and JP-8 was measured to be 786 and 780 g/L, respectively. The distribution of nonpolar target compounds between the fuel and water phases was described using a two-phase liquid-liquid equilibrium model. Models were derived using Raoult's law convention for the activity coefficients and the liquid solubility. The observed inverse, log-log linear dependence of the Kfw values on the aqueous solubility was well predicted by assuming jet fuel to be an ideal solvent mixture. The experimental partition coefficients were generally well reproduced by the PP-LFER.
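    The ideal-mixture prediction behind that inverse log-log dependence can be illustrated with the fuel properties reported above: under Raoult's law, Kfw ≈ (ρ_fuel/MW_fuel)/S_liquid, so log Kfw falls linearly with log solubility with slope -1. The solute solubilities below are rough illustrative values, not the study's data:

```python
from math import log10

# Ideal-mixture (Raoult's law) sketch of the fuel-water partition coefficient:
# K_fw ~ (molar density of fuel) / (liquid aqueous solubility of the solute).
rho_fuel = 786.0   # g/L, Jet A-1 density (from the abstract)
mw_fuel = 161.0    # g/mol, Jet A-1 average molecular weight (from the abstract)
molar_density = rho_fuel / mw_fuel   # roughly 4.9 mol of fuel per litre

# Hypothetical liquid aqueous solubilities (mol/L), for illustration only
solutes = {"benzene": 2.3e-2, "toluene": 5.6e-3, "naphthalene": 2.4e-4}

for name, s_liq in solutes.items():
    k_fw = molar_density / s_liq
    print(f"{name}: log K_fw = {log10(k_fw):.2f}")
```

Less soluble aromatics thus partition more strongly into the fuel phase, consistent with the log-log linear trend the study reports.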

  1. Molecular Static Third-Order Polarizabilities of Carbon-Cage Fullerene and Their Correlation with Three Geometric Properties: Symmetry, Aromaticity, and Size

    NASA Technical Reports Server (NTRS)

    Moore, C. E.; Cardelino, B. H.; Frazier, D. O.; Niles, J.; Wang, X.-Q.

    1998-01-01

    The static third-order polarizabilities (gamma) of C60, C70, five isomers of C78 and two isomers of C84 were analyzed in terms of three properties, from a geometric point of view: symmetry, aromaticity and size. The polarizability values were based on the finite field approximation using a semiempirical Hamiltonian (AM1) and applied to molecular structures obtained from density functional theory calculations. Symmetry was characterized by the molecular group order. The selection of 6-member rings as aromatic was determined from an analysis of bond lengths. Maximum interatomic distance and surface area were the parameters considered with respect to size. Based on triple linear regression analysis, it was found that the static linear polarizability (alpha) and gamma in these molecules respond differently to geometrical properties: alpha depends almost exclusively on surface area while gamma is affected by a combination of number of aromatic rings, length and group order, in decreasing importance. In the case of alpha, valence electron contributions provide the same information as all-electron estimates. For gamma, the best correlation coefficients are obtained when all-electron estimates are used and when the dependent parameter is ln(gamma) instead of gamma.

  2. Linear calculations of edge current driven kink modes with BOUT++ code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G. Q., E-mail: ligq@ipp.ac.cn; Xia, T. Y.; Lawrence Livermore National Laboratory, Livermore, California 94550

    This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density.

  3. Seismic hazard and seismic risk assessment based on the unified scaling law for earthquakes: Himalayas and adjacent regions

    NASA Astrophysics Data System (ADS)

    Nekrasova, A. K.; Kossobokov, V. G.; Parvez, I. A.

    2015-03-01

    For the Himalayas and neighboring regions, the maps of seismic hazard and seismic risk are constructed with the use of the estimates for the parameters of the unified scaling law for earthquakes (USLE), in which the Gutenberg-Richter law for magnitude distribution of seismic events within a given area is applied in the modified version with allowance for linear dimensions of the area, namely, log N(M, L) = A + B(5 - M) + C log L, where N(M, L) is the expected annual number of the earthquakes with magnitude M in the area with linear dimension L. The spatial variations in the parameters A, B, and C for the Himalayas and adjacent regions are studied on two time intervals from 1965 to 2011 and from 1980 to 2011. The difference in A, B, and C between these two time intervals indicates that seismic activity experiences significant variations on a scale of a few decades. With a global consideration of the seismic belts of the Earth overall, the estimates of coefficient A, which determines the logarithm of the annual average frequency of the earthquakes with a magnitude of 5.0 and higher in the zone with a linear dimension of 1 degree of the Earth's meridian, differ by a factor of 30 and more and mainly fall in the interval from -1.1 to 0.5. The values of coefficient B, which describes the balance between the number of earthquakes with different magnitudes, gravitate to 0.9 and range from less than 0.6 to 1.1 and higher. The values of coefficient C, which estimates the fractal dimension of the local distribution of epicenters, vary from 0.5 to 1.4 and higher. In the Himalayas and neighboring regions, the USLE coefficients mainly fall in the intervals of -1.1 to 0.3 for A, 0.8 to 1.3 for B, and 1.0 to 1.4 for C. The calculations of the local value of the expected peak ground acceleration (PGA) from the maximal expected magnitude provided the necessary basis for mapping the seismic hazards in the studied region. 
When doing this, we used the local estimates of the magnitudes which, according to USLE, correspond to exceedance probabilities of 1% and 10% in 50 years or, if a reliable estimate was absent, the maximum magnitudes reported during the instrumental period. As a result, seismic hazard maps for the Himalayas and the adjacent regions were constructed in terms of standard seismic zoning. Based on these calculations, and to exemplify the method, we present a series of seismic risk maps that take into account the population density exposed to seismic hazard and the dependence of the risk on vulnerability as a function of population density.
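
The USLE recurrence relation above can be evaluated directly. A minimal sketch follows; the coefficient values are illustrative, chosen within the ranges reported for the region, and are not taken from the study:

```python
import math

def usle_annual_rate(magnitude, length_deg, A, B, C):
    """Expected annual number of earthquakes N(M, L) under the unified
    scaling law for earthquakes: log10 N = A + B*(5 - M) + C*log10(L)."""
    return 10 ** (A + B * (5.0 - magnitude) + C * math.log10(length_deg))

# Illustrative coefficients within the intervals quoted for the Himalayas
A, B, C = -0.4, 1.0, 1.2
print(usle_annual_rate(5.0, 1.0, A, B, C))  # rate of M 5 events, 1-degree zone
print(usle_annual_rate(7.0, 1.0, A, B, C))  # larger magnitudes are rarer
```

Doubling C or L raises the expected count through the fractal term, which is how the law accounts for the linear dimension of the area.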

  4. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating the time delay margin for model-reference adaptive control of systems with almost-linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window, using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in the form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound on the time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate, yet not overly conservative, estimate of the time delay margin.
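
The matrix measure (logarithmic norm) that underpins such bounds is straightforward to compute. A minimal sketch for the 2-norm case follows; the example matrix is hypothetical and is not from the paper:

```python
import numpy as np

def matrix_measure_2norm(A):
    """Matrix measure (logarithmic norm) induced by the 2-norm:
    mu_2(A) = largest eigenvalue of the symmetric part (A + A^T)/2."""
    return float(np.linalg.eigvalsh((A + A.T) / 2.0).max())

# Hypothetical stable system matrix; mu_2 < 0 certifies contraction
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
print(matrix_measure_2norm(A))
```

A negative matrix measure gives an exponential decay rate for the unforced dynamics, which is the quantity delay-margin bounds of this kind trade against the size of the delayed term.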

  5. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

    In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. The mean-squared error matrix of the new estimator is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
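
As a point of reference, the ordinary ridge estimator that the difference-based variant builds on can be sketched as follows; the data are synthetic, and this is not the paper's generalized difference-based restricted estimator:

```python
import numpy as np

def ridge_estimator(X, y, k):
    """Ordinary ridge estimator (X'X + kI)^{-1} X'y -- the core ingredient
    the difference-based ridge approach generalizes (sketch only)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.normal(size=50)
print(ridge_estimator(X, y, 0.0))  # k = 0 recovers ordinary least squares
print(ridge_estimator(X, y, 1.0))  # k > 0 shrinks the estimate
```

The ridge parameter k trades bias for variance; with correlated errors and linear restrictions, the paper replaces X'X with difference-based and restricted counterparts.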

  6. The Kummer tensor density in electrodynamics and in gravity

    NASA Astrophysics Data System (ADS)

    Baekler, Peter; Favaro, Alberto; Itin, Yakov; Hehl, Friedrich W.

    2014-10-01

    Guided by results in the premetric electrodynamics of local and linear media, we introduce on 4-dimensional spacetime the new abstract notion of a Kummer tensor density of rank four, K. This tensor density is, by definition, a cubic algebraic functional of a tensor density of rank four, T, which is antisymmetric in its first two and its last two indices: T^{ijkl} = -T^{jikl} = -T^{ijlk}. Thus, K ∼ T^3, see Eq. (46). (i) If T is identified with the electromagnetic response tensor of local and linear media, the Kummer tensor density encompasses the generalized Fresnel wave surfaces for propagating light. In the reversible case, the wave surfaces turn out to be Kummer surfaces as defined in algebraic geometry (Bateman 1910). (ii) If T is identified with the curvature tensor R of a Riemann-Cartan spacetime, then K ∼ R^3 and, in the special case of general relativity, K reduces to the Kummer tensor of Zund (1969). This K is related to the principal null directions of the curvature. We discuss the properties of the general Kummer tensor density. In particular, we decompose K irreducibly under the 4-dimensional linear group GL(4,R) and, subsequently, under the Lorentz group SO(1,3).

  7. Impact of wild prey availability on livestock predation by snow leopards.

    PubMed

    Suryawanshi, Kulbhushansingh R; Redpath, Stephen M; Bhatnagar, Yash Veer; Ramakrishnan, Uma; Chaturvedi, Vaibhav; Smout, Sophie C; Mishra, Charudutt

    2017-06-01

    An increasing proportion of the world's poor is rearing livestock today, and the global livestock population is growing. Livestock predation by large carnivores and their retaliatory killing is becoming an economic and conservation concern. A common recommendation for carnivore conservation and for reducing predation on livestock is to increase wild prey populations based on the assumption that the carnivores will consume this alternative food. Livestock predation, however, could either reduce or intensify with increases in wild prey depending on prey choice and trends in carnivore abundance. We show that the extent of livestock predation by the endangered snow leopard Panthera uncia intensifies with increases in the density of wild ungulate prey, and subsequently stabilizes. We found that snow leopard density, estimated at seven sites, was a positive linear function of the density of wild ungulates-the preferred prey-and showed no discernible relationship with livestock density. We also found that modelled livestock predation increased with livestock density. Our results suggest that snow leopard conservation would benefit from an increase in wild ungulates, but that would intensify the problem of livestock predation for pastoralists. The potential benefits of increased wild prey abundance in reducing livestock predation can be overwhelmed by a resultant increase in snow leopard populations. Snow leopard conservation efforts aimed at facilitating increases in wild prey must be accompanied by greater assistance for better livestock protection and offsetting the economic damage caused by carnivores.

  8. Neighbourhood built environment characteristics associated with different types of physical activity in Canadian adults.

    PubMed

    McCormack, Gavin R

    2017-06-01

    The aim of this study was to estimate the associations between neighbourhood built environment characteristics and transportation walking (TW), recreational walking (RW), and moderate-intensity (MPA) and vigorous-intensity physical activity (VPA) in adults, independent of sociodemographic characteristics and residential self-selection (i.e. the reasons related to physical activity associated with a person's choice of neighbourhood). In 2007 and 2008, 4423 Calgary adults completed land-based telephone interviews capturing physical activity, sociodemographic characteristics and reasons for residential self-selection. Using spatial data, we estimated population density, proportion of green space, path/cycleway length, business density, bus stop density, city-managed tree density, sidewalk length, park type mix and recreational destination mix within a 1.6 km street network distance from the participants' geolocated residential postal code. Generalized linear models estimated the associations between neighbourhood built environment characteristics and weekly neighbourhood-based physical activity participation (≥ 10 minutes/week; odds ratios [ORs]) and, among those who reported participation, duration of activity (unstandardized beta coefficients [B]). The sample included more women (59.7%) than men (40.3%) and the mean (standard deviation) age was 47.1 (15.6) years. TW participation was associated with intersection (OR = 1.11; 95% CI: 1.03 to 1.20) and business (OR = 1.52; 1.29 to 1.78) density, and sidewalk length (OR = 1.19; 1.09 to 1.29), while TW minutes were associated with business (B = 19.24 minutes/week; 11.28 to 27.20) and tree (B = 6.51; 2.29 to 10.72 minutes/week) density, and recreational destination mix (B = -8.88 minutes/week; -12.49 to -5.28). RW participation was associated with path/cycleway length (OR = 1.17; 1.05 to 1.31). 
MPA participation was associated with recreational destination mix (OR = 1.09; 1.01 to 1.17) and sidewalk length (OR = 1.10; 1.02 to 1.19); however, MPA minutes were negatively associated with population density (B = -8.65 minutes/week; -15.32 to -1.98). VPA participation was associated with sidewalk length (OR = 1.11; 1.02 to 1.20), path/cycleway length (OR = 1.12; 1.02 to 1.24) and proportion of neighbourhood green space (OR = 0.89; 0.82 to 0.98). VPA minutes were associated with tree density (B = 7.28 minutes/week; 0.39 to 14.17). Some neighbourhood built environment characteristics appear important for supporting physical activity participation, while others may be more supportive of increasing physical activity duration. Modifications that increase the density of utilitarian destinations and the quantity of available sidewalks in established neighbourhoods could increase overall levels of neighbourhood-based physical activity.

  9. Exploring variable patterns of density-dependent larval settlement among corals with distinct and shared functional traits

    NASA Astrophysics Data System (ADS)

    Doropoulos, Christopher; Gómez-Lemos, Luis A.; Babcock, Russell C.

    2018-03-01

    Coral settlement is a key process for the recovery and maintenance of coral reefs, yet interspecific variation in density-dependent settlement is unknown. Settlement of the submassive Goniastrea retiformis and the corymbose Acropora digitifera and A. millepora was quantified at densities ranging from 1 to 50 larvae per 20 mL, from 110 to 216 h after spawning. Settlement patterns were distinct for each species. Goniastrea settlement was rapid and increased linearly with time, whereas both Acropora spp. hardly settled until crustose coralline algae were provided. Both Goniastrea and A. digitifera showed positive density-dependent settlement, but the relationship was exponential for Goniastrea and linear for A. digitifera. Settlement was highest, but density independent, in A. millepora. Our results suggest that larval density can have significant effects on settler replenishment, and highlight variability in density-dependent settlement among corals with distinct functional traits as well as those with similar functional forms.

  10. Deep sea animal density and size estimated using a Dual-frequency IDentification SONar (DIDSON) offshore the island of Hawaii

    NASA Astrophysics Data System (ADS)

    Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.

    2018-01-01

    Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important to understand mesopelagic ecosystem dynamics and to predict top predators' distribution, DSL composition and density are often estimated from trawls, which may be biased in terms of extrusion, avoidance, and gear-associated effects. Instead, the location and biomass of DSLs can be estimated with active acoustic techniques, though estimates are often in aggregate, without size- or taxon-specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3, and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls, and average animal sizes were much larger as well. A mixed model was used to characterize the numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatter obtained with standard echosounding techniques and to density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.

  11. Weibull Modulus Estimated by the Non-linear Least Squares Method: A Solution to Deviation Occurring in Traditional Weibull Estimation

    NASA Astrophysics Data System (ADS)

    Li, T.; Griffiths, W. D.; Chen, J.

    2017-11-01

    The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for the reliability analysis of brittle and metallic materials. In the last 30 years, many researchers have focused on the bias of the Weibull modulus estimate, and some improvements have been achieved, especially in the case of the LLS method. However, these methods share a shortcoming for a specific type of data, in which the lower tail deviates dramatically from the well-known linear fit of a classic LLS Weibull analysis. This deviation is commonly found in measured material properties, and previous applications of the LLS method to this kind of dataset yield an unreliable linear regression. The deviation was previously attributed to physical flaws (i.e., defects) contained in the materials. However, this paper demonstrates that it can also be caused by the linear transformation of the Weibull function that occurs in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis on the linearized Weibull function, and the Non-linear Least Squares (Non-LS) method is instead recommended for Weibull modulus estimation of casting properties.
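
Non-linear least squares fitting of the (untransformed) Weibull CDF can be sketched with a coarse grid search; this illustrates the idea on synthetic data and is not the authors' solver:

```python
import numpy as np

def weibull_cdf(x, modulus, scale):
    """Two-parameter Weibull CDF."""
    return 1.0 - np.exp(-(x / scale) ** modulus)

def fit_weibull_nls(data):
    """Estimate the Weibull modulus and scale by non-linear least squares
    on the empirical CDF, via a coarse grid search (a sketch of the
    Non-LS idea; avoids the linearizing transform of classic LLS)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    f_emp = (np.arange(1, n + 1) - 0.5) / n   # plotting positions
    moduli = np.linspace(1.0, 15.0, 141)
    scales = np.linspace(0.5 * x.mean(), 2.0 * x.mean(), 101)
    best_m, best_s, best_sse = moduli[0], scales[0], np.inf
    for m in moduli:
        for s in scales:
            sse = np.sum((weibull_cdf(x, m, s) - f_emp) ** 2)
            if sse < best_sse:
                best_m, best_s, best_sse = m, s, sse
    return best_m, best_s

rng = np.random.default_rng(1)
sample = rng.weibull(5.0, size=500) * 100.0   # true modulus 5, true scale 100
m_hat, s_hat = fit_weibull_nls(sample)
print(m_hat, s_hat)
```

Because the residuals are taken in probability space rather than in the doubly-logged coordinates of an LLS Weibull plot, the lower tail no longer receives the distorted weight that drives the deviation discussed above.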

  12. Pattern recognition in the ALFALFA.70 and Sloan Digital Sky Surveys: a catalogue of ˜500 000 H I gas fraction estimates based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Teimoorinia, Hossen; Ellison, Sara L.; Patton, David R.

    2017-02-01

    The application of artificial neural networks (ANNs) for the estimation of the H I gas mass fraction (M_HI/M_*) is investigated, based on a sample of 13 674 galaxies in the Sloan Digital Sky Survey (SDSS) with H I detections or upper limits from the Arecibo Legacy Fast Arecibo L-band Feed Array (ALFALFA) survey. We show that, for an example set of fixed input parameters (g - r colour and i-band surface brightness), a multidimensional quadratic model yields M_HI/M_* scaling relations with a smaller scatter (0.22 dex) than traditional linear fits (0.32 dex), demonstrating that non-linear methods can improve performance over traditional approaches. A more extensive ANN analysis is performed using 15 galaxy parameters that capture variation in stellar mass, internal structure, environment and star formation. Of the 15 parameters investigated, we find that g - r colour, followed by stellar mass surface density, bulge fraction and specific star formation rate, have the best connection with M_HI/M_*. By combining two control parameters that indicate how well a given galaxy in the SDSS is represented by the ALFALFA training set (PR) and the scatter in the training procedure (σfit), we develop a strategy for quantifying which SDSS galaxies our ANN can be adequately applied to, and the associated errors in the M_HI/M_* estimation. In contrast to previous works, our M_HI/M_* estimation has no systematic trend with galactic parameters such as M_*, g - r and star formation rate. We present a catalogue of M_HI/M_* estimates for more than half a million galaxies in the SDSS, of which ˜150 000 galaxies have a secure selection parameter, with an average scatter in the M_HI/M_* estimation of 0.22 dex.
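
The gain from allowing curvature in a scaling relation can be illustrated on synthetic data; the mock relation below is invented for illustration and is not the ALFALFA fit. Fitting a quadratic reduces the residual scatter relative to a straight line whenever the underlying trend is non-linear:

```python
import numpy as np

rng = np.random.default_rng(2)
colour = rng.uniform(0.2, 0.9, size=2000)     # mock g - r colour
# Mock log gas fraction with a genuinely non-linear colour dependence
gas_frac = (1.5 - 2.0 * colour - 3.0 * (colour - 0.5) ** 2
            + rng.normal(scale=0.15, size=colour.size))

def fit_scatter(x, y, degree):
    """RMS scatter (dex) of residuals about a polynomial fit of given degree."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sqrt(np.mean((y - np.polyval(coeffs, x)) ** 2)))

print(fit_scatter(colour, gas_frac, 1))   # linear fit: larger scatter
print(fit_scatter(colour, gas_frac, 2))   # quadratic fit: smaller scatter
```

The same principle, taken further with many inputs and a flexible functional form, is what the ANN exploits.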

  13. Estimating Pressure Reactivity Using Noninvasive Doppler-Based Systolic Flow Index.

    PubMed

    Zeiler, Frederick A; Smielewski, Peter; Donnelly, Joseph; Czosnyka, Marek; Menon, David K; Ercole, Ari

    2018-04-05

    The study objective was to derive models that estimate the pressure reactivity index (PRx) using the noninvasive transcranial Doppler (TCD) based systolic flow index (Sx_a) and mean flow index (Mx_a), both based on mean arterial pressure, in traumatic brain injury (TBI). Using a retrospective database of 347 patients with TBI with intracranial pressure and TCD time series recordings, we derived PRx, Sx_a, and Mx_a. We first derived the autocorrelative structure of PRx based on: (A) autoregressive integrative moving average (ARIMA) modeling in representative patients, and (B) within sequential linear mixed effects (LME) models with various embedded ARIMA error structures for PRx for the entire population. Finally, we performed sequential LME models with embedded PRx ARIMA modeling to find the best model for estimating PRx using Sx_a and Mx_a. Model adequacy was assessed via normally distributed residual density. Model superiority was assessed via Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log likelihood (LL), and analysis of variance testing between models. The most appropriate ARIMA structure for PRx in this population was (2,0,2). This was applied in sequential LME modeling. Two models were superior (employing random effects in the independent variables and intercept): (A) PRx ∼ Sx_a, and (B) PRx ∼ Sx_a + Mx_a. Correlation between observed and estimated PRx with these two models was: (A) 0.794 (p < 0.0001, 95% confidence interval (CI) = 0.788-0.799), and (B) 0.814 (p < 0.0001, 95% CI = 0.809-0.819), with acceptable agreement on Bland-Altman analysis. Through using linear mixed effects modeling and accounting for the ARIMA structure of PRx, one can estimate PRx using noninvasive TCD-based indices. We have described our first attempts at such modeling and PRx estimation, establishing the strong link between two aspects of cerebral autoregulation: measures of cerebral blood flow and those of pulsatile cerebral blood volume. 
Further work is required to validate these models.

  14. Resolvability of regional density structure

    NASA Astrophysics Data System (ADS)

    Plonka, A.; Fichtner, A.

    2016-12-01

    Lateral density variations are the source of mass transport in the Earth at all scales, acting as drivers of convective motion. However, the density structure of the Earth remains largely unknown, since classic seismic observables and gravity provide only weak constraints with strong trade-offs. Current density models are therefore often based on velocity scaling, making strong assumptions on the origin of structural heterogeneities, which may not necessarily be correct. Our goal is to assess whether 3D density structure may be resolvable with emerging full-waveform inversion techniques. We have previously quantified the impact of regional-scale crustal density structure on seismic waveforms, with the conclusion that reasonably sized density variations within the crust can leave a strong imprint on both travel times and amplitudes, and, while this can produce significant biases in velocity and Q estimates, seismic waveform inversion for density may become feasible. In this study we perform principal component analyses of sensitivity kernels for P velocity, S velocity, and density. This is intended to establish the extent to which these kernels are linearly independent, i.e. the extent to which the different parameters may be constrained independently. Since the density imprint we observe is not exclusively linked to travel times and amplitudes of specific phases, we consider waveform differences between complete seismograms. We test the method using a known smooth model of the crust and seismograms with clear Love and Rayleigh waves, showing that - as expected - the first principal kernels maximize sensitivity to SH and SV velocity structure, respectively, and that the leakage between the S velocity, P velocity and density parameter spaces is minimal in the chosen setup. Next, we apply the method to data from 81 events around the Iberian Peninsula, registered in total by 492 stations. 
The objective is to find a principal kernel which would maximize the sensitivity to density, potentially allowing for independent density resolution, and, as the final goal, for direct density inversion.
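
A principal component analysis of flattened sensitivity kernels can be sketched with an SVD. The mock kernels below are random stand-ins rather than actual waveform kernels, with the density kernel deliberately made nearly parallel to the S-velocity kernel to mimic a parameter trade-off:

```python
import numpy as np

def kernel_principal_components(kernels):
    """PCA (via SVD) of flattened sensitivity kernels. Small trailing
    singular values indicate near linear dependence between kernels,
    i.e. parameters that cannot be constrained independently."""
    K = np.vstack([k.ravel() for k in kernels])   # one kernel per row
    K = K - K.mean(axis=1, keepdims=True)         # remove each kernel's mean
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return s, Vt                                  # spectrum, principal kernels

rng = np.random.default_rng(3)
k_vp = rng.normal(size=1000)                      # mock P-velocity kernel
k_vs = rng.normal(size=1000)                      # mock S-velocity kernel
k_rho = 0.9 * k_vs + 0.1 * rng.normal(size=1000)  # density kernel ~ parallel to S
s, _ = kernel_principal_components([k_vp, k_vs, k_rho])
print(s / s[0])   # small last value => strong trade-off with density
```

In the real application the rows would be the discretized P, S and density kernels of a given event-station pair, and a well-separated spectrum would support independent density resolution.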

  15. Childhood factors associated with mammographic density in adult women.

    PubMed

    Lope, Virginia; Pérez-Gómez, Beatriz; Moreno, María Pilar; Vidal, Carmen; Salas-Trejo, Dolores; Ascunce, Nieves; Román, Isabel González; Sánchez-Contador, Carmen; Santamariña, María Carmen; Carrete, Jose Antonio Vázquez; Collado-García, Francisca; Pedraz-Pingarrón, Carmen; Ederra, María; Ruiz-Perales, Francisco; Peris, Mercé; Abad, Soledad; Cabanes, Anna; Pollán, Marina

    2011-12-01

    Growth and development factors could contribute to the development of breast cancer through their association with increased mammographic density. This study examines the influence of certain childhood-related, socio-demographic and anthropometric variables on mammographic density in adult women. The study covered 3574 women aged 45-68 years participating in breast cancer screening programmes in seven Spanish cities. Based on a craniocaudal mammogram, blind, anonymous measurement of mammographic density was made by a single radiologist using Boyd's semiquantitative scale. Data associated with the early stages of life were obtained from a direct survey. Ordinal logistic regression and generalised linear models were employed to estimate the association between mammographic density and the variables covered by the questionnaire. Screening programme was introduced as a random effects term. Age, number of children, body mass index (BMI) and other childhood-related variables were used as adjustment variables, stratified by menopausal status. A total of 811 women (23%) presented mammographic density of over 50%, and 5% of densities exceeded 75%. Our results show a greater prevalence of high mammographic density in women with low prepubertal weight (OR: 1.18; 95% CI: 1.02-1.36), marked prepubertal height (OR: 1.25; 95% CI: 0.97-1.60) and advanced maternal age at their birth (>39 years: OR: 1.28; 95% CI: 1.03-1.60), and a lower prevalence of high mammographic density in women with higher prepubertal weight, low birth weight and earlier menarche. The influence of these early-life factors may be explained by greater exposure to hormones and growth factors during the development of the breast gland, when breast tissue would be particularly susceptible to proliferative and carcinogenic stimuli.

  16. Composition and structure of the Chironomidae (Insecta: Diptera) community associated with bryophytes in a first-order stream in the Atlantic forest, Brazil.

    PubMed

    Rosa, B F J V; Dias-Silva, M V D; Alves, R G

    2013-02-01

    This study describes the structure of the Chironomidae community associated with bryophytes in a first-order stream located in a biological reserve of the Atlantic Forest, during two seasons. Samples of bryophytes adhering to rocks along a 100-m stretch of the stream were removed with a metal blade, and 200-mL pots were filled with the samples. The numerical density (individuals per gram of dry weight), Shannon's diversity index, Pielou's evenness index, the dominance index (DI), and estimated richness were calculated for each collection period (dry and rainy). Linear regression analysis was employed to test for a correlation between rainfall and the density and richness of individuals. The high numerical density and richness of Chironomidae taxa observed are probably related to the peculiar conditions of the bryophyte habitat. The retention of larvae during periods of higher rainfall contributed to the high density and richness of Chironomidae larvae. The rarefaction analysis showed higher richness in the rainy season, related to the greater retention of food particles. The data from this study show that bryophytes provide stable habitats for the colonization by, and refuge of, Chironomidae larvae, mainly under conditions of faster water flow and higher precipitation.

  17. Kinetic-scale fluctuations resolved with the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission.

    NASA Astrophysics Data System (ADS)

    Gershman, D. J.; Figueroa-Vinas, A.; Dorelli, J.; Goldstein, M. L.; Shuster, J. R.; Avanov, L. A.; Boardsen, S. A.; Stawarz, J. E.; Schwartz, S. J.; Schiff, C.; Lavraud, B.; Saito, Y.; Paterson, W. R.; Giles, B. L.; Pollock, C. J.; Strangeway, R. J.; Russell, C. T.; Torbert, R. B.; Moore, T. E.; Burch, J. L.

    2017-12-01

    Measurements from the Fast Plasma Investigation (FPI) on NASA's Magnetospheric Multiscale (MMS) mission have enabled unprecedented analyses of kinetic-scale plasma physics. FPI regularly provides estimates of current density and pressure gradients of sufficient accuracy to evaluate the relative contribution of terms in plasma equations of motion. In addition, high-resolution three-dimensional velocity distribution functions of both ions and electrons provide new insights into kinetic-scale processes. As an example, for a monochromatic kinetic Alfven wave (KAW) we find non-zero, but out-of-phase parallel current density and electric field fluctuations, providing direct confirmation of the conservative energy exchange between the wave field and particles. In addition, we use fluctuations in current density and magnetic field to calculate the perpendicular and parallel wavelengths of the KAW. Furthermore, examination of the electron velocity distribution inside the KAW reveals a population of electrons non-linearly trapped in the kinetic-scale magnetic mirror formed between successive wave peaks. These electrons not only contribute to the wave's parallel electric field but also account for over half of the density fluctuations within the wave, supplying an unexpected mechanism for maintaining quasi-neutrality in a KAW. Finally, we demonstrate that the employed wave vector determination technique is also applicable to broadband fluctuations found in Earth's turbulent magnetosheath.

  18. Biotic and abiotic factors predicting the global distribution and population density of an invasive large mammal

    PubMed Central

    Lewis, Jesse S.; Farnsworth, Matthew L.; Burdett, Chris L.; Theobald, David M.; Gray, Miranda; Miller, Ryan S.

    2017-01-01

    Biotic and abiotic factors are increasingly acknowledged to synergistically shape broad-scale species distributions. However, the relative importance of biotic and abiotic factors in predicting species distributions is unclear. In particular, biotic factors, such as predation and vegetation, including those resulting from anthropogenic land-use change, are underrepresented in species distribution modeling, but could improve model predictions. Using generalized linear models and model selection techniques, we used 129 estimates of population density of wild pigs (Sus scrofa) from 5 continents to evaluate the relative importance, magnitude, and direction of biotic and abiotic factors in predicting population density of an invasive large mammal with a global distribution. Incorporating diverse biotic factors, including agriculture, vegetation cover, and large carnivore richness, into species distribution modeling substantially improved model fit and predictions. Abiotic factors, including precipitation and potential evapotranspiration, were also important predictors. The predictive map of population density revealed wide-ranging potential for an invasive large mammal to expand its distribution globally. This information can be used to proactively create conservation/management plans to control future invasions. Our study demonstrates that the ongoing paradigm shift, which recognizes that both biotic and abiotic factors shape species distributions across broad scales, can be advanced by incorporating diverse biotic factors. PMID:28276519
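
The model-selection logic (comparing candidate predictor sets by an information criterion) can be sketched with a Gaussian log-linear model on mock data; the predictors and coefficients below are invented for illustration and are not estimates from the study:

```python
import numpy as np

def ols_aic(X, y):
    """AIC of an OLS fit with intercept, under Gaussian errors."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    n, k = len(y), X1.shape[1] + 1               # +1 for the error variance
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

rng = np.random.default_rng(4)
n = 129                                          # same order as the study's sample
precip = rng.normal(size=n)                      # mock abiotic predictor
agriculture = rng.normal(size=n)                 # mock biotic predictor
log_density = 0.8 * precip + 0.6 * agriculture + rng.normal(scale=0.5, size=n)

aic_abiotic = ols_aic(precip[:, None], log_density)
aic_full = ols_aic(np.column_stack([precip, agriculture]), log_density)
print(aic_abiotic, aic_full)   # lower AIC favours adding the biotic factor
```

When the biotic predictor genuinely carries signal, the combined model attains the lower AIC, mirroring the paper's finding that adding biotic factors substantially improves model fit.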

  19. A partially penalty immersed Crouzeix-Raviart finite element method for interface problems.

    PubMed

    An, Na; Yu, Xijun; Chen, Huanzhen; Huang, Chaobao; Liu, Zhongyan

    2017-01-01

    The elliptic equations with discontinuous coefficients are often used to describe problems involving multiple materials or fluids with different densities, conductivities or diffusivities. In this paper we develop a partially penalty immersed finite element (PIFE) method on triangular grids for anisotropic flow models, in which the diffusion coefficient is a piecewise positive-definite matrix. The standard linear Crouzeix-Raviart type finite element space is used on non-interface elements, and the piecewise linear Crouzeix-Raviart type immersed finite element (IFE) space is constructed on interface elements. The piecewise linear functions satisfying the interface jump conditions are uniquely determined by the integral averages on the edges as degrees of freedom. The PIFE scheme is given based on the symmetric, nonsymmetric or incomplete interior penalty discontinuous Galerkin formulation. The solvability of the method is proved and optimal error estimates in the energy norm are obtained. Numerical experiments are presented to confirm our theoretical analysis and show that the newly developed PIFE method has optimal-order convergence in the [Formula: see text] norm as well. In addition, numerical examples also indicate that this method is valid for both isotropic and anisotropic elliptic interface problems.

  20. Are fractal dimensions of the spatial distribution of mineral deposits meaningful?

    USGS Publications Warehouse

    Raines, G.L.

    2008-01-01

    It has been proposed that the spatial distribution of mineral deposits is bifractal. An implication of this property is that the number of deposits in a permissive area is a function of the shape of the area. This is because the fractal density functions of deposits are dependent on the distance from known deposits. A long thin permissive area with most of the deposits in one end, such as the Alaskan porphyry permissive area, has a major portion of the area far from known deposits and consequently a low density of deposits associated with most of the permissive area. On the other hand, a more equi-dimensioned permissive area, such as the Arizona porphyry permissive area, has a more uniform density of deposits. Another implication of the fractal distribution is that the Poisson assumption typically used for estimating deposit numbers is invalid. Based on datasets of mineral deposits classified by type as inputs, the distributions of many different deposit types are found to have characteristically two fractal dimensions over separate non-overlapping spatial scales in the range of 5-1000 km. In particular, one typically observes a local dimension at spatial scales less than 30-60 km, and a regional dimension at larger spatial scales. The deposit type, geologic setting, and sample size influence the fractal dimensions. The consequence of the geologic setting can be diminished by using deposits classified by type. The crossover point between the two fractal domains is proportional to the median size of the deposit type. A plot of the crossover points for porphyry copper deposits from different geologic domains against median deposit sizes defines linear relationships and identifies regions that are significantly underexplored. Plots of the fractal dimension can also be used to define density functions from which the number of undiscovered deposits can be estimated. 
This density function depends only on the distribution of deposits and is independent of the definition of the permissive area. Density functions for porphyry copper deposits appear to be significantly different for regions in the Andes, Mexico, United States, and western Canada. Consequently, depending on which regional density function is used, quite different estimates of the numbers of undiscovered deposits can be obtained. These fractal properties suggest that geologic studies based on mapping at scales of 1:24,000 to 1:100,000 may not recognize processes that are important in the formation of mineral deposits at scales larger than the crossover points at 30-60 km. © 2008 International Association for Mathematical Geology.
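
A fractal dimension of an epicentre or deposit distribution can be estimated from the slope of a box-counting curve. The sketch below uses synthetic points along a lineament; a bifractal analysis of the kind described above would fit two separate slopes, one below and one above the crossover scale:

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the fractal dimension of a 2-D point set from the slope of
    log(occupied box count) versus log(1 / box size)."""
    counts = []
    for eps in box_sizes:
        cells = np.unique(np.floor(points / eps), axis=0)  # occupied grid cells
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                          np.log(counts), 1)
    return slope

rng = np.random.default_rng(5)
t = rng.uniform(0, 1000, size=5000)
line_points = np.column_stack([t, 0.5 * t])        # deposits along a lineament
sizes = np.array([5.0, 10.0, 20.0, 50.0, 100.0])   # km, within the 5-1000 km range
print(box_counting_dimension(line_points, sizes))  # ~1 for a linear arrangement
```

A compact cluster would instead give a dimension near 2, and a change of slope between small and large box sizes is the signature of the two-regime (bifractal) behaviour discussed above.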
