Sample records for conditional density estimation

  1. Conditional Density Estimation with HMM Based Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Hu, Fasheng; Liu, Zhenqiu; Jia, Chunxin; Chen, Dechang

    Conditional density estimation is very important in financial engineering, risk management, and other engineering computing problems. However, most regression models make the implicit assumption that the probability density is Gaussian, which is not necessarily true in many real-life applications. In this paper, we give a framework to estimate or predict the conditional density mixture dynamically. By combining the input-output HMM with SVM regression and building an SVM model in each state of the HMM, we can estimate a conditional density mixture instead of a single Gaussian. With an SVM in each node, the model can be applied not only to regression but to classification as well. We applied this model to denoise ECG data. The proposed method has the potential to be applied to other time series, such as stock market return prediction.

  2. A hybrid pareto mixture for conditional asymmetric fat-tailed distributions.

    PubMed

    Carreau, Julie; Bengio, Yoshua

    2009-07-01

    In many cases, we observe some variables X that contain predictive information over a scalar variable of interest Y , with (X,Y) pairs observed in a training set. We can take advantage of this information to estimate the conditional density p(Y|X = x). In this paper, we propose a conditional mixture model with hybrid Pareto components to estimate p(Y|X = x). The hybrid Pareto is a Gaussian whose upper tail has been replaced by a generalized Pareto tail. A third parameter, in addition to the location and spread parameters of the Gaussian, controls the heaviness of the upper tail. Using the hybrid Pareto in a mixture model results in a nonparametric estimator that can adapt to multimodality, asymmetry, and heavy tails. A conditional density estimator is built by modeling the parameters of the mixture estimator as functions of X. We use a neural network to implement these functions. Such conditional density estimators have important applications in many domains such as finance and insurance. We show experimentally that this novel approach better models the conditional density in terms of likelihood, compared to competing algorithms: conditional mixture models with other types of components and a classical kernel-based nonparametric model.
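
    The hybrid Pareto construction described here can be sketched directly: a Gaussian body below a junction point, a generalized Pareto (GPD) tail above it. The sketch below is a minimal, unnormalized illustration with assumed parameter values; the paper's actual construction also matches derivatives at the junction and renormalizes.

```python
import math

# Unnormalized sketch of a hybrid Pareto density: Gaussian body below a
# junction point u, generalized Pareto (GPD) tail above it, matched in
# value at u. Parameter values are illustrative.
def gaussian_pdf(y, mu=0.0, sigma=1.0):
    z = (y - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def gpd_pdf(y, u, xi=0.3, beta=1.0):
    # Generalized Pareto density for exceedances above u (xi > 0: heavy tail)
    return (1.0 + xi * (y - u) / beta) ** (-1.0 / xi - 1.0) / beta

def hybrid_pdf(y, u=1.0, xi=0.3, beta=1.0):
    if y <= u:
        return gaussian_pdf(y)
    # Scale the GPD piece so the two pieces agree in value at u
    scale = gaussian_pdf(u) / gpd_pdf(u, u, xi, beta)
    return scale * gpd_pdf(y, u, xi, beta)

# The polynomial GPD tail dominates the Gaussian tail far from the body
print(hybrid_pdf(6.0) > gaussian_pdf(6.0))  # True
```

    Because the GPD piece decays polynomially rather than exponentially, a mixture of such components can capture heavy upper tails that Gaussian components cannot.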

  3. A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Izbicki, R.; Lee, A. B.

    2017-07-01

    Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10⁶ galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
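
    The importance weights mentioned here are density ratios w(x) = f_unlabelled(x)/f_labelled(x). A minimal one-dimensional sketch with a histogram-based ratio (the distributions, bin layout, and evaluation point are illustrative assumptions, not the paper's tuned estimators):

```python
import numpy as np

rng = np.random.default_rng(1)

# Labelled (spectroscopic) galaxies are biased toward one region of the
# covariate x; unlabelled (photometric-only) galaxies cover a wider range.
# Distributions and sample sizes are illustrative.
x_lab = rng.normal(0.0, 1.0, 5000)
x_unl = rng.normal(0.5, 1.2, 5000)

# Histogram estimate of the importance weight w(x) = f_unl(x) / f_lab(x)
edges = np.linspace(-4.0, 4.0, 33)
f_lab, _ = np.histogram(x_lab, edges, density=True)
f_unl, _ = np.histogram(x_unl, edges, density=True)
w = np.where(f_lab > 0, f_unl / np.maximum(f_lab, 1e-12), 0.0)

# Near x = 2 the unlabelled density exceeds the labelled one, so w > 1:
# such regions are up-weighted when training on the labelled sample.
i = np.searchsorted(edges, 2.0) - 1
print(round(float(w[i]), 2))
```

    In practice such weights multiply the loss of each labelled training galaxy, so covariate regions under-represented in the spectroscopic sample count more.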

  4. Spatial pattern corrections and sample sizes for forest density estimates of historical tree surveys

    Treesearch

    Brice B. Hanberry; Shawn Fraver; Hong S. He; Jian Yang; Dan C. Dey; Brian J. Palik

    2011-01-01

    The U.S. General Land Office land surveys document trees present during European settlement. However, use of these surveys for calculating historical forest density and other derived metrics is limited by uncertainty about the performance of plotless density estimators under a range of conditions. Therefore, we tested two plotless density estimators, developed by...

  5. Nonparametric estimation of plant density by the distance method

    USGS Publications Warehouse

    Patil, S.A.; Burnham, K.P.; Kovner, J.L.

    1979-01-01

    A relation between the plant density and the probability density function of the nearest neighbor distance (squared) from a random point is established under fairly broad conditions. Based upon this relationship, a nonparametric estimator for the plant density is developed and presented in terms of order statistics. Consistency and asymptotic normality of the estimator are discussed. An interval estimator for the density is obtained. The modifications of this estimator and its variance are given when the distribution is truncated. Simulation results are presented for regular, random and aggregated populations to illustrate the nonparametric estimator and its variance. A numerical example from field data is given. Merits and deficiencies of the estimator are discussed with regard to its robustness and variance.
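
    Under complete spatial randomness, π times the squared distance from a random point to the nearest plant is exponentially distributed with rate equal to the plant density, which motivates the moment-type estimator λ̂ = (n − 1)/(π Σ rᵢ²). A simulation sketch of this idea (a simplified stand-in for the authors' order-statistics estimator; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a completely random (Poisson) plant pattern with known density
lam = 5.0                                  # plants per unit area
side = 20.0
plants = rng.uniform(0.0, side, size=(rng.poisson(lam * side * side), 2))

# Squared distances from n random sample points to their nearest plant
# (sample points kept away from the plot edges to avoid edge effects)
n = 200
pts = rng.uniform(2.0, side - 2.0, size=(n, 2))
d2 = np.min(((pts[:, None, :] - plants[None, :, :]) ** 2).sum(axis=2), axis=1)

# Under randomness pi * R^2 ~ Exponential(lam), giving the unbiased
# moment-type estimator (n - 1) / (pi * sum of squared distances)
lam_hat = (n - 1) / (np.pi * d2.sum())
print(round(float(lam_hat), 2))
```

    For aggregated or regular (non-random) populations this simple estimator is biased, which is why the paper develops a nonparametric estimator based on order statistics.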

  6. Estimating historical snag density in dry forests east of the Cascade Range

    Treesearch

    Richy J. Harrod; William L. Gaines; William E. Hartl; Ann. Camp

    1998-01-01

    Estimating snag densities in pre-European settlement landscapes (i.e., historical conditions) provides land managers with baseline information for comparing current snag densities. We propose a method for determining historical snag densities in the dry forests east of the Cascade Range. Basal area increase was calculated from tree ring measurements of old ponderosa...

  7. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

    In the present work an effort has been made to estimate the plasma parameters (electron density, electron temperature, ground state atom density, ground state ion density, and metastable state density) simultaneously from the observed visible spectra of a Penning plasma discharge (PPD) source using least squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition. With this condition, the plasma temperature estimated from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs, and adding diffusion of neutrals and metastable-state species to the CR-model code analysis, improves the electron temperature estimation in the simultaneous measurement.

  8. Novel Application of Density Estimation Techniques in Muon Ionization Cooling Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohayai, Tanaz Angelina; Snopok, Pavel; Neuffer, David

    The international Muon Ionization Cooling Experiment (MICE) aims to demonstrate muon beam ionization cooling for the first time and constitutes a key part of the R&D towards a future neutrino factory or muon collider. Beam cooling reduces the size of the phase space volume occupied by the beam. Non-parametric density estimation techniques allow very precise calculation of the muon beam phase-space density and its increase as a result of cooling. These density estimation techniques are investigated in this paper and applied in order to estimate the reduction in muon beam size in MICE under various conditions.

  9. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES.

    PubMed

    Han, Qiyang; Wellner, Jon A

    2016-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave.

  10. APPROXIMATION AND ESTIMATION OF s-CONCAVE DENSITIES VIA RÉNYI DIVERGENCES

    PubMed Central

    Han, Qiyang; Wellner, Jon A.

    2017-01-01

    In this paper, we study the approximation and estimation of s-concave densities via Rényi divergence. We first show that the approximation of a probability measure Q by an s-concave density exists and is unique via the procedure of minimizing a divergence functional proposed by [Ann. Statist. 38 (2010) 2998–3027] if and only if Q admits full-dimensional support and a first moment. We also show continuity of the divergence functional in Q: if Qn → Q in the Wasserstein metric, then the projected densities converge in weighted L1 metrics and uniformly on closed subsets of the continuity set of the limit. Moreover, directional derivatives of the projected densities also enjoy local uniform convergence. This contains both on-the-model and off-the-model situations, and entails strong consistency of the divergence estimator of an s-concave density under mild conditions. One interesting and important feature for the Rényi divergence estimator of an s-concave density is that the estimator is intrinsically related with the estimation of log-concave densities via maximum likelihood methods. In fact, we show that for d = 1 at least, the Rényi divergence estimators for s-concave densities converge to the maximum likelihood estimator of a log-concave density as s ↗ 0. The Rényi divergence estimator shares similar characterizations as the MLE for log-concave distributions, which allows us to develop pointwise asymptotic distribution theory assuming that the underlying density is s-concave. PMID:28966410

  11. Effects of an evidence-based computerized virtual clinician on low-density lipoprotein and non-high-density lipoprotein cholesterol in adults without cardiovascular disease: The Interactive Cholesterol Advisory Tool.

    PubMed

    Block, Robert C; Abdolahi, Amir; Niemiec, Christopher P; Rigby, C Scott; Williams, Geoffrey C

    2016-12-01

    There is a lack of research on the use of electronic tools that guide patients toward reducing their cardiovascular disease risk. We conducted a 9-month clinical trial in which participants who were at low (n = 100) and moderate (n = 23) cardiovascular disease risk (based on the National Cholesterol Education Program III's 10-year risk estimator) were randomized to usual care or to usual care plus use of an Interactive Cholesterol Advisory Tool during the first 8 weeks of the study. In the moderate-risk category, an interaction between treatment condition and Framingham risk estimate on low-density lipoprotein and non-high-density lipoprotein cholesterol was observed, such that participants in the virtual clinician treatment condition had a larger reduction in low-density lipoprotein and non-high-density lipoprotein cholesterol as their Framingham risk estimate increased. Perceptions of the Interactive Cholesterol Advisory Tool were positive. Evidence-based information about cardiovascular disease risk and its management was accessible to participants without major technical challenges. © The Author(s) 2015.

  12. A novel technique for real-time estimation of edge pedestal density gradients via reflectometer time delay data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, L., E-mail: zeng@fusion.gat.com; Doyle, E. J.; Rhodes, T. L.

    2016-11-15

    A new model-based technique for fast estimation of the pedestal electron density gradient has been developed. The technique uses ordinary mode polarization profile reflectometer time delay data and does not require direct profile inversion. Because of its simple data processing, the technique can be readily implemented via a Field-Programmable Gate Array, so as to provide a real-time density gradient estimate, suitable for use in plasma control systems such as envisioned for ITER, and possibly for DIII-D and Experimental Advanced Superconducting Tokamak. The method is based on a simple edge plasma model with a linear pedestal density gradient and low scrape-off-layer density. By measuring reflectometer time delays for three adjacent frequencies, the pedestal density gradient can be estimated analytically via the new approach. Using existing DIII-D profile reflectometer data, the estimated density gradients obtained from the new technique are found to be in good agreement with the actual density gradients for a number of dynamic DIII-D plasma conditions.

  13. Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery

    PubMed Central

    Liu, Shouyang; Baret, Fred; Andrieu, Bruno; Burger, Philippe; Hemmerlé, Matthieu

    2017-01-01

    Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used. However, it is tedious and time consuming. The main objective of this work is to develop a machine vision based method to automate the density survey of wheat at early stages. RGB images taken with a high resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasted conditions with sowing densities ranging from 100 to 600 seeds⋅m⁻². Results demonstrate that the density is accurately estimated with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages. PMID:28559901

  14. A state-space modeling approach to estimating canopy conductance and associated uncertainties from sap flux density data

    Treesearch

    David M. Bell; Eric J. Ward; A. Christopher Oishi; Ram Oren; Paul G. Flikkema; James S. Clark; David Whitehead

    2015-01-01

    Uncertainties in ecophysiological responses to environment, such as the impact of atmospheric and soil moisture conditions on plant water regulation, limit our ability to estimate key inputs for ecosystem models. Advanced statistical frameworks provide coherent methodologies for relating observed data, such as stem sap flux density, to unobserved processes, such as...

  15. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
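
    The maximum entropy side of this unification can be illustrated in a simple discrete setting: among all distributions on a grid with a prescribed mean, entropy is maximized by an exponential-family form pᵢ ∝ exp(λxᵢ), with the multiplier λ chosen to satisfy the constraint. A minimal sketch (illustrative, not the paper's Bayesian field theory computation):

```python
import numpy as np

# Discrete maximum entropy estimation: among all pmfs on a grid with a
# prescribed mean, entropy is maximized by p_i proportional to exp(lam * x_i).
# The grid and target mean are illustrative.
x = np.linspace(0.0, 1.0, 101)
target_mean = 0.7

def mean_for(lam):
    w = np.exp(lam * x)
    p = w / w.sum()
    return float(p @ x)

# The constrained mean is increasing in lam, so solve for it by bisection
lo, hi = -50.0, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid

p = np.exp(lo * x)
p /= p.sum()
print(round(float(p @ x), 3))  # -> 0.7, the moment constraint is met
```

    In the paper's framework, estimates of this exponential-family form emerge as the infinite-smoothness limit of Bayesian field theory posteriors.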

  16. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  17. Forecasting fish biomasses, densities, productions, and bioaccumulation potentials of mid-atlantic wadeable streams.

    PubMed

    Barber, M Craig; Rashleigh, Brenda; Cyterski, Michael J

    2016-01-01

    Regional fishery conditions of Mid-Atlantic wadeable streams in the eastern United States are estimated using the Bioaccumulation and Aquatic System Simulator (BASS) bioaccumulation and fish community model and data collected by the US Environmental Protection Agency's Environmental Monitoring and Assessment Program (EMAP). Average annual biomasses and population densities and annual productions are estimated for 352 randomly selected streams. Realized bioaccumulation factors (BAF) and biomagnification factors (BMF), which are dependent on these forecasted biomasses, population densities, and productions, are also estimated by assuming constant water exposures to methylmercury and tetra-, penta-, hexa-, and hepta-chlorinated biphenyls. Using observed biomasses, observed densities, and estimated annual productions of total fish from 3 regions assumed to support healthy fisheries as benchmarks (eastern Tennessee and Catskill Mountain trout streams and Ozark Mountains smallmouth bass streams), 58% of the region's wadeable streams are estimated to be in marginal or poor condition (i.e., not healthy). Using simulated BAFs and EMAP Hg fish concentrations, we also estimate that approximately 24% of the game fish and subsistence fishing species that are found in streams having detectable Hg concentrations would exceed an acceptable human consumption criterion of 0.185 μg/g wet wt. Importantly, such streams have been estimated to represent 78.2% to 84.4% of the Mid-Atlantic's wadeable stream lengths. Our results demonstrate how a dynamic simulation model can support regional assessment and trends analysis for fisheries. © 2015 SETAC.

  18. Mars surface radiation exposure for solar maximum conditions and 1989 solar proton events

    NASA Technical Reports Server (NTRS)

    Simonsen, Lisa C.; Nealy, John E.

    1992-01-01

    The Langley heavy-ion/nucleon transport code, HZETRN, and the high-energy nucleon transport code, BRYNTRN, are used to predict the propagation of galactic cosmic rays (GCR's) and solar flare protons through the carbon dioxide atmosphere of Mars. Particle fluences and the resulting doses are estimated on the surface of Mars for GCR's during solar maximum conditions and the Aug., Sep., and Oct. 1989 solar proton events. These results extend previously calculated surface estimates for GCR's at solar minimum conditions and the Feb. 1956, Nov. 1960, and Aug. 1972 solar proton events. Surface doses are estimated with both a low-density and a high-density carbon dioxide model of the atmosphere for altitudes of 0, 4, 8, and 12 km above the surface. A solar modulation function is incorporated to estimate the GCR dose variation between solar minimum and maximum conditions over the 11-year solar cycle. By using current Mars mission scenarios, doses to the skin, eye, and blood-forming organs are predicted for short- and long-duration stay times on the Martian surface throughout the solar cycle.

  19. Soil Bulk Density by Soil Type, Land Use and Data Source: Putting the Error in SOC Estimates

    NASA Astrophysics Data System (ADS)

    Wills, S. A.; Rossi, A.; Loecke, T.; Ramcharan, A. M.; Roecker, S.; Mishra, U.; Waltman, S.; Nave, L. E.; Williams, C. O.; Beaudette, D.; Libohova, Z.; Vasilas, L.

    2017-12-01

    An important part of SOC stock and pool assessment is the assessment, estimation, and application of bulk density estimates. The concept of bulk density is relatively simple (the mass of soil in a given volume), but in practice bulk density can be difficult to measure in soils due to logistical and methodological constraints. While many estimates of SOC pools use legacy data, few concerted efforts have been made to assess the process used to convert laboratory carbon concentration measurements and bulk density collection into volumetrically based SOC estimates. The methodologies used are particularly sensitive in wetlands and organic soils with high amounts of carbon and very low bulk densities. We will present an analysis across four databases: NCSS, the National Cooperative Soil Survey Characterization dataset; RaCA, the Rapid Carbon Assessment sample dataset; NWCA, the National Wetland Condition Assessment; and ISCN, the International Soil Carbon Network. The relationship between bulk density and soil organic carbon will be evaluated by dataset and land use/land cover information. Prediction methods (both regression and machine learning) will be compared and contrasted across datasets and available input information. The assessment and application of bulk density, including modeling, aggregation, and error propagation, will be evaluated. Finally, recommendations will be made about both the use of new data in soil survey products (such as SSURGO) and the use of that information as legacy data in SOC pool estimates.
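
    The conversion at issue, from laboratory carbon concentration and bulk density to a volumetric SOC stock, is a simple product of concentration, bulk density, and layer thickness; the sensitivity comes from errors in the bulk density factor. A sketch with illustrative values:

```python
# Volumetric SOC stock for one soil layer: organic carbon concentration
# times bulk density times layer thickness. Values are illustrative; the
# factor 10 converts (%, g/cm^3, cm) to kg C per m^2.
def soc_stock_kg_m2(oc_pct, bulk_density_g_cm3, thickness_cm):
    return (oc_pct / 100.0) * bulk_density_g_cm3 * thickness_cm * 10.0

# A 0-30 cm layer with 2% OC at 1.3 g/cm^3 holds 7.8 kg C per m^2;
# a 20% error in bulk density propagates directly into the stock.
print(round(soc_stock_kg_m2(2.0, 1.3, 30.0), 2))        # -> 7.8
print(round(soc_stock_kg_m2(2.0, 1.3 * 1.2, 30.0), 2))  # -> 9.36
```

    Because the stock is linear in bulk density, any relative error in an estimated or modeled bulk density carries straight through to the SOC estimate, which is why low-density organic soils are the worst case.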

  20. Evolution of Dislocation Density During Tensile Deformation of BH220 Steel at Different Pre-strain Conditions

    NASA Astrophysics Data System (ADS)

    Seth, Prem Prakash; Das, A.; Bar, H. N.; Sivaprasad, S.; Basu, A.; Dutta, K.

    2015-07-01

    Tensile behavior of BH220 steel with different pre-strain conditions (2 and 8%) followed by bake hardening was studied at different strain rates (0.001 and 0.1/s). Dislocation densities of the deformed specimens were successfully estimated from x-ray diffraction profile analysis using the modified Williamson-Hall equation. The results indicate that, except for the 2% pre-strain condition, the dislocation density increases with increasing pre-strain level as well as with strain rate. The decrease in dislocation density in the 2% pre-strain condition without any drop in strength is attributed to the characteristic dislocation features formed during pre-straining.
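
    As a simplified stand-in for the modified Williamson-Hall analysis, the classical W-H decomposition separates size and strain broadening as β cos θ = 0.9λ/D + 4ε sin θ, after which a dislocation density can be formed from the fitted microstrain and crystallite size. The peak positions, constants, and combination formula below are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Classical Williamson-Hall fit (a simplified stand-in for the modified
# W-H equation): beta * cos(theta) = 0.9 * lam / D + 4 * eps * sin(theta).
lam = 1.5406e-10                   # Cu K-alpha wavelength, m
D_true, eps_true = 80e-9, 1.2e-3   # crystallite size, microstrain (assumed)

theta = np.deg2rad(np.array([19.2, 22.3, 32.6, 39.1, 41.4]))
beta = (0.9 * lam / D_true + 4.0 * eps_true * np.sin(theta)) / np.cos(theta)

# Linear fit of beta*cos(theta) against 4*sin(theta): slope = microstrain,
# intercept = size-broadening term
eps, intercept = np.polyfit(4.0 * np.sin(theta), beta * np.cos(theta), 1)
D = 0.9 * lam / intercept

# One common combination of size and strain into a dislocation density:
# rho = 2*sqrt(3) * eps / (D * b), with b the Burgers vector
b = 2.86e-10
rho = 2.0 * np.sqrt(3.0) * eps / (D * b)
print(f"{rho:.1e}")  # ~1.8e14 m^-2, the order of magnitude in the abstract
```

    The modified W-H equation additionally weights each reflection by a dislocation contrast factor, but the slope-and-intercept structure of the fit is the same.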

  1. Conditional random slope: A new approach for estimating individual child growth velocity in epidemiological research.

    PubMed

    Leung, Michael; Bassani, Diego G; Racine-Poon, Amy; Goldenberg, Anna; Ali, Syed Asad; Kang, Gagandeep; Premkumar, Prasanna S; Roth, Daniel E

    2017-09-10

    Conditioning child growth measures on baseline accounts for regression to the mean (RTM). Here, we present the "conditional random slope" (CRS) model, based on a linear mixed-effects model that incorporates a baseline-time interaction term that can accommodate multiple data points for a child while also directly accounting for RTM. In two birth cohorts, we applied five approaches to estimate child growth velocities from 0 to 12 months to assess the effect of increasing data density (number of measures per child) on the magnitude of RTM of unconditional estimates, and the correlation and concordance between the CRS and four alternative metrics. Further, we demonstrated the differential effect of the choice of velocity metric on the magnitude of the association between infant growth and stunting at 2 years. RTM was minimally attenuated by increasing data density for unconditional growth modeling approaches. CRS and classical conditional models gave nearly identical estimates with two measures per child. Compared to the CRS estimates, unconditional metrics had moderate correlation (r = 0.65-0.91), but poor agreement in the classification of infants with relatively slow growth (kappa = 0.38-0.78). Estimates of the velocity-stunting association were the same for CRS and classical conditional models but differed substantially between conditional versus unconditional metrics. The CRS can leverage the flexibility of linear mixed models while addressing RTM in longitudinal analyses. © 2017 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc.
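
    The classical conditional model to which the CRS is compared can be sketched directly: regress the follow-up measure on baseline and take residuals as conditional velocities, which are uncorrelated with baseline by construction and therefore free of the RTM artifact that contaminates raw differences. The simulated data and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated lengths at birth (y0) and 12 months (y1): each child has a
# latent growth potential z, and each measurement carries independent
# error. All parameters are illustrative.
n = 500
z = rng.normal(0.0, 1.0, n)
y0 = 50.0 + 2.0 * z + rng.normal(0.0, 1.0, n)
y1 = 75.0 + 5.0 * z + rng.normal(0.0, 1.0, n)

# Raw (unconditional) velocity retains the baseline measurement-error
# term with a negative sign: the regression-to-the-mean artifact
v_raw = y1 - y0

# Classical conditional velocity: residual of y1 regressed on y0
X = np.column_stack([np.ones(n), y0])
beta, *_ = np.linalg.lstsq(X, y1, rcond=None)
v_cond = y1 - X @ beta

# Conditional velocity is uncorrelated with baseline by construction,
# while the raw difference is not
print(abs(np.corrcoef(y0, v_cond)[0, 1]) < 1e-8,
      np.corrcoef(y0, v_raw)[0, 1] > 0.4)
```

    The CRS generalizes this two-measure regression to a mixed model that can absorb any number of measurements per child.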

  2. Influence of sampling window size and orientation on parafoveal cone packing density

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Ducoli, Pietro; Lombardo, Giuseppe

    2013-01-01

    We assessed the agreement between sampling windows of different size and orientation on packing density estimates in images of the parafoveal cone mosaic acquired using a flood-illumination adaptive optics retinal camera. Horizontal and vertical oriented sampling windows of different size (320x160 µm, 160x80 µm and 80x40 µm) were selected in two retinal locations along the horizontal meridian in one eye of ten subjects. At each location, cone density tended to decline with decreasing sampling area. Although the differences in cone density estimates were not statistically significant, Bland-Altman plots showed that the agreement between cone density estimated within the different sampling window conditions was moderate. The percentage of the preferred packing arrangements of cones by Voronoi tiles was slightly affected by window size and orientation. The results illustrated the high importance of specifying the size and orientation of the sampling window used to derive cone metric estimates to facilitate comparison of different studies. PMID:24009995

  3. Monte Carlo Approach for Estimating Density and Atomic Number From Dual-Energy Computed Tomography Images of Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Victor, Rodolfo A.; Prodanović, Maša; Torres-Verdín, Carlos

    2017-12-01

    We develop a new Monte Carlo-based inversion method for estimating electron density and effective atomic number from 3-D dual-energy computed tomography (CT) core scans. The method accounts for uncertainties in X-ray attenuation coefficients resulting from the polychromatic nature of X-ray beam sources of medical and industrial scanners, in addition to delivering uncertainty estimates of inversion products. Estimation of electron density and effective atomic number from CT core scans enables direct deterministic or statistical correlations with salient rock properties for improved petrophysical evaluation; this condition is specifically important in media such as vuggy carbonates where CT resolution better captures core heterogeneity that dominates fluid flow properties. Verification tests of the inversion method performed on a set of highly heterogeneous carbonate cores yield very good agreement with in situ borehole measurements of density and photoelectric factor.

  4. Development of a foraging model framework to reliably estimate daily food consumption by young fishes

    USGS Publications Warehouse

    Deslauriers, David; Rosburg, Alex J.; Chipps, Steven R.

    2017-01-01

    We developed a foraging model for young fishes that incorporates handling and digestion rate to estimate daily food consumption. Feeding trials were used to quantify functional feeding response, satiation, and gut evacuation rate. Once parameterized, the foraging model was then applied to evaluate effects of prey type, prey density, water temperature, and fish size on daily feeding rate by age-0 (19–70 mm) pallid sturgeon (Scaphirhynchus albus). Prey consumption was positively related to prey density (for fish >30 mm) and water temperature, but negatively related to prey size and the presence of sand substrate. Model evaluation results revealed good agreement between observed estimates of daily consumption and those predicted by the model (r² = 0.95). Model simulations showed that fish feeding on Chironomidae or Ephemeroptera larvae were able to gain mass, whereas fish feeding solely on zooplankton lost mass under most conditions. By accounting for satiation and digestive processes in addition to handling time and prey density, the model provides realistic estimates of daily food consumption that can prove useful for evaluating rearing conditions for age-0 fishes.
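
    The functional feeding response parameterized from such feeding trials is commonly of Holling type II form, in which per-capita consumption saturates with prey density because handling time limits intake. The form and parameter values below are illustrative; the paper's model additionally accounts for satiation and gut evacuation:

```python
# Holling type II functional response: per-capita consumption saturates
# with prey density N because handling time limits intake. The attack
# rate and handling time values are illustrative, not from the paper.
def feeding_rate(N, attack=0.8, handling=0.1):
    """Prey consumed per fish per unit time at prey density N."""
    return attack * N / (1.0 + attack * handling * N)

# Consumption rises with prey density but approaches 1/handling = 10
rates = [feeding_rate(N) for N in (1, 10, 100, 1000)]
print([round(r, 2) for r in rates])  # [0.74, 4.44, 8.89, 9.88]
```

    Capping intake this way is what lets the model predict mass loss on low-energy prey such as zooplankton even at high prey densities.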

  5. Optimal estimation for the satellite attitude using star tracker measurements

    NASA Technical Reports Server (NTRS)

    Lo, J. T.-H.

    1986-01-01

    An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.

  6. Estimation of dislocations density and distribution of dislocations during ECAP-Conform process

    NASA Astrophysics Data System (ADS)

    Derakhshan, Jaber Fakhimi; Parsa, Mohammad Habibi; Ayati, Vahid; Jafarian, Hamidreza

    2018-01-01

    The dislocation density of a coarse-grained aluminum AA1100 alloy (140 µm grain size) severely deformed by Equal Channel Angular Pressing-Conform (ECAP-Conform) was studied at various stages of the process by the electron backscatter diffraction (EBSD) method. The densities of geometrically necessary dislocations (GNDs) and statistically stored dislocations (SSDs) were estimated, the total dislocation densities were calculated, and the dislocation distributions are presented as contour maps. The estimated average dislocation density increases from about 2 × 10¹² m⁻² in the annealed condition to 4 × 10¹³ m⁻² at the middle of the groove (135° from the entrance), reaching 6.4 × 10¹³ m⁻² at the end of the groove just before the ECAP region. For the sample severely deformed by one pass, the calculated average dislocation density reached 6.2 × 10¹⁴ m⁻². At the micrometer scale, the behavior of metals, especially their mechanical properties, depends largely on dislocation density and distribution, so yield stresses under the different conditions were estimated from the calculated dislocation densities; these estimates agreed well with experimental results. Although the grain size did not change appreciably, the yield stress increased markedly owing to the development of a cell structure. The considerable increase in dislocation density during the process supports the formation of subgrains and cell structures, which can explain the rise in yield stress.
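
The dislocation-density-to-yield-stress step can be sketched with the Taylor hardening relation σ_y = σ₀ + αMGb√ρ. The constants below are typical literature values for aluminum, assumed for illustration rather than taken from the paper:

```python
import math

# Taylor-hardening sketch relating total dislocation density to yield stress:
# sigma_y = sigma_0 + alpha * M * G * b * sqrt(rho). The constants are
# typical literature values for aluminum, not those used in the paper.
ALPHA = 0.3          # dislocation interaction constant (-), assumed
M = 3.06             # Taylor factor for fcc polycrystals (-), assumed
G = 26e9             # shear modulus of Al (Pa), typical value
B = 0.286e-9         # Burgers vector of Al (m), typical value
SIGMA_0 = 10e6       # friction stress (Pa), assumed

def yield_stress(rho):
    """Yield stress (MPa) from total dislocation density rho (m^-2)."""
    return (SIGMA_0 + ALPHA * M * G * B * math.sqrt(rho)) / 1e6

for rho in (2e12, 6.4e13, 6.2e14):  # densities reported in the abstract
    print(f"rho = {rho:.1e} m^-2 -> sigma_y ~ {yield_stress(rho):.0f} MPa")
```

With these assumed constants, the two-orders-of-magnitude rise in dislocation density translates into roughly a ninefold increase in the strengthening term, consistent with the strong hardening the abstract reports despite the unchanged grain size.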

  7. Flow Cytometry Pulse Width Data Enables Rapid and Sensitive Estimation of Biomass Dry Weight in the Microalgae Chlamydomonas reinhardtii and Chlorella vulgaris

    PubMed Central

    Chioccioli, Maurizio; Hankamer, Ben; Ross, Ian L.

    2014-01-01

    Dry weight biomass is an important parameter in algaculture. Direct measurement requires weighing milligram quantities of dried biomass, which is problematic for small-volume systems containing few cells, such as laboratory studies and high-throughput assays in microwell plates. In these cases indirect methods must be used, introducing measurement artefacts that vary in severity with the cell type and conditions employed. Here, we utilise flow cytometry pulse width data for the estimation of cell density and biomass, using Chlorella vulgaris and Chlamydomonas reinhardtii as model algae, and compare it to optical density methods. Measurement of cell concentration by flow cytometry was shown to be more sensitive than optical density at 750 nm (OD750) for monitoring culture growth. However, neither cell concentration nor optical density correlates well with biomass when growth conditions vary. Compared to growth of C. vulgaris in TAP (tris-acetate-phosphate) medium, cells grown in TAP + glucose displayed a slowed cell division rate and a 2-fold increase in dry biomass accumulation, accompanied by increased cellular volume. Laser scattering characteristics during flow cytometry were used to estimate cell diameters, and an empirical but nonlinear relationship was found between flow cytometric pulse width and dry weight biomass per cell. This relationship could be linearised by the use of hypertonic conditions (1 M NaCl) to dehydrate the cells, as shown by density gradient centrifugation. Flow cytometry for biomass estimation is easy to perform, sensitive, and offers more comprehensive information than optical density measurements. In addition, periodic flow cytometry measurements can be used to calibrate OD750 measurements for both convenience and accuracy. This approach is particularly useful for small samples and where cellular characteristics, especially cell size, are expected to vary during growth.
PMID:24832156

  8. Generation and decay dynamics of triplet excitons in Alq3 thin films under high-density excitation conditions.

    PubMed

    Watanabe, Sadayuki; Furube, Akihiro; Katoh, Ryuzi

    2006-08-31

    We studied the generation and decay dynamics of triplet excitons in tris-(8-hydroxyquinoline) aluminum (Alq3) thin films using transient absorption spectroscopy. Absorption spectra of both singlet and triplet excitons in the film were identified by comparison with previously reported transient absorption spectra of the ligand molecule (8-hydroxyquinoline) itself and of its excited triplet state in solution. By measuring the dependence of the absorption on excitation light intensity, we found that exciton annihilation dominated under high-density excitation conditions. Annihilation rate constants were estimated to be γSS = (6 ± 3) × 10⁻¹¹ cm³ s⁻¹ for singlet excitons and γTT = (4 ± 2) × 10⁻¹³ cm³ s⁻¹ for triplet excitons. Detailed analysis of the light-intensity dependence of the triplet exciton quantum yield under high-density conditions showed that triplet excitons were mainly generated through fission from highly excited singlet states populated by singlet-singlet exciton annihilation. We estimated that 30% of the highly excited states underwent fission.
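
The decay kinetics implied by these rate constants can be sketched by integrating dn/dt = −n/τ − γ_TT·n². The γ_TT value is the one from the abstract; the monomolecular lifetime τ and the initial triplet density are assumptions for illustration:

```python
# Euler-integration sketch of triplet-exciton decay with bimolecular
# annihilation: dn/dt = -n/tau - gamma_TT * n^2. gamma_TT is taken from the
# abstract; tau and the initial density n0 are assumed values.
GAMMA_TT = 4e-13   # cm^3 s^-1 (abstract)
TAU = 25e-6        # s, assumed triplet lifetime
N0 = 1e18          # cm^-3, assumed initial triplet density

def simulate(n0, t_end, dt=1e-9):
    """Triplet density (cm^-3) after t_end seconds, forward-Euler steps."""
    n, t = n0, 0.0
    while t < t_end:
        n += dt * (-n / TAU - GAMMA_TT * n * n)
        t += dt
    return n

# At high density the n^2 annihilation term dominates the early decay
print(simulate(N0, 1e-6))
```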

  9. Estimation of the probability of success in petroleum exploration

    USGS Publications Warehouse

    Davis, J.C.

    1977-01-01

    A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A. © 1977 Plenum Publishing Corp.
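
The first probabilistic component is an empirical conditional probability estimated from the training area, which amounts to a simple frequency ratio. A sketch with fabricated well records:

```python
# Sketch of the empirical conditional-probability estimate from a training
# area: P(success | geologic condition) as a frequency ratio over drilled
# wells. The well records below are fabricated illustrations.
wells = [
    # (has_structural_closure, discovered_oil)
    (True, True), (True, True), (True, False), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False), (False, False),
]

def p_success_given(condition_present):
    """Empirical P(discovery | condition state) from the training wells."""
    subset = [hit for cond, hit in wells if cond == condition_present]
    return sum(subset) / len(subset)

print(p_success_given(True))   # 3/5 = 0.6
print(p_success_given(False))  # 1/5 = 0.2
```

The second component, uncertainty in perceiving the geologic condition away from control wells, would then discount these probabilities as a function of distance from control, as the abstract describes.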

  10. Influence of Sky Conditions on Estimation of Photosynthetic Photon Flux Density for Agricultural Ecosystem

    NASA Astrophysics Data System (ADS)

    Yamashita, M.; Yoshimura, M.

    2018-04-01

    Photosynthetic photon flux density (PPFD: µmol m⁻² s⁻¹) is an indispensable quantity for plant physiological processes in photosynthesis. However, PPFD is seldom measured directly, so it is commonly estimated from solar radiation (SR: W m⁻²), which is measured worldwide. The SR-based method has two steps: first, photosynthetically active radiation (PAR: W m⁻²) is estimated using the fraction of PAR in SR (PF); second, PAR is converted to PPFD using the ratio of quanta to energy (Q/E: µmol J⁻¹). PF and Q/E have usually been treated as constants; however, recent studies point out that they are not constant under varying sky conditions. In this study, we use numeric sky-condition factors such as cloud cover, sun appearance/hiding, and relative sky brightness derived from whole-sky image processing, and examine their influence on PF and on the Q/E of global and diffuse PAR. We also discuss our results by comparison with existing methods.
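
The two-step conversion can be sketched with the commonly used constant values PF ≈ 0.45 and Q/E ≈ 4.57 µmol J⁻¹; the paper's point is precisely that these are not truly constant across sky conditions:

```python
# Two-step PPFD estimate described in the abstract, using the commonly
# quoted constant values PF ~ 0.45 (PAR fraction of SR) and
# Q/E ~ 4.57 umol J^-1. The abstract argues these vary with sky condition,
# so treat this as the baseline method, not the study's refinement.
PF = 0.45        # PAR / SR fraction (-), typical value
Q_PER_E = 4.57   # quanta-to-energy ratio (umol J^-1), typical value

def ppfd_from_sr(sr_w_m2, pf=PF, q_per_e=Q_PER_E):
    """Estimate PPFD (umol m^-2 s^-1) from solar radiation SR (W m^-2)."""
    par = pf * sr_w_m2       # step 1: PAR in W m^-2
    return q_per_e * par     # step 2: convert energy flux to photon flux

print(ppfd_from_sr(800.0))  # ~1645 umol m^-2 s^-1 at SR = 800 W m^-2
```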

  11. Analytic model to estimate thermonuclear neutron yield in z-pinches using the magnetic Noh problem

    NASA Astrophysics Data System (ADS)

    Allen, Robert C.

    The objective was to build a model which could be used to estimate neutron yield in pulsed z-pinch experiments, benchmark future z-pinch simulation tools and to assist scaling for breakeven systems. To accomplish this, a recent solution to the magnetic Noh problem was utilized which incorporates a self-similar solution with cylindrical symmetry and azimuthal magnetic field (Velikovich, 2012). The self-similar solution provides the conditions needed to calculate the time dependent implosion dynamics from which batch burn is assumed and used to calculate neutron yield. The solution to the model is presented. The ion densities and time scales fix the initial mass and implosion velocity, providing estimates of the experimental results given specific initial conditions. Agreement is shown with experimental data (Coverdale, 2007). A parameter sweep was done to find the neutron yield, implosion velocity and gain for a range of densities and time scales for DD reactions and a curve fit was done to predict the scaling as a function of preshock conditions.

  12. Tigers and their prey: Predicting carnivore densities from prey abundance

    USGS Publications Warehouse

    Karanth, K.U.; Nichols, J.D.; Kumar, N.S.; Link, W.A.; Hines, J.E.

    2004-01-01

    The goal of ecology is to understand interactions that determine the distribution and abundance of organisms. In principle, ecologists should be able to identify a small number of limiting resources for a species of interest, estimate densities of these resources at different locations across the landscape, and then use these estimates to predict the density of the focal species at these locations. In practice, however, development of functional relationships between abundances of species and their resources has proven extremely difficult, and examples of such predictive ability are very rare. Ecological studies of prey requirements of tigers Panthera tigris led us to develop a simple mechanistic model for predicting tiger density as a function of prey density. We tested our model using data from a landscape-scale long-term (1995-2003) field study that estimated tiger and prey densities in 11 ecologically diverse sites across India. We used field techniques and analytical methods that specifically addressed sampling and detectability, two issues that frequently present problems in macroecological studies of animal populations. Estimated densities of ungulate prey ranged between 5.3 and 63.8 animals per km2. Estimated tiger densities (3.2-16.8 tigers per 100 km2) were reasonably consistent with model predictions. The results provide evidence of a functional relationship between abundances of large carnivores and their prey under a wide range of ecological conditions. In addition to generating important insights into carnivore ecology and conservation, the study provides a potentially useful model for the rigorous conduct of macroecological science.
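
A mechanistic prey-to-predator model of this kind can be sketched by assuming tigers crop a fixed fraction of the prey standing crop each year and that each tiger needs a fixed number of prey per year. Both parameter values below are illustrative assumptions, not the study's fitted values:

```python
# Illustrative prey-to-predator density model: predators take a fixed
# fraction of the prey standing crop per year, and each predator needs a
# fixed number of prey per year. Both parameters are assumptions chosen
# for illustration, not the values fitted by Karanth et al.
CROPPING_FRACTION = 0.10   # fraction of prey standing crop taken per year (assumed)
PREY_PER_TIGER = 50.0      # ungulates consumed per tiger per year (assumed)

def tiger_density(prey_per_km2):
    """Predicted tigers per 100 km^2 from ungulate prey density (per km^2)."""
    prey_available = CROPPING_FRACTION * prey_per_km2 * 100.0  # prey/yr per 100 km^2
    return prey_available / PREY_PER_TIGER

for prey in (5.3, 63.8):  # prey-density range reported in the abstract
    print(f"{prey} prey/km^2 -> {tiger_density(prey):.1f} tigers/100 km^2")
```

With these assumed parameters the prey-density range in the abstract maps to roughly 1-13 tigers per 100 km², broadly in line with the reported 3.2-16.8 range, which illustrates how a linear functional relationship of this form can be confronted with field estimates.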

  13. Robust Estimation of Electron Density From Anatomic Magnetic Resonance Imaging of the Brain Using a Unifying Multi-Atlas Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Shangjie; Department of Radiation Oncology, Stanford University School of Medicine, Palo Alto, California; Hara, Wendy

    Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel two kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
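
The Bayesian fusion step can be sketched on a discrete electron-density grid: under a naive independence assumption the posterior is proportional to the product of the two conditionals. All numbers below are fabricated for illustration:

```python
import numpy as np

# Sketch of fusing two per-voxel conditional densities with Bayes' rule on
# a discrete grid: p(rho | intensity, location) is taken proportional to
# p(rho | intensity) * p(rho | location), a naive independence assumption.
# All distribution parameters are fabricated for illustration.
rho_grid = np.linspace(0.0, 2.0, 201)    # relative electron density grid

def discrete_gaussian(mu, sigma):
    p = np.exp(-0.5 * ((rho_grid - mu) / sigma) ** 2)
    return p / p.sum()

p_given_intensity = discrete_gaussian(1.4, 0.30)  # from T1/T2 intensity (assumed)
p_given_location = discrete_gaussian(1.1, 0.15)   # from atlas registration (assumed)

posterior = p_given_intensity * p_given_location
posterior /= posterior.sum()

estimate = rho_grid[np.argmax(posterior)]  # MAP estimate
print(round(float(estimate), 2))
```

Note how the fused estimate falls between the two conditional modes but closer to the sharper (location-based) one, which is the qualitative behavior that lets the spatial prior stabilize ambiguous intensities.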

  14. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  15. Strong consistency of nonparametric Bayes density estimation on compact metric spaces with applications to specific manifolds

    PubMed Central

    Bhattacharya, Abhishek; Dunson, David B.

    2012-01-01

    This article considers a broad class of kernel mixture density models on compact metric spaces and manifolds. Following a Bayesian approach with a nonparametric prior on the location mixing distribution, sufficient conditions are obtained on the kernel, prior and the underlying space for strong posterior consistency at any continuous density. The prior is also allowed to depend on the sample size n and sufficient conditions are obtained for weak and strong consistency. These conditions are verified on compact Euclidean spaces using multivariate Gaussian kernels, on the hypersphere using a von Mises-Fisher kernel and on the planar shape space using complex Watson kernels. PMID:22984295

  16. Spatial analysis of geologic and hydrologic features relating to sinkhole occurrence in Jefferson County, West Virginia

    USGS Publications Warehouse

    Doctor, Daniel H.; Doctor, Katarina Z.

    2012-01-01

    In this study the influence of geologic features related to sinkhole susceptibility was analyzed and the results were mapped for the region of Jefferson County, West Virginia. A model of sinkhole density was constructed using Geographically Weighted Regression (GWR) that estimated the relations among discrete geologic or hydrologic features and sinkhole density at each sinkhole location. Nine conditioning factors on sinkhole occurrence were considered as independent variables: distance to faults, fold axes, fracture traces oriented along bedrock strike, fracture traces oriented across bedrock strike, ponds, streams, springs, quarries, and interpolated depth to groundwater. GWR model parameter estimates for each variable were evaluated for significance, and the results were mapped. The results provide visual insight into the influence of these variables on localized sinkhole density, and can be used to provide an objective means of weighting conditioning factors in models of sinkhole susceptibility or hazard risk.

  17. Estimating population density for disease risk assessment: The importance of understanding the area of influence of traps using wild pigs as an example.

    PubMed

    Davis, Amy J; Leland, Bruce; Bodenchuk, Michael; VerCauteren, Kurt C; Pepin, Kim M

    2017-06-01

    Population density is a key driver of disease dynamics in wildlife populations. Accurate disease risk assessment and determination of management impacts on wildlife populations require an ability to estimate population density alongside management actions. A common management technique for controlling wildlife populations to monitor and mitigate disease transmission risk is trapping (e.g., box traps, corral traps, drop nets). Although abundance can be estimated from trapping actions using a variety of analytical approaches, inference is limited by the spatial extent over which a trap attracts animals on the landscape. If this "area of influence" were known, abundance estimates could be converted to densities. In addition to being an important predictor of contact rate and thus disease spread, density is more informative because it is comparable across sites of different sizes. The goal of our study is to demonstrate the importance of determining the area sampled by traps (the area of influence) so that density can be estimated from management-based trapping designs that do not employ a trapping grid. To provide one example of how the area of influence could be calculated alongside management, we conducted a small pilot study on wild pigs (Sus scrofa) using two removal methods, 1) trapping followed by 2) aerial gunning, at three sites in northeast Texas in 2015. We estimated abundance from trapping data with a removal model. We calculated empirical densities as aerial counts divided by the area searched by air (based on aerial flight tracks). We inferred the area of influence of traps by assuming consistent densities across the larger spatial scale and then solving for the area impacted by the traps. Based on our pilot study we estimated the area of influence for corral traps in late summer in Texas to be ~8.6 km². Future work showing the effects of behavioral and environmental factors on the area of influence will help managers obtain estimates of density from management data and determine conditions where trap attraction is strongest. The ability to estimate density alongside population control activities will improve risk assessment and response operations against disease outbreaks. Published by Elsevier B.V.
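
The back-calculation itself is simple: assuming the aerially derived density also holds at the trap sites, the area of influence is trap-based abundance divided by density. The numbers below are illustrative, chosen only to reproduce the ~8.6 km² figure:

```python
# Back-calculating a trap's "area of influence": if density estimated
# independently (here, from aerial counts over a known search area) is
# assumed to hold at the trap sites, then area = abundance / density.
# All three input values are illustrative assumptions.
trap_abundance = 43.0        # pigs estimated via the removal model (assumed)
aerial_count = 120.0         # pigs counted from the air (assumed)
area_searched_km2 = 24.0     # area covered by flight tracks (assumed)

density = aerial_count / area_searched_km2      # pigs per km^2
area_of_influence = trap_abundance / density    # km^2 sampled by the traps
print(round(area_of_influence, 1))  # 8.6 km^2
```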

  18. A new approach to evaluate gamma-ray measurements

    NASA Technical Reports Server (NTRS)

    Dejager, O. C.; Swanepoel, J. W. H.; Raubenheimer, B. C.; Vandervalt, D. J.

    1985-01-01

    Misunderstandings about the term "random sample" and its implications may easily arise. Conditions under which the phases obtained from arrival times do not form a random sample, and the dangers involved, are discussed. Watson's U² test for uniformity is recommended for light curves with duty cycles larger than 10%. Under certain conditions, non-parametric density estimation may be used to determine estimates of the true light curve and its parameters.
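
Watson's U² statistic for uniformity on the circle can be computed directly from the sorted phases. A minimal sketch (the critical values needed for an actual significance test are not included):

```python
# Watson's U^2 statistic for testing uniformity of phases on [0, 1);
# large values indicate departure from uniformity, i.e. a pulsed signal.
def watson_u2(phases):
    u = sorted(phases)
    n = len(u)
    mean = sum(u) / n
    s = sum((ui - (2 * i - 1) / (2 * n)) ** 2 for i, ui in enumerate(u, start=1))
    return s - n * (mean - 0.5) ** 2 + 1.0 / (12 * n)

# Perfectly even phases give the minimum value 1/(12n)
even = [(2 * i - 1) / 20 for i in range(1, 11)]
print(watson_u2(even))  # 1/120 ~ 0.00833

# Phases clustered near 0.5 (a strong pulse) give a much larger statistic
clustered = [0.45, 0.47, 0.48, 0.5, 0.5, 0.51, 0.52, 0.53, 0.55, 0.56]
print(watson_u2(clustered))
```

Unlike the Kolmogorov-Smirnov statistic, U² is invariant to the choice of phase origin, which is why it suits folded light curves.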

  19. A comparison of selected parametric and imputation methods for estimating snag density and snag quality attributes

    USGS Publications Warehouse

    Eskelson, Bianca N.I.; Hagar, Joan; Temesgen, Hailemariam

    2012-01-01

    Snags (standing dead trees) are an essential structural component of forests. Because wildlife use of snags depends on size and decay stage, snag density estimation without any information about snag quality attributes is of little value for wildlife management decision makers. Little work has been done to develop models that allow multivariate estimation of snag density by snag quality class. Using climate, topography, Landsat TM data, stand age and forest type collected for 2356 forested Forest Inventory and Analysis plots in western Washington and western Oregon, we evaluated two multivariate techniques for their abilities to estimate density of snags by three decay classes. The density of live trees and snags in three decay classes (D1: recently dead, little decay; D2: decay, without top, some branches and bark missing; D3: extensive decay, missing bark and most branches) with diameter at breast height (DBH) ≥ 12.7 cm was estimated using a nonparametric random forest nearest neighbor imputation technique (RF) and a parametric two-stage model (QPORD), for which the number of trees per hectare was estimated with a Quasipoisson model in the first stage and the probability of belonging to a tree status class (live, D1, D2, D3) was estimated with an ordinal regression model in the second stage. The presence of large snags with DBH ≥ 50 cm was predicted using a logistic regression and RF imputation. Because of the more homogenous conditions on private forest lands, snag density by decay class was predicted with higher accuracies on private forest lands than on public lands, while presence of large snags was more accurately predicted on public lands, owing to the higher prevalence of large snags on public lands. RF outperformed the QPORD model in terms of percent accurate predictions, while QPORD provided smaller root mean square errors in predicting snag density by decay class. 
The logistic regression model achieved more accurate presence/absence classification of large snags than the RF imputation approach. Adjusting the decision threshold to account for unequal size for presence and absence classes is more straightforward for the logistic regression than for the RF imputation approach. Overall, model accuracies were poor in this study, which can be attributed to the poor predictive quality of the explanatory variables and the large range of forest types and geographic conditions observed in the data.

  20. Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bandyopadhyay, M., E-mail: mainak@iter-india.org; Sudhir, Dass, E-mail: dass.sudhir@iter-india.org; Chakraborty, A., E-mail: arunkc@iter-india.org

    2015-04-08

    To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive probes such as optical emission spectroscopy are excluded from the present ITER NB design owing to overall system design and interface issues. As a result, the negative ion beam current through the extraction system is the only measurement that indicates the plasma condition inside the ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and on negative ion stripping. Moreover, the inductively coupled plasma production region (the RF driver) is located at a distance (~30 cm) from the extraction region, so some uncertainty is expected when linking beam current to plasma properties inside the RF driver. Characterizing the plasma in the RF driver region is essential for maintaining optimum conditions for source operation. In this paper, a method of plasma density estimation is described, based on calculation of the density-dependent plasma load.

  1. Cost and performance model for redox flow batteries

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vilayanur; Crawford, Alasdair; Stephenson, David; Kim, Soowhan; Wang, Wei; Li, Bin; Coffey, Greg; Thomsen, Ed; Graff, Gordon; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2014-02-01

    A cost model is developed for all-vanadium and iron-vanadium redox flow batteries. Electrochemical performance modeling is done to estimate stack performance at various power densities as a function of state of charge and operating conditions. This is supplemented with a shunt current model and a pumping loss model to estimate actual system efficiency. Operating parameters such as power density and flow rates, and design parameters such as electrode aspect ratio and flow frame channel dimensions, are adjusted to maximize efficiency and minimize capital costs. Detailed cost estimates are obtained from various vendors for present, near-term and optimistic scenarios. The most cost-effective chemistries with optimum operating conditions for power- or energy-intensive applications are determined, providing a roadmap for battery management systems development for redox flow batteries. The main drivers for cost reduction for various chemistries are identified as a function of the energy-to-power ratio of the storage system. Levelized cost analyses further guide the suitability of various chemistries for different applications.

  2. Posterior consistency in conditional distribution estimation

    PubMed Central

    Pati, Debdeep; Dunson, David B.; Tokdar, Surya T.

    2014-01-01

    A wide variety of priors have been proposed for nonparametric Bayesian estimation of conditional distributions, and there is a clear need for theorems providing conditions on the prior for large support, as well as posterior consistency. Estimation of an uncountable collection of conditional distributions across different regions of the predictor space is a challenging problem, which differs in some important ways from density and mean regression estimation problems. Defining various topologies on the space of conditional distributions, we provide sufficient conditions for posterior consistency focusing on a broad class of priors formulated as predictor-dependent mixtures of Gaussian kernels. This theory is illustrated by showing that the conditions are satisfied for a class of generalized stick-breaking process mixtures in which the stick-breaking lengths are monotone, differentiable functions of a continuous stochastic process. We also provide a set of sufficient conditions for the case where stick-breaking lengths are predictor independent, such as those arising from a fixed Dirichlet process prior. PMID:25067858

  3. Use of ordinary kriging and Gaussian conditional simulation to interpolate airborne fire radiative energy density estimates

    Treesearch

    C. Klauberg; A. T. Hudak; B. C. Bright; L. Boschetti; M. B. Dickinson; R. L. Kremens; C. A. Silva

    2018-01-01

    Fire radiative energy density (FRED, J m-2) integrated from fire radiative power density (FRPD, W m-2) observations of landscape-level fires can present an undersampling problem when collected from fixed-wing aircraft. In the present study, the aircraft made multiple passes over the fire at ~3 min intervals, thus failing to observe most of the FRPD emitted as the flame...

  4. Grassland bird densities in seral stages of mixed-grass prairie

    Treesearch

    Shawn C. Fritcher; Mark A. Rumble; Lester D. Flake

    2004-01-01

    Birds associated with prairie ecosystems are declining and the ecological condition (seral stage) of remaining grassland communities may be a factor. Livestock grazing intensity influences the seral stage of grassland communities and resource managers lack information to assess how grassland birds are affected by these changes. We estimated bird density, species...

  5. A Comprehensive Snow Density Model for Integrating Lidar-Derived Snow Depth Data into Spatial Snow Modeling

    NASA Astrophysics Data System (ADS)

    Marks, D. G.; Kormos, P.; Johnson, M.; Bormann, K. J.; Hedrick, A. R.; Havens, S.; Robertson, M.; Painter, T. H.

    2017-12-01

    Lidar-derived snow depths, when combined with modeled or estimated snow density, can provide reliable estimates of the distribution of SWE over large mountain areas. Application of this approach is transforming western snow hydrology. We present a comprehensive approach to modeling bulk snow density that is reliable over a vast range of weather and snow conditions. The method is applied and evaluated over mountainous regions of California, Idaho, Oregon and Colorado in the western US. Simulated and measured snow densities are compared at fourteen validation sites where measurements of snow mass (SWE) and depth are co-located. Fitting statistics for ten sites from three mountain catchments (two in Idaho, one in California) show an average Nash-Sutcliffe model efficiency coefficient of 0.83 and a mean bias of 4 kg m-3. Results illustrate issues associated with monitoring snow depth and SWE and show the effectiveness of the model, with a small mean bias across a range of snow and climate conditions in the west.

  6. The puzzling spectrum of HD 94509. Sounding out the extremes of Be shell star spectral morphology

    NASA Astrophysics Data System (ADS)

    Cowley, C. R.; Przybilla, N.; Hubrig, S.

    2015-06-01

    Context. The spectral features of HD 94509 are highly unusual, adding an extreme to the zoo of Be and shell stars. The shell dominates the spectrum, showing lines typical for spectral types mid-A to early-F, while the presence of a late/mid B-type central star is indicated by photospheric hydrogen line wings and helium lines. Numerous metallic absorption lines have broad wings but taper to narrow cores. They cannot be fit by Voigt profiles. Aims: We describe and illustrate unusual spectral features of this star, and make rough calculations to estimate physical conditions and abundances in the shell. Furthermore, the central star is characterized. Methods: We assume mean conditions for the shell. An electron density estimate is made from the Inglis-Teller formula. Excitation temperatures and column densities for Fe i and Fe ii are derived from curves of growth. The neutral H column density is estimated from high Paschen members. The column densities are compared with calculations made with the photoionization code Cloudy. Atmospheric parameters of the central star are constrained employing non-LTE spectrum synthesis. Results: Overall chemical abundances are close to solar. Column densities of the dominant ions of several elements, as well as excitation temperatures and the mean electron density are well accounted for by a simple model. Several features, including the degree of ionization, are less well described. Conclusions: HD 94509 is a Be star with a stable shell, close to the terminal-age main sequence. The dynamical state of the shell and the unusually shaped, but symmetric line profiles, require a separate study.
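
The Inglis-Teller estimate uses the principal quantum number n_m of the last resolvable series line; in its common simplified hydrogen-like form, log₁₀ N_e ≈ 23.26 − 7.5 log₁₀ n_m with N_e in cm⁻³. A sketch (the n_m value used is an arbitrary example, not the one measured for HD 94509):

```python
import math

# Inglis-Teller estimate of electron density from the principal quantum
# number n_max of the last resolvable line of a hydrogen series, in the
# common simplified form log10(N_e [cm^-3]) = 23.26 - 7.5 * log10(n_max).
def inglis_teller_ne(n_max):
    """Electron density (cm^-3) from the last resolved series line n_max."""
    return 10.0 ** (23.26 - 7.5 * math.log10(n_max))

# E.g., Paschen lines resolved up to n = 20 imply N_e of order 1e13 cm^-3
print(f"{inglis_teller_ne(20):.2e}")
```

The steep n_max dependence means that resolving even a few more high series members lowers the inferred density substantially, which is what makes the high Paschen members a useful shell diagnostic.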

  7. Influence of fruit age of the Brazilian Green Dwarf coconut on the relationship between Aceria guerreronis population density and percentage of fruit damage.

    PubMed

    Sousa, André Silva Guimarães; Argolo, Poliane Sá; Gondim, Manoel Guedes Correa; de Moraes, Gilberto José; Oliveira, Anibal Ramadan

    2017-08-01

    The coconut mite, Aceria guerreronis Keifer (Acari: Eriophyidae), is one of the main coconut pests in the Americas, Africa and parts of Asia, reaching densities of several thousand mites per fruit. Diagrammatic scales have been developed to standardize the estimation of A. guerreronis population densities from the estimated percentage of damage, but these have not taken into account the possible effect of fruit age, although previous studies have reported that mite numbers vary with fruit age. The objective of this study was to reconstruct the relationship between damage and mite density for fruits of different ages, sampled from an urban plantation of the green dwarf variety spanning the range from the beginning to nearly the end of the infestation, as regularly seen under field conditions in northeast Brazil, in order to improve future estimates made with diagrammatic scales. The percentage of damage was estimated with two diagrammatic scales on a total of 470 fruits from 1 to 5 months old from a field at Ilhéus, Bahia, Brazil, and the number of mites on each fruit was determined. The results suggest that, in estimates with diagrammatic scales: (1) fruit age has a major effect on the estimation of A. guerreronis densities; (2) fruits of different ages should be analyzed separately; and (3) regular evaluation of infestation levels should preferably be done on fruits about 3-4 months old, which show the highest densities.

  8. Evolution of the substructure of a novel 12% Cr steel under creep conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, Surya Deo, E-mail: surya.yadav@tugraz.at; Kalácska, Szilvia, E-mail: kalacska@metal.elte.hu; Dománková, Mária, E-mail: maria.domankova@stuba.sk

    2016-05-15

In this work we study the microstructure evolution of a newly developed 12% Cr martensitic/ferritic steel in the as-received condition and after creep at 650 °C under 130 MPa and 80 MPa. The microstructure is described as consisting of mobile dislocations, dipole dislocations, boundary dislocations, precipitates, lath boundaries, block boundaries, packet boundaries and prior austenitic grain boundaries. The material is characterized employing light optical microscopy (LOM), scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and electron backscatter diffraction (EBSD). TEM is used to characterize the dislocations (mobile + dipole) inside the subgrains, and XRD measurements are used to characterize the mobile dislocations. Based on the subgrain boundary misorientations obtained from EBSD measurements, the boundary dislocation density is estimated. The total dislocation density is estimated for the as-received and crept conditions by adding the mobile, boundary and dipole dislocation densities. Additionally, the subgrain size is estimated from the EBSD measurements. In this publication we propose the combined use of three characterization techniques, TEM, XRD and EBSD, as necessary to characterize all types of dislocations and quantify the total dislocation density in martensitic/ferritic steels. - Highlights: • Creep properties of a novel 12% Cr steel alloyed with Ta • Experimental characterization of different types of dislocations: mobile, dipole and boundary • Characterization and interpretation of the substructure evolution using a unique combination of TEM, XRD and EBSD.

  9. Estimating the D-Region Ionospheric Electron Density Profile Using VLF Narrowband Transmitters

    NASA Astrophysics Data System (ADS)

    Gross, N. C.; Cohen, M.

    2016-12-01

The D-region ionospheric electron density profile plays an important role in many applications, including long-range and transionospheric communications, coupling between the lower atmosphere and the upper ionosphere, and estimation of very low frequency (VLF) wave propagation within the Earth-ionosphere waveguide. However, measuring the D-region electron density profile has been a challenge. The D-region lies at about 60 to 90 km in altitude, which is higher than planes and balloons can fly but lower than satellites can orbit. Researchers have previously used VLF remote sensing techniques, based on either narrowband transmitters or sferics, to estimate the density profile, but these estimates typically cover a short time frame and a single propagation path. We report on an effort to construct estimates of the D-region ionospheric electron density profile over multiple narrowband transmission paths for long periods of time. Measurements from multiple transmitters at multiple receivers are analyzed concurrently to minimize false solutions and improve accuracy. Likewise, time averaging is used to remove short transient noise at the receivers. The cornerstone of the algorithm is an artificial neural network (ANN), whose inputs are the received amplitude and phase for the narrowband transmitters and whose outputs are the h' and beta parameters of the commonly used two-parameter exponential electron density profile. Training data for the ANN are generated using the Navy's Long-Wavelength Propagation Capability (LWPC) model. Results show the algorithm performs well under smooth ionospheric conditions and when proper geometries for the transmitters and receivers are used.
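
    The two-parameter exponential profile referred to above is conventionally written in the Wait and Spies form; a minimal sketch follows, with the standard constants from the VLF literature (the specific h' and beta values are illustrative defaults, not results from this work):

```python
import math

def wait_profile_ne(h_km, h_prime_km=74.0, beta_per_km=0.3):
    """D-region electron density (m^-3) in the Wait and Spies
    two-parameter exponential form:
        Ne(h) = 1.43e13 * exp(-0.15*h') * exp((beta - 0.15) * (h - h'))
    where h' (km) sets the reference height and beta (km^-1) the
    sharpness of the profile -- the two quantities the ANN outputs."""
    return (1.43e13
            * math.exp(-0.15 * h_prime_km)
            * math.exp((beta_per_km - 0.15) * (h_km - h_prime_km)))
```

    With only two free parameters, an inversion from amplitude/phase measurements is far better posed than recovering a free-form profile, which is what makes the ANN mapping tractable.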

  10. Digital photography for urban street tree crown conditions

    Treesearch

    Neil A. Clark; Sang-Mook Lee; William A. Bechtold; Gregory A. Reams

    2006-01-01

    Crown variables such as height, diameter, live crown ratio, dieback, transparency, and density are all collected as part of the overall crown assessment (USDA 2004). Transparency and density are related to the amount of foliage and thus the photosynthetic potential of the tree. These measurements are both currently based on visual estimates and have been shown to be...

  11. Estimation of the degree of hydrogen bonding between quinoline and water by ultraviolet-visible absorbance spectroscopy in sub- and supercritical water

    NASA Astrophysics Data System (ADS)

    Osada, Mitsumasa; Toyoshima, Katsunori; Mizutani, Takakazu; Minami, Kimitaka; Watanabe, Masaru; Adschiri, Tadafumi; Arai, Kunio

    2003-03-01

UV-visible spectra of quinoline were measured in sub- and supercritical water (25 °C

  12. Mathematical models for nonparametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y|r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires we know the transformation of r given by f(0|r).
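
    As a concrete illustration of right-angle-distance estimation, here is a minimal sketch of the classical line-transect estimator D = n·f(0)/(2L), under an assumed half-normal detection function (the half-normal form is my illustrative choice; the abstract's point is precisely that f(0) can also be estimated nonparametrically):

```python
import math

def line_transect_density(perp_distances_m, transect_length_m):
    """Line-transect density D = n * f(0) / (2 * L), with f(0) obtained
    from a half-normal detection model g(y) = exp(-y^2 / (2 sigma^2));
    the MLE of sigma^2 is the mean squared perpendicular distance."""
    n = len(perp_distances_m)
    sigma2 = sum(y * y for y in perp_distances_m) / n
    f0 = math.sqrt(2.0 / (math.pi * sigma2))   # density of |y| at 0
    return n * f0 / (2.0 * transect_length_m)  # animals per square metre
```

    Replacing the half-normal f(0) with a kernel or series estimate evaluated at zero gives the nonparametric variants the paper develops.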

  13. Constraints on rapidity-dependent initial conditions from charged-particle pseudorapidity densities and two-particle correlations

    NASA Astrophysics Data System (ADS)

    Ke, Weiyao; Moreland, J. Scott; Bernhard, Jonah E.; Bass, Steffen A.

    2017-10-01

We study the initial three-dimensional spatial configuration of the quark-gluon plasma (QGP) produced in relativistic heavy-ion collisions using centrality and pseudorapidity-dependent measurements of the medium's charged particle density and two-particle correlations. A cumulant-generating function is first used to parametrize the rapidity dependence of local entropy deposition and extend arbitrary boost-invariant initial conditions to nonzero beam rapidities. The model is then compared to p+Pb and Pb+Pb charged-particle pseudorapidity densities and two-particle pseudorapidity correlations and systematically optimized using Bayesian parameter estimation to extract high-probability initial condition parameters. The optimized initial conditions are then compared to a number of experimental observables including the pseudorapidity-dependent anisotropic flows, event-plane decorrelations, and flow correlations. We find that the form of the initial local longitudinal entropy profile is well constrained by these experimental measurements.

  14. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. 
In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. 
In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
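
    The grid-based estimator D̂ = N̂/Â with an MMDM boundary-strip adjustment can be sketched minimally (a square grid with a uniform strip is my simplification; the effective area used in CAPTURE-based analyses is more involved):

```python
def grid_density_per_ha(n_hat, grid_side_m, mmdm_m):
    """Grid-based density D = N / A: the effective sampling area A is the
    trapping grid plus a boundary strip of width MMDM (mean maximum
    distance moved) on every side."""
    effective_side_m = grid_side_m + 2.0 * mmdm_m
    area_ha = effective_side_m ** 2 / 10000.0   # m^2 -> hectares
    return n_hat / area_ha
```

    The sensitivity of D̂ to the assumed strip width is exactly the "theoretical assumptions of uncertain validity" the authors flag for the MMDM-adjusted grid estimates.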

  15. Honest Importance Sampling with Multiple Markov Chains

    PubMed Central

    Tan, Aixin; Doss, Hani; Hobert, James P.

    2017-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection. PMID:28701855
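
    The iid importance-sampling estimator described above, with its routine standard error, can be sketched as follows (self-normalized form, so unnormalized log-densities suffice; the regenerative machinery needed for the MCMC case, which is the paper's real subject, is not shown):

```python
import math
import random

def importance_sample(f, log_pi, log_pi1, sample_pi1, n=20000, seed=0):
    """Self-normalized importance sampling: draws from pi_1 estimate
    E_pi[f], with a delta-method standard error (valid in the iid case)."""
    rng = random.Random(seed)
    xs = [sample_pi1(rng) for _ in range(n)]
    ws = [math.exp(log_pi(x) - log_pi1(x)) for x in xs]
    total = sum(ws)
    est = sum(w * f(x) for w, x in zip(ws, xs)) / total
    se = math.sqrt(sum((w * (f(x) - est)) ** 2
                       for w, x in zip(ws, xs))) / total
    return est, se

# Hypothetical example: target pi = N(1, 1), proposal pi_1 = N(0, 2);
# the posterior mean of pi should come out close to 1.
est, se = importance_sample(
    f=lambda x: x,
    log_pi=lambda x: -0.5 * (x - 1.0) ** 2,   # unnormalized log N(1, 1)
    log_pi1=lambda x: -x * x / 8.0,           # unnormalized log N(0, 2)
    sample_pi1=lambda rng: rng.gauss(0.0, 2.0),
)
```

    When the draws come from a Markov chain rather than iid sampling, this standard error is no longer valid, which is the problem regenerative simulation solves.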

  16. Honest Importance Sampling with Multiple Markov Chains.

    PubMed

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. 
The second involves Bayesian variable selection in linear regression, and for this application, importance sampling based on multiple chains enables an empirical Bayes approach to variable selection.

  17. Evaluation of Techniques Used to Estimate Cortical Feature Maps

    PubMed Central

    Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2011-01-01

    Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
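
    The tessellation-versus-interpolation comparison can be illustrated with a toy 1-D "map" (my construction, not the authors' simulation): reconstruct f(x) = sin(2πx) from nonuniform samples using nearest-neighbour assignment (the 1-D analogue of a tessellation plot) versus linear interpolation, and compare mean absolute error.

```python
import bisect
import math
import random

def map_reconstruction_error(n_samples, method, seed=1):
    """Mean absolute error of reconstructing the toy feature map
    f(x) = sin(2*pi*x) on [0, 1] from nonuniform samples, using
    nearest-neighbour ('tessellation') or linear interpolation."""
    rng = random.Random(seed)
    xs = sorted(rng.random() for _ in range(n_samples))
    f = lambda x: math.sin(2.0 * math.pi * x)

    def estimate(x):
        if x <= xs[0]:
            return f(xs[0])
        if x >= xs[-1]:
            return f(xs[-1])
        i = bisect.bisect_right(xs, x) - 1   # xs[i] <= x < xs[i+1]
        if method == "nearest":
            nearer = xs[i] if x - xs[i] <= xs[i + 1] - x else xs[i + 1]
            return f(nearer)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])   # linear interpolation
        return (1.0 - t) * f(xs[i]) + t * f(xs[i + 1])

    grid = [k / 500.0 for k in range(501)]
    return sum(abs(estimate(x) - f(x)) for x in grid) / len(grid)
```

    For a smooth map, linear interpolation's error shrinks quadratically with sample spacing while nearest-neighbour's shrinks only linearly, consistent with the finding that interpolated estimates beat tessellation plots at every sampling density.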

  18. Temporal variations of potential fecundity of southern blue whiting (Micromesistius australis australis) in the Southeast Pacific

    NASA Astrophysics Data System (ADS)

    Flores, Andrés; Wiff, Rodrigo; Díaz, Eduardo; Carvajal, Bernardita

    2017-08-01

Fecundity is a key aspect of fish reproductive biology because it relates directly to total egg production. Yet, despite such importance, fecundity estimates are lacking or scarce for several fish species. The gravimetric method is the one most used to estimate fecundity, essentially scaling the oocyte density up to the ovary weight. It is a relatively simple and precise technique, but also time consuming, because it requires counting all oocytes in an ovary subsample. The auto-diametric method, on the other hand, is relatively new for estimating fecundity and represents a rapid alternative, because it requires only an estimate of mean oocyte density derived from mean oocyte diameter. Using the extensive database available from the commercial fishery and designed surveys for southern blue whiting Micromesistius australis australis in the Southeast Pacific, we compared estimates of fecundity using both the gravimetric and auto-diametric methods. Temporal variations in potential fecundity from the auto-diametric method were evaluated using generalised linear models with predictors from maternal characteristics such as female size, condition factor, oocyte size, and gonadosomatic index. A global, time-invariant auto-diametric equation was evaluated using a simulation procedure based on non-parametric bootstrap. Results indicated no significant differences in fecundity estimates between the gravimetric and auto-diametric methods (p > 0.05). The simulation showed that applying a global equation is unbiased and sufficiently precise to estimate time-invariant fecundity for this species. Temporal variations in fecundity were explained by maternal characteristics, revealing signals of fecundity down-regulation. We discuss how oocyte size and nutritional condition (measured as condition factor) are among the important factors determining fecundity. 
We also highlight the relevance of choosing an appropriate sampling period to conduct maturity studies and ensure precise estimates of fecundity for this species.

  19. Material properties of zooplankton and nekton from the California current

    NASA Astrophysics Data System (ADS)

    Becker, Kaylyn

This study measured the material properties of zooplankton, Pacific hake (Merluccius productus), Humboldt squid (Dosidicus gigas), and two species of myctophids (Symbolophorus californiensis and Diaphus theta) collected from the California Current ecosystem. The density contrast (g) was measured for euphausiids, decapods (Sergestes similis), amphipods (Primno macropa, Phronima sp., and Hyperiid spp.), siphonophore bracts, chaetognaths, larval fish, crab megalopae, larval squid, and medusae. Morphometric data (length, width, and height) were collected for these taxa. Density contrasts varied within and between zooplankton taxa. The mean and standard deviation for euphausiid density contrast were 1.059 +/- 0.009. Relationships between zooplankton density contrast and morphometric measurements, geographic location, and environmental conditions were investigated. Site had a significant effect on euphausiid density contrast. Density contrasts of euphausiids collected in the same geographic area approximately 4-10 days apart were significantly higher (p < 0.001). Sound speed contrast (h) was measured for euphausiids and pelagic decapods (S. similis) and it varied between taxa. The mean and standard deviation for euphausiid sound speed were 1.019 +/- 0.009. Euphausiid mass was calculated from density measurements and volume, and a relationship between euphausiid mass and length was produced. We determined that euphausiid volumes could be accurately estimated from two-dimensional measurements of animal body shape, and that biomass (or biovolume) could be accurately calculated from digital photographs of animals. Density contrast (g) was measured for zooplankton, pieces of hake flesh, myctophid flesh, and the following Humboldt squid body parts: mantle, arms, tentacle, braincase, eyes, pen, and beak. The density contrasts varied within and between fish taxa, as well as among squid body parts. 
Effects of animal length and environmental conditions on nekton density contrast were investigated. The sound speed contrast (h) was measured for Pacific hake flesh, myctophid flesh, Humboldt squid mantle, and Humboldt squid braincase. Sound speed varied within and between nekton taxa. The material properties reported in this study can be used to improve target strength estimates from acoustic scattering models which would increase the accuracy of biomass estimates from acoustic surveys for these zooplankton and nekton.

  20. Area change reporting using the desktop FIADB

    Treesearch

    Patrick D. Miles; Mark H. Hansen

    2012-01-01

    The estimation of area change between two FIA inventories is complicated by the "mapping" of subplots. Subplots can be subdivided or mapped into forest and nonforest conditions, and forest conditions can be further mapped based on distinct changes in reserved status, owner group, forest type, stand-size class, regeneration status, and stand density. The...

  1. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often this distribution can be characterized by a Gaussian, but just as often it is much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems, and conditions under which it fails, will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
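
    A minimal numerical sketch of the maximum-entropy method of moments (my implementation via the convex dual on a fixed grid; the paper's Bayesian extension puts posterior distributions on the Lagrange multipliers rather than the point estimate computed here):

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(moments, grid):
    """Maximum-entropy density on `grid` matching raw moments
    mu_j = E[x^j], j = 1..k. The solution has the exponential-family
    form p(x) proportional to exp(-sum_j lam_j x^j); the multipliers
    minimize the convex dual  log Z(lam) + sum_j lam_j mu_j."""
    mu = np.asarray(moments, dtype=float)
    powers = np.vstack([grid ** (j + 1) for j in range(len(mu))])
    dx = grid[1] - grid[0]

    def dual(lam):
        logp = -lam @ powers
        m = logp.max()                        # log-sum-exp for stability
        log_z = m + np.log(np.exp(logp - m).sum() * dx)
        return log_z + lam @ mu

    lam = minimize(dual, np.zeros(len(mu)), method="Nelder-Mead").x
    p = np.exp(-lam @ powers)
    return p / (p.sum() * dx)                 # normalize on the grid
```

    With only the first two moments constrained, the maximum-entropy solution is a Gaussian, which makes a convenient sanity check; the failure modes the article discusses arise when higher moments make the dual ill-conditioned.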

  2. Mathematical models for non-parametric inferences from line transect data

    USGS Publications Warehouse

    Burnham, K.P.; Anderson, D.R.

    1976-01-01

A general mathematical theory of line transects is developed which supplies a framework for nonparametric density estimation based on either right angle or sighting distances. The probability of observing a point given its right angle distance (y) from the line is generalized to an arbitrary function g(y). Given only that g(0) = 1, it is shown there are nonparametric approaches to density estimation using the observed right angle distances. The model is then generalized to include sighting distances (r). Let f(y | r) be the conditional distribution of right angle distance given sighting distance. It is shown that nonparametric estimation based only on sighting distances requires we know the transformation of r given by f(0 | r).

  3. The influence of the Ar/O2 ratio on the electron density and electron temperature in microwave discharges

    NASA Astrophysics Data System (ADS)

    Espinho, S.; Hofmann, S.; Palomares, J. M.; Nijdam, S.

    2017-10-01

The aim of this work is to study the properties of Ar-O2 microwave driven surfatron plasmas as a function of the Ar/O2 ratio in the gas mixture. The key parameters are the plasma electron density and electron temperature, which are estimated with Thomson scattering (TS) for O2 contents up to 50% of the total gas flow. A sharp drop in the electron density from 10^20 m^-3 to approximately 10^18 m^-3 is estimated as the O2 content in the gas mixture is increased up to 15%. For percentages of O2 lower than 10%, the electron temperature is estimated to be about 2-3 times higher than in the case of a pure argon discharge under the same conditions (Te ≈ 1 eV) and gradually decreases as the O2 percentage is raised to 50%. However, for O2 percentages above 30%, the scattering spectra become Raman dominated, resulting in large uncertainties in the estimated electron densities and temperatures. The influence of photo-detached electrons from negative ions caused by the typical TS laser fluences is also likely to contribute to the uncertainty in the measured electron densities for high O2 percentages. Moreover, the detection limit of the system is reached for percentages of O2 higher than 25%. Additionally, both the electron density and temperature of microwave discharges with large Ar/O2 ratios are more sensitive to gas pressure variations.

  4. Accounting for unsearched areas in estimating wind turbine-caused fatality

    USGS Publications Warehouse

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    With wind energy production expanding rapidly, concerns about turbine-induced bird and bat fatality have grown and the demand for accurate estimation of fatality is increasing. Estimation typically involves counting carcasses observed below turbines and adjusting counts by estimated detection probabilities. Three primary sources of imperfect detection are 1) carcasses fall into unsearched areas, 2) carcasses are removed or destroyed before sampling, and 3) carcasses present in the searched area are missed by observers. Search plots large enough to comprise 100% of turbine-induced fatality are expensive to search and may nonetheless contain areas unsearchable because of dangerous terrain or impenetrable brush. We evaluated models relating carcass density to distance from the turbine to estimate the proportion of carcasses expected to fall in searched areas and evaluated the statistical cost of restricting searches to areas near turbines where carcass density is highest and search conditions optimal. We compared 5 estimators differing in assumptions about the relationship of carcass density to distance from the turbine. We tested them on 6 different carcass dispersion scenarios at each of 3 sites under 2 different search regimes. We found that even simple distance-based carcass-density models were more effective at reducing bias than was a 5-fold expansion of the search area. Estimators incorporating fitted rather than assumed models were least biased, even under restricted searches. Accurate estimates of fatality at wind-power facilities will allow critical comparisons of rates among turbines, sites, and regions and contribute to our understanding of the potential environmental impact of this technology.
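
    The role of a carcass density-distance model can be illustrated with a minimal sketch. It assumes an isotropic exponential falloff of carcass density with distance from the turbine, which is my illustrative choice, not one of the paper's five estimators:

```python
import math

def searched_fraction(scale_m, search_radius_m):
    """Fraction of carcasses expected within a circular plot of radius R,
    when carcass density falls off as exp(-r / scale) with distance r
    from the turbine. The expected count in the ring r..r+dr is
    proportional to r * exp(-r/scale) dr, so the fraction is the
    truncated integral divided by the full one (which equals scale^2)."""
    s, big_r = float(scale_m), float(search_radius_m)
    truncated = s * s - (s * big_r + s * s) * math.exp(-big_r / s)
    return truncated / (s * s)

def adjusted_fatality(carcass_count, scale_m, search_radius_m):
    """Scale the raw carcass count up by the searched proportion."""
    return carcass_count / searched_fraction(scale_m, search_radius_m)
```

    Fitting the scale parameter from the observed distance data, rather than assuming it, is what distinguishes the least-biased estimators in the comparison above.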

  5. Riparian vegetation and its water use during 1995 along the Mojave River, Southern California

    USGS Publications Warehouse

    Lines, Gregory C.; Bilhorn, Thomas W.

    1996-01-01

    The extent and areal density of riparian vegetation, including both phreatophytes and hydrophytes, were mapped along the 100-mile main stem of the Mojave River during 1995. Mapping was aided by vertical false-color infrared and low-level oblique photographs. However, positive identification of plant species and plant physiological stress required field examination. The consumptive use of ground water and surface water by different areal densities of riparian plant communities along the main stem of the Mojave River was estimated using water-use data from a select group of studies in the southwestern United States. In the Alto subarea of the Mojave basin management area, consumptive water use during 1995 by riparian vegetation was estimated to be about 5,000 acre-feet upstream from the Lower Narrows and about 6,000 acre-feet downstream in the transition zone. In the Centro and Baja subareas, consumptive water use was estimated to be about 3,000 acre-feet and 2,000 acre-feet, respectively, during 1995. Consumptive water use by riparian vegetation in the Afton area, downstream from the Baja subarea, was estimated to be about 600 acre-feet during 1995. Consumptive water use by riparian vegetation during 1995 is considered representative of "normal" hydrologic conditions along the Mojave River. Barring major changes in the areal extent and density of riparian vegetation, the 1995 consumptive-use estimates should be fairly representative of riparian vegetation water use during most years. Annual consumptive use, however, could vary from the 1995 estimates as much as plus or minus 50 percent because of extreme hydrologic conditions (periods of high water table following extraordinarily large runoff in the Mojave River or periods of extended drought).

  6. A bias-corrected estimator in multiple imputation for missing data.

    PubMed

    Tomita, Hiroaki; Fujisawa, Hironori; Henmi, Masayuki

    2018-05-29

Multiple imputation (MI) is one of the most popular methods to deal with missing data, and its use has been rapidly increasing in medical studies. Although MI is rather appealing in practice, since it is possible to use ordinary statistical methods on a complete data set once the missing values are fully imputed, the method of imputation is still problematic. If the missing values are imputed from some parametric model, the validity of imputation is not necessarily ensured, and the final estimate for a parameter of interest can be biased unless the parametric model is correctly specified. Nonparametric methods have also been proposed for MI, but it is not straightforward to produce imputation values from nonparametrically estimated distributions. In this paper, we propose a new method for MI to obtain a consistent (or asymptotically unbiased) final estimate even if the imputation model is misspecified. The key idea is to use an imputation model from which the imputation values are easily produced and to make a proper correction in the likelihood function after the imputation, using the density ratio between the imputation model and the true conditional density function for the missing variable as a weight. Although the conditional density must be nonparametrically estimated, it is not used for the imputation. The performance of our method is evaluated by both theory and simulation studies. A real data analysis is also conducted to illustrate our method by using the Duke Cardiac Catheterization Coronary Artery Disease Diagnostic Dataset. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Exposure time of oral rabies vaccine baits relative to baiting density and raccoon population density.

    PubMed

    Blackwell, Bradley F; Seamans, Thomas W; White, Randolph J; Patton, Zachary J; Bush, Rachel M; Cepek, Jonathan D

    2004-04-01

    Oral rabies vaccination (ORV) baiting programs for control of raccoon (Procyon lotor) rabies in the USA have been conducted or are in progress in eight states east of the Mississippi River. However, data specific to the relationship between raccoon population density and the minimum density of baits necessary to significantly elevate rabies immunity are few. We used the 22-km2 US National Aeronautics and Space Administration Plum Brook Station (PBS) in Erie County, Ohio, USA, to evaluate the period of exposure for placebo vaccine baits placed at a density of 75 baits/km2 relative to raccoon population density. Our objectives were to 1) estimate raccoon population density within the fragmented forest, old-field, and industrial landscape at PBS; and 2) quantify the time that placebo Merial RABORAL V-RG vaccine baits were available to raccoons. From August through November 2002, we surveyed raccoon use of PBS along 19.3 km of paved-road transects by using a forward-looking infrared camera mounted inside a vehicle. We used Distance 3.5 software to calculate a probability-of-detection function by which we estimated raccoon population density from transect data. Estimated population density on PBS decreased from August (33.4 raccoons/km2) through November (13.6 raccoons/km2), yielding a monthly mean of 24.5 raccoons/km2. We also quantified exposure time for ORV baits placed by hand on five 1-km2 grids on PBS from September through October. An average of 82.7% (SD = 4.6) of baits were removed within 1 wk of placement. Given raccoon population density, estimates of bait removal and sachet condition, and assuming 22.9% nontarget take, the baiting density of 75/km2 yielded an average of 3.3 baits consumed, with the sachet perforated, per raccoon.
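
    The line-transect density estimate described above can be sketched with a conventional distance-sampling calculation. This is a minimal illustration, not the Distance 3.5 software itself: it assumes a half-normal detection function and illustrative parameter values (the sigma, truncation distance, and detection count below are hypothetical, not taken from the study).

```python
import math

def halfnormal_esw(sigma, w, steps=20000):
    """Effective strip half-width: midpoint-rule integral of the half-normal
    detection function g(d) = exp(-d^2 / (2 sigma^2)) over d in [0, w]."""
    h = w / steps
    return h * sum(math.exp(-((i + 0.5) * h) ** 2 / (2.0 * sigma ** 2))
                   for i in range(steps))

def line_transect_density(n, transect_length_m, sigma_m, w_m):
    """Animals per km^2 from n detections along a line transect of the given
    length, with detections truncated at perpendicular distance w_m (metres)."""
    esw = halfnormal_esw(sigma_m, w_m)            # metres
    d_per_m2 = n / (2.0 * esw * transect_length_m)
    return d_per_m2 * 1.0e6                        # convert m^-2 -> km^-2

# Hypothetical example: 40 detections along 19.3 km, sigma = 30 m, w = 100 m
density = line_transect_density(40, 19300.0, 30.0, 100.0)
```

    With perfect detectability (very large sigma) the effective strip half-width approaches w and the estimator reduces to the strip-census formula n / (2wL).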

  8. Estimation of CO2 diffusion coefficient at 0-10 cm depth in undisturbed and tilled soils

    USDA-ARS?s Scientific Manuscript database

    Diffusion coefficients (D) of CO2 at 0-10 cm layers in undisturbed and tilled soil conditions were estimated using the Penman, Millington-Quirk, Ridgwell et al. (1999), Troeh et al., and Moldrup et al. models. Soil bulk density and volumetric soil water content (θv) at 0-10 cm were measured on April...

  9. Anopheles atroparvus density modeling using MODIS NDVI in a former malarious area in Portugal.

    PubMed

    Lourenço, Pedro M; Sousa, Carla A; Seixas, Júlia; Lopes, Pedro; Novo, Maria T; Almeida, A Paulo G

    2011-12-01

    Malaria is dependent on environmental factors and is considered potentially re-emerging in temperate regions. Remote sensing data have been used successfully for monitoring environmental conditions that influence the patterns of such arthropod vector-borne diseases. Anopheles atroparvus density data were collected from 2002 to 2005, on a bimonthly basis, at three sites in a former malarial area in Southern Portugal. The development of the Remote Vector Model (RVM) was based upon two main variables: temperature and the Normalized Difference Vegetation Index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra satellite. Temperature influences the mosquito life cycle and affects its intra-annual prevalence, and MODIS NDVI was used as a proxy for suitable habitat conditions. Mosquito data were used for calibration and validation of the model. For areas with high mosquito density, the model validation demonstrated a Pearson correlation of 0.68 (p<0.05) and a modelling efficiency/Nash-Sutcliffe of 0.44, representing the model's ability to predict intra- and inter-annual vector density trends. RVM estimates the density of the former malarial vector An. atroparvus as a function of temperature and of MODIS NDVI. RVM is a satellite data-based assimilation algorithm that uses temperature fields to predict the intra- and inter-annual densities of this mosquito species using MODIS NDVI. RVM is a relevant tool for vector density estimation, contributing to the risk assessment of transmission of mosquito-borne diseases, and can be part of early warning systems and contingency plans supporting the decision-making processes of relevant authorities. © 2011 The Society for Vector Ecology.

  10. Body density and diving gas volume of the northern bottlenose whale (Hyperoodon ampullatus).

    PubMed

    Miller, Patrick; Narazaki, Tomoko; Isojunno, Saana; Aoki, Kagari; Smout, Sophie; Sato, Katsufumi

    2016-08-15

    Diving lung volume and tissue density, reflecting lipid store volume, are important physiological parameters that have only been estimated for a few breath-hold diving species. We fitted 12 northern bottlenose whales with data loggers that recorded depth, 3-axis acceleration and speed either with a fly-wheel or from change of depth corrected by pitch angle. We fitted measured values of the change in speed during 5 s descent and ascent glides to a hydrodynamic model of drag and buoyancy forces using a Bayesian estimation framework. The resulting estimate of diving gas volume was 27.4±4.2 (95% credible interval, CI) ml kg−1, closely matching the measured lung capacity of the species. Dive-by-dive variation in gas volume did not correlate with dive depth or duration. Estimated body densities of individuals ranged from 1028.4 to 1033.9 kg m−3 at the sea surface, indicating overall negative tissue buoyancy of this species in seawater. Body density estimates were highly precise with ±95% CI ranging from 0.1 to 0.4 kg m−3, which would equate to a precision of <0.5% of lipid content based upon extrapolation from the elephant seal. Six whales tagged near Jan Mayen (Norway, 71°N) had lower body density and were closer to neutral buoyancy than six whales tagged in the Gully (Nova Scotia, Canada, 44°N), a difference that was consistent with the amount of gliding observed during ascent versus descent phases in these animals. Implementation of this approach using longer-duration tags could be used to track longitudinal changes in body density and lipid store body condition of free-ranging cetaceans. © 2016. Published by The Company of Biologists Ltd.
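
    The glide model fitted above can be sketched in simplified form. This is a minimal illustration under stated assumptions, not the authors' Bayesian framework: gas buoyancy and depth compression are ignored, the lumped drag coefficient and the density values below are hypothetical, and only the tissue-buoyancy term is kept.

```python
import math

def glide_accel(v, pitch_rad, body_density,
                water_density=1027.0, drag_coef_term=2.0e-3):
    """Along-path acceleration (m s^-2) during a glide, from a simplified
    drag-plus-tissue-buoyancy balance:
        dv/dt = -k * v * |v| + g * (rho_w / rho_b - 1) * sin(pitch)
    where v is speed along the path, pitch > 0 means ascending, rho_b is
    whole-body tissue density, and k lumps 0.5 * rho_w * Cd * A / m."""
    g = 9.81
    drag = -drag_coef_term * v * abs(v)
    buoy = g * (water_density / body_density - 1.0) * math.sin(pitch_rad)
    return drag + buoy
```

    Fitting observed speed changes during steep glides to this balance is what lets body density be inferred: a whale denser than seawater gains speed faster on descent glides and loses it faster on ascent glides, all else equal.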

  11. Body density and diving gas volume of the northern bottlenose whale (Hyperoodon ampullatus)

    PubMed Central

    Miller, Patrick; Narazaki, Tomoko; Isojunno, Saana; Aoki, Kagari; Smout, Sophie; Sato, Katsufumi

    2016-01-01

    ABSTRACT Diving lung volume and tissue density, reflecting lipid store volume, are important physiological parameters that have only been estimated for a few breath-hold diving species. We fitted 12 northern bottlenose whales with data loggers that recorded depth, 3-axis acceleration and speed either with a fly-wheel or from change of depth corrected by pitch angle. We fitted measured values of the change in speed during 5 s descent and ascent glides to a hydrodynamic model of drag and buoyancy forces using a Bayesian estimation framework. The resulting estimate of diving gas volume was 27.4±4.2 (95% credible interval, CI) ml kg−1, closely matching the measured lung capacity of the species. Dive-by-dive variation in gas volume did not correlate with dive depth or duration. Estimated body densities of individuals ranged from 1028.4 to 1033.9 kg m−3 at the sea surface, indicating overall negative tissue buoyancy of this species in seawater. Body density estimates were highly precise with ±95% CI ranging from 0.1 to 0.4 kg m−3, which would equate to a precision of <0.5% of lipid content based upon extrapolation from the elephant seal. Six whales tagged near Jan Mayen (Norway, 71°N) had lower body density and were closer to neutral buoyancy than six whales tagged in the Gully (Nova Scotia, Canada, 44°N), a difference that was consistent with the amount of gliding observed during ascent versus descent phases in these animals. Implementation of this approach using longer-duration tags could be used to track longitudinal changes in body density and lipid store body condition of free-ranging cetaceans. PMID:27296044

  12. A hierarchical model for spatial capture-recapture data

    USGS Publications Warehouse

    Royle, J. Andrew; Young, K.V.

    2008-01-01

    Estimating density is a fundamental objective of many animal population studies. Application of methods for estimating population size from ostensibly closed populations is widespread, but ineffective for estimating absolute density because most populations are subject to short-term movements or so-called temporary emigration. This phenomenon invalidates the resulting estimates because the effective sample area is unknown. A number of methods involving the adjustment of estimates based on heuristic considerations are in widespread use. In this paper, a hierarchical model of spatially indexed capture-recapture data is proposed for sampling based on area searches of spatial sample units subject to uniform sampling intensity. The hierarchical model contains explicit models for the distribution of individuals and their movements, in addition to an observation model that is conditional on the location of individuals during sampling. Bayesian analysis of the hierarchical model is achieved by the use of data augmentation, which allows for a straightforward implementation in the freely available software WinBUGS. We present results of a simulation study that was carried out to evaluate the operating characteristics of the Bayesian estimator under variable densities and movement patterns of individuals. An application of the model is presented for survey data on the flat-tailed horned lizard (Phrynosoma mcallii) in Arizona, USA.
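
    The observation model conditional on individual locations can be sketched with the standard spatial capture-recapture encounter function. This is a generic illustration of that model class, not the paper's WinBUGS implementation, and the baseline rate and scale parameter below are hypothetical.

```python
import math

def scr_detection_prob(activity_center, trap, p0=0.3, sigma=1.0):
    """Half-normal spatial capture-recapture encounter model:
    p(capture at trap | activity center s) = p0 * exp(-d(s, trap)^2 / (2 sigma^2)),
    so detection decays with distance between an individual's activity
    center s and the sample unit (trap)."""
    d2 = ((activity_center[0] - trap[0]) ** 2
          + (activity_center[1] - trap[1]) ** 2)
    return p0 * math.exp(-d2 / (2.0 * sigma ** 2))
```

    Because detection is modeled as a function of each individual's latent location, the effective sample area is estimated rather than assumed, which is how this model class avoids the temporary-emigration bias described in the abstract.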

  13. The use of copulas to practical estimation of multivariate stochastic differential equation mixed effects models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rupšys, P.

    A system of stochastic differential equations (SDE) with mixed-effects parameters and a multivariate normal copula density function was used to develop a tree height model for Scots pine trees in Lithuania. A two-step maximum likelihood parameter estimation method is used, and computational guidelines are given. After fitting the conditional probability density functions to outside-bark diameter at breast height and total tree height, a bivariate normal copula distribution model was constructed. Predictions from the mixed-effects SDE tree height model calculated during this research were compared with regression tree height equations. The results are implemented in the symbolic computational language MAPLE.
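
    The bivariate normal (Gaussian) copula at the heart of this construction has a closed-form density. The sketch below shows that standard formula in Python rather than the paper's MAPLE implementation: c(u, v; ρ) is the ratio of the bivariate normal density to the product of its marginals, evaluated at the normal quantiles of u and v.

```python
import math
from statistics import NormalDist

def gaussian_copula_density(u, v, rho):
    """Density of the bivariate Gaussian copula with correlation rho,
    evaluated at (u, v) in (0, 1)^2:
        c(u, v) = phi2(x, y; rho) / (phi(x) * phi(y)),  x = Phi^-1(u), y = Phi^-1(v)."""
    nd = NormalDist()
    x, y = nd.inv_cdf(u), nd.inv_cdf(v)
    r2 = rho * rho
    return (1.0 / math.sqrt(1.0 - r2)) * math.exp(
        -(r2 * (x * x + y * y) - 2.0 * rho * x * y) / (2.0 * (1.0 - r2)))
```

    Multiplying this copula density by the two fitted marginal densities (here, for diameter at breast height and total height) yields the joint bivariate density, which is what makes the two-step estimation practical: marginals first, dependence second.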

  14. Measurements of surface-pressure fluctuations on the XB-70 airplane at local Mach numbers up to 2.45

    NASA Technical Reports Server (NTRS)

    Lewis, T. L.; Dods, J. B., Jr.; Hanly, R. D.

    1973-01-01

    Measurements of surface-pressure fluctuations were made at two locations on the XB-70 airplane for nine flight-test conditions encompassing a local Mach number range from 0.35 to 2.45. These measurements are presented in the form of estimated power spectral densities, coherence functions, and narrow-band-convection velocities. The estimated power spectral densities compared favorably with wind-tunnel data obtained by other experimenters. The coherence function and convection velocity data supported conclusions by other experimenters that low-frequency surface-pressure fluctuations consist of small-scale turbulence components with low convection velocity.

  15. Estimation of winter wheat canopy nitrogen density at different growth stages based on Multi-LUT approach

    NASA Astrophysics Data System (ADS)

    Li, Zhenhai; Li, Na; Li, Zhenhong; Wang, Jianwen; Liu, Chang

    2017-10-01

    Rapid real-time monitoring of wheat nitrogen (N) status is crucial for precision N management during wheat growth. In this study, a Multi Lookup Table (Multi-LUT) approach, based on N-PROSAIL model parameter settings at different growth stages, was constructed to estimate canopy N density (CND) in winter wheat. The results showed that the estimated CND was in line with measured CND, with a determination coefficient (R2) of 0.80 and a corresponding root mean square error (RMSE) of 1.16 g m-2. Computation time for a single sample was only 6 ms on a test machine with a CPU configuration of Intel(R) Core(TM) i5-2430 @2.40GHz quad-core. These results confirmed the potential of the Multi-LUT approach for CND retrieval in winter wheat at different growth stages and under variable climatic conditions.

  16. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    NASA Astrophysics Data System (ADS)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.
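
    The Weibull envelope used for the maximum pollen potential can be sketched directly from the standard Weibull probability density function. This is a generic illustration, not the authors' calibrated model; the shape and scale values below are hypothetical placeholders for fitted seasonal parameters.

```python
import math

def weibull_pdf(t, shape, scale):
    """Weibull probability density f(t) = (k/s) * (t/s)^(k-1) * exp(-(t/s)^k),
    used here as a seasonal envelope for the maximum potential airborne
    pollen on day t of the season (shape > 1 gives a unimodal peak)."""
    if t < 0.0:
        return 0.0
    z = t / scale
    return (shape / scale) * z ** (shape - 1.0) * math.exp(-z ** shape)

# Hypothetical season: peak concentration potential around day ~24
envelope = [weibull_pdf(day, 2.5, 30.0) for day in range(0, 90)]
```

    Scaling this envelope by a site-specific seasonal total, and then modulating daily values with the weather-driven regression described above, reproduces the structure of the integrated scheme: biology sets the ceiling, weather sets the day-to-day realization.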

  17. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function.

    PubMed

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2017-02-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.

  18. Effects of wildlife forestry on abundance of breeding birds in bottomland hardwood forests of Louisiana

    USGS Publications Warehouse

    Norris, Jennifer L.; Chamberlain, Michael J.; Twedt, Daniel J.

    2009-01-01

    Effects of silvicultural activities on birds are of increasing interest because of documented national declines in breeding bird populations for some species and the potential that these declines are in part due to changes in forest habitat. Silviculturally induced disturbances have been advocated as a means to achieve suitable forest conditions for priority wildlife species in bottomland hardwood forests. We evaluated how silvicultural activities on conservation lands in bottomland hardwood forests of Louisiana, USA, influenced species-specific densities of breeding birds. Our data were from independent studies, which used standardized point-count surveys for breeding birds in 124 bottomland hardwood forest stands on 12 management areas. We used Program DISTANCE 5.0, Release 2.0 (Thomas et al. 2006) to estimate density for 43 species with > 50 detections. For 36 of those species we compared density estimates among harvest regimes (individual selection, group selection, extensive harvest, and no harvest). We observed 10 species with similar densities in those harvest regimes compared with densities in stands not harvested. However, we observed 10 species that were negatively impacted by harvest with greater densities in stands not harvested, 9 species with greater densities in individual selection stands, 4 species with greater densities in group selection stands, and 4 species with greater densities in stands receiving an extensive harvest (e.g., > 40% canopy removal). Differences in intensity of harvest influenced densities of breeding birds. Moreover, community-wide avian conservation values of stands subjected to individual and group selection, and stands not harvested, were similar to each other and greater than that of stands subjected to extensive harvest that removed > 40% canopy cover. 
These results have implications for managers estimating breeding bird populations, in addition to predicting changes in bird communities as a result of prescribed and future forest management practices.

  19. Structural parameters of young star clusters: fractal analysis

    NASA Astrophysics Data System (ADS)

    Hetem, A.

    2017-07-01

    A unified view of star formation in the Universe demands detailed and in-depth studies of young star clusters. This work is related to our previous study of fractal statistics estimated for a sample of young stellar clusters (Gregorio-Hetem et al. 2015, MNRAS 448, 2504). The structural properties can lead to significant conclusions about the early stages of cluster formation: 1) virial conditions can be used to distinguish warm collapse; 2) bound or unbound behaviour can lead to conclusions about expansion; and 3) fractal statistics are correlated with dynamical evolution and age. The error-bar estimation technique most used in the literature is to adopt inferential methods (such as bootstrap) to estimate deviation and variance, which are valid only for an artificially generated cluster. In this paper, we expanded the number of studied clusters in order to enhance the investigation of cluster properties and dynamical evolution. The structural parameters were compared with fractal statistics and reveal that the clusters' radial density profiles show a tendency for the mean separation of the stars to increase with the average surface density. The sample can be divided into two groups showing different dynamical behaviour, but they have the same dynamical evolution, since the entire sample was revealed to consist of expanding objects in which the substructures do not seem to have been completely erased. These results are in agreement with simulations adopting low surface densities and supervirial conditions.

  20. A new empirical model to estimate hourly diffuse photosynthetic photon flux density

    NASA Astrophysics Data System (ADS)

    Foyo-Moreno, I.; Alados, I.; Alados-Arboledas, L.

    2018-05-01

    Knowledge of the photosynthetic photon flux density (Qp) is critical in different applications dealing with climate change, plant physiology, biomass production, and natural illumination in greenhouses. This is particularly true regarding its diffuse component (Qpd), which can enhance canopy light-use efficiency and thereby boost carbon uptake. Diffuse photosynthetic photon flux density is therefore a key driving factor of ecosystem-productivity models. In this work, we propose a model to estimate this component, using a previous model to calculate Qp and then divide it into its components. We used measurements of global solar radiation (Rs) in urban Granada (southern Spain) to study relationships between the ratio Qpd/Rs and different parameters accounting for solar position, water-vapour absorption, and sky conditions. The model's performance was validated with experimental measurements from sites with varied climatic conditions. The model provides acceptable results, with the mean bias error varying between −0.3% and −8.8% and the root mean square error between 9.6% and 20.4%. Direct measurements of this flux, and especially of its diffuse component, are very scarce, so modelled estimates are needed. We propose a new parameterization to estimate this component using only measured solar global irradiance, which facilitates the construction of long-term PAR data series in regions where continuous PAR measurements are not yet performed.

  1. Atmospheric densities derived from CHAMP/STAR accelerometer observations

    NASA Astrophysics Data System (ADS)

    Bruinsma, S.; Tamagnan, D.; Biancale, R.

    2004-03-01

    The satellite CHAMP carries the accelerometer STAR in its payload, and thanks to the GPS and SLR tracking systems, accurate orbit positions can be computed. Total atmospheric density values can be retrieved from the STAR measurements, with an absolute uncertainty of 10-15%, under the condition that an accurate radiative force model, satellite macro-model, and STAR instrumental calibration parameters are applied, and that the upper-atmosphere winds are less than 150 m/s. The STAR calibration parameters (i.e. a bias and a scale factor) of the tangential acceleration were accurately determined using an iterative method, which required the estimation of the gravity field coefficients in several iterations, the first result of which was the EIGEN-1S (Geophys. Res. Lett. 29 (14) (2002) 10.1029) gravity field solution. The procedure to derive atmospheric density values is as follows: (1) a reduced-dynamic CHAMP orbit is computed, the positions of which are used as pseudo-observations, for reference purposes; (2) a dynamic CHAMP orbit is fitted to the pseudo-observations using calibrated STAR measurements, which are saved in a data file containing all necessary information to derive density values; (3) the data file is used to compute density values at each orbit integration step, for which accurate terrestrial coordinates are available. This procedure was applied to 415 days of data over a total period of 21 months, yielding 1.2 million useful observations. The model predictions of DTM-2000 (EGS XXV General Assembly, Nice, France), DTM-94 (J. Geod. 72 (1998) 161) and MSIS-86 (J. Geophys. Res. 92 (1987) 4649) were evaluated by analysing the density ratios (i.e. "observed" to "computed" ratio) globally, and as functions of solar activity, geographical position and season. The global mean of the density ratios showed that the models underestimate density by 10-20%, with an rms of 16-20%.
The binning as a function of local time revealed that the diurnal and semi-diurnal components are too strong in the DTM models, while all three models model the latitudinal gradient inaccurately. Using DTM-2000 as a priori, certain model coefficients were re-estimated using the STAR-derived densities, yielding the DTM-STAR test model. The mean and rms of the global density ratios of this preliminary model are 1.00 and 15%, respectively, while the tidal and latitudinal modelling errors become small. This test model is only representative of high solar activity conditions, while the seasonal effect is probably not estimated accurately due to correlation with the solar activity effect. At least one more year of data is required to separate the seasonal effect from the solar activity effect, and data taken under low solar activity conditions must also be assimilated to construct a model representative under all circumstances.
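
    The core of the density retrieval is inverting the aerodynamic drag equation: the calibrated accelerometer gives the along-track drag acceleration, and density follows from the satellite's mass, drag coefficient, cross-sectional area, and velocity relative to the atmosphere. The sketch below shows that inversion in simplified form; the numerical values are illustrative, not CHAMP's actual macro-model parameters.

```python
def density_from_drag(mass_kg, drag_accel, cd, area_m2, v_rel):
    """Invert the drag equation a_drag = rho * Cd * A * v_rel^2 / (2 * m)
    for the total atmospheric mass density rho (kg m^-3)."""
    return 2.0 * mass_kg * abs(drag_accel) / (cd * area_m2 * v_rel ** 2)

# Illustrative round trip with plausible (hypothetical) LEO values:
rho_true = 1.0e-12                                   # kg m^-3
a_drag = rho_true * 2.2 * 1.0 * 7600.0 ** 2 / (2.0 * 500.0)
rho_est = density_from_drag(500.0, a_drag, 2.2, 1.0, 7600.0)
```

    In practice the drag acceleration must first be isolated from the measured non-gravitational acceleration by removing radiative forces and applying the bias and scale-factor calibration, which is why the abstract stresses the radiative force model and the iterative calibration procedure.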

  2. Optimized Read/Write Conditions of PHB Memory,

    DTIC Science & Technology

    PHB memory has been a good candidate for a future ultra-high-density memory for the past ten years. This PHB memory is considered to realize the...diameter recording spot. But not so many researchers are working on PHB memory compared to the number of researchers wrestling with the realization of higher...possible in such high-density recording in a 1-micron diameter spot. Therefore, one of the most important research topics on PHB memory is the estimation of

  3. Broadcasting but not receiving: density dependence considerations for SETI signals

    NASA Astrophysics Data System (ADS)

    Smith, Reginald D.

    2009-04-01

    This paper develops a detailed quantitative model which uses the Drake equation and an assumption of an average maximum radio broadcasting distance by a communicative civilization. On this basis, it estimates the minimum civilization density for contact between two civilizations to be probable in a given volume of space under certain conditions, the amount of time it would take for a first contact, and whether reciprocal contact is possible.
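
    The minimum-density condition can be sketched with a simple expected-neighbour calculation. This is a generic illustration of the idea, not the paper's full model: if civilizations are distributed with spatial density n and each broadcasts out to radius R, then contact is expected when at least one neighbour falls inside the broadcast sphere, i.e. when n * (4/3) * pi * R^3 >= 1.

```python
import math

def min_contact_density(broadcast_radius):
    """Civilization density (civilizations per unit volume) at which exactly
    one neighbour is expected inside a broadcast sphere of the given radius:
    solve n * (4/3) * pi * R^3 = 1 for n."""
    return 3.0 / (4.0 * math.pi * broadcast_radius ** 3)

# e.g. with a (hypothetical) 1000-light-year broadcast range,
# the threshold density is ~2.4e-10 civilizations per cubic light year
threshold = min_contact_density(1000.0)
```

    The cubic dependence on R is the key qualitative point: doubling the broadcast range lowers the required civilization density by a factor of eight.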

  4. Upper limit set by causality on the tidal deformability of a neutron star

    NASA Astrophysics Data System (ADS)

    Van Oeveren, Eric D.; Friedman, John L.

    2017-04-01

    A principal goal of gravitational-wave astronomy is to constrain the neutron star equation of state (EOS) by measuring the tidal deformability of neutron stars. The tidally induced departure of the waveform from that of a point particle [or a spinless binary black hole (BBH)] increases with the stiffness of the EOS. We show that causality (the requirement that the speed of sound be less than the speed of light for a perfect fluid satisfying a one-parameter equation of state) places an upper bound on tidal deformability as a function of mass. Like the upper mass limit, the limit on deformability is obtained by using an EOS with vsound=c for high densities and matching to a low density (candidate) EOS at a matching density of order nuclear saturation density. We use these results and those of Lackey et al. [Phys. Rev. D 89, 043009 (2014), 10.1103/PhysRevD.89.043009] to estimate the resulting upper limit on the gravitational-wave phase shift of a black hole-neutron star (BHNS) binary relative to a BBH. Even for assumptions weak enough to allow a maximum mass of 4 M⊙ (a match at nuclear saturation density to an unusually stiff low-density candidate EOS), the upper limit on dimensionless tidal deformability is stringent. It leads to a still more stringent estimated upper limit on the maximum tidally induced phase shift prior to merger. We comment in an appendix on the relation among causality, the condition vsound

  5. Measurement of operator workload in an information processing task

    NASA Technical Reports Server (NTRS)

    Jenney, L. L.; Older, H. J.; Cameron, B. J.

    1972-01-01

    This was an experimental study to develop an improved methodology for measuring workload in an information processing task and to assess the effects of shift length and communication density (rate of information flow) on the ability to process and classify verbal messages. Each of twelve subjects was exposed to combinations of three shift lengths and two communication densities in a counterbalanced, repeated-measurements experimental design. Results indicated no systematic variation in task performance measures or in other dependent measures as a function of shift length or communication density. This is attributed to the absence of a secondary loading task, an insufficiently taxing work schedule, and the lack of psychological stress. Subjective magnitude estimates of workload showed fatigue (and to a lesser degree, tension) to be a power function of shift length. Estimates of task difficulty and fatigue were initially lower but increased more sharply over time under low-density than under high-density conditions. An interpretation of findings and recommendations for future research are included. This research has major implications for human workload problems in the information processing of air traffic control verbal data.

  6. Density and Habitat Relationships of the Endemic White Mountain Fritillary (Boloria chariclea montinus) (Lepidoptera: Nymphalidae).

    PubMed

    McFarland, Kent P; Lloyd, John D; Hardy, Spencer P

    2017-06-04

    We conducted point counts in the alpine zone of the Presidential Range of the White Mountains, New Hampshire, USA, to estimate the distribution and density of the rare endemic White Mountain Fritillary (Boloria chariclea montinus). Incidence of occurrence and density of the endemic White Mountain Fritillary during surveys in 2012 and 2013 were greatest in the herbaceous-snowbank plant community. Densities at points in the heath-shrub-rush plant community were lower, but because this plant community is more widespread in the alpine zone, it likely supports the bulk of adult fritillaries. White Mountain Fritillary used cushion-tussock, the other alpine plant community suspected of providing habitat, only sparingly. Detectability of White Mountain Fritillaries varied as a consequence of weather conditions during the survey and among observers, suggesting that raw counts yield biased estimates of density and abundance. Point counts, commonly used to study and monitor populations of birds, were an effective means of sampling White Mountain Fritillary in the alpine environment where patches of habitat are small, irregularly shaped, and widely spaced, rendering line-transect methods inefficient and difficult to implement.

  7. Dynamic characterization of external and internal mass transport in heterotrophic biofilms from microsensors measurements.

    PubMed

    Guimerà, Xavier; Dorado, Antonio David; Bonsfills, Anna; Gabriel, Gemma; Gabriel, David; Gamisans, Xavier

    2016-10-01

    Knowledge of mass transport mechanisms in biofilm-based technologies such as biofilters is essential to improve bioreactor performance by preventing mass transport limitation. External and internal mass transport in biofilms was characterized in heterotrophic biofilms grown on a flat plate bioreactor. Mass transport resistance through the liquid-biofilm interphase and diffusion within biofilms were quantified by in situ measurements using microsensors with a high spatial resolution (<50 μm). Experimental conditions were selected using a mathematical procedure based on the Fisher Information Matrix to increase the reliability of experimental data and minimize confidence intervals of estimated mass transport coefficients. The sensitivity of external and internal mass transport resistances to flow conditions within the range of typical fluid velocities over biofilms (Reynolds numbers between 0.5 and 7) was assessed. Estimated external mass transfer coefficients at different liquid phase flow velocities showed discrepancies with studies considering laminar conditions in the diffusive boundary layer near the liquid-biofilm interphase. The correlation of effective diffusivity with flow velocities showed that the heterogeneous structure of biofilms defines the transport mechanisms inside biofilms. Internal mass transport was driven by diffusion through cell clusters and aggregates at Re below 2.8. Conversely, mass transport was driven by advection within pores, voids and water channels at Re above 5.6. Between both flow velocities, mass transport occurred by a combination of advection and diffusion. Effective diffusivities estimated at different biofilm densities showed a linear increase of mass transport resistance due to a porosity decrease up to biofilm densities of 50 g VSS·L−1. Mass transport was strongly limited at higher biofilm densities.
Internal mass transport results were used to propose an empirical correlation to assess the effective diffusivity within biofilms considering the influence of hydrodynamics and biofilm density. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Assimilation of thermospheric measurements for ionosphere-thermosphere state estimation

    NASA Astrophysics Data System (ADS)

    Miladinovich, Daniel S.; Datta-Barua, Seebany; Bust, Gary S.; Makela, Jonathan J.

    2016-12-01

    We develop a method that uses data assimilation to estimate ionospheric-thermospheric (IT) states during midlatitude nighttime storm conditions. The algorithm Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE) uses time-varying electron densities in the F region, derived primarily from total electron content data, to estimate two drivers of the IT: neutral winds and electric potential. A Kalman filter is used to update background models based on ingested plasma densities and neutral wind measurements. This is the first time a Kalman filtering technique is used with the EMPIRE algorithm and the first time neutral wind measurements from 630.0 nm Fabry-Perot interferometers (FPIs) are ingested to improve estimates of storm time ion drifts and neutral winds. The effects of assimilating remotely sensed neutral winds from FPI observations are studied by comparing results of ingesting: electron densities (N) only, N plus half the measurements from a single FPI, and then N plus all of the FPI data. While estimates of ion drifts and neutral winds based on N give estimates similar to the background models, this study's results show that ingestion of the FPI data can significantly change neutral wind and ion drift estimation away from background models. In particular, once neutral winds are ingested, estimated neutral winds agree more with validation wind data, and estimated ion drifts in the magnetic field-parallel direction are more sensitive to ingestion than the field-perpendicular zonal and meridional directions. Also, data assimilation with FPI measurements helps provide insight into the effects of contamination on 630.0 nm emissions experienced during geomagnetic storms.

  9. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.

  10. SU-G-JeP2-02: A Unifying Multi-Atlas Approach to Electron Density Mapping Using Multi-Parametric MRI for Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, S; Tianjin University, Tianjin; Hara, W

    Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
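
    The voxel-wise fusion step in this record (combining an intensity-based and a location-based estimate into a posterior mean) can be illustrated with a minimal sketch. Assuming, purely for illustration, that both conditionals are approximated as Gaussians, their product is again Gaussian with a precision-weighted mean; the HU numbers below are hypothetical, not from the study:

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Product of two Gaussian densities (up to normalization):
    combined precision is the sum of precisions, and the mean is
    the precision-weighted average of the two means."""
    prec = 1.0 / var1 + 1.0 / var2
    var = 1.0 / prec
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# intensity-based estimate vs. atlas/location-based estimate (hypothetical HU values)
mu, var = fuse_gaussians(mu1=250.0, var1=100.0, mu2=350.0, var2=400.0)
```

The tighter (lower-variance) source dominates the fused mean, which is the behavior the record's unifying posterior relies on.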

  11. Shell stability and conditions analyzed using a new method of extracting shell areal density maps from spectrally resolved images of direct-drive inertial confinement fusion implosions

    DOE PAGES

    Johns, H. M.; Mancini, R. C.; Nagayama, T.; ...

    2016-01-25

    In warm-target direct-drive inertial confinement fusion implosion experiments performed at the OMEGA laser facility, plastic micro-balloons doped with a titanium tracer layer in the shell and filled with deuterium gas were imploded using a low-adiabat shaped laser pulse. Continuum radiation emitted in the core is transmitted through the tracer layer, and the resulting spectrum is recorded with a gated multi-monochromatic x-ray imager (MMI). Titanium K-shell line absorption spectra observed in the data are due to transitions in L-shell titanium ions driven by the backlighting continuum. The MMI data consist of an array of spectrally resolved images of the implosion. These 2-D space-resolved titanium spectral features constrain the plasma conditions and areal density of the titanium-doped region of the shell. The MMI data were processed to obtain narrow-band images and space-resolved spectra of titanium spectral features. Shell areal density maps, ρL(x,y), extracted with a new method based on both narrow-band images and space-resolved spectra are confirmed to be consistent within uncertainties. We report plasma conditions in the titanium-doped region of electron temperature Te = 400 ± 28 eV, electron number density Ne = (8.5 ± 2.5) × 10^24 cm^-3, and average areal density = 86 ± 7 mg/cm^2. Fourier analysis of areal density maps reveals shell modulations caused by hydrodynamic instability growth near the fuel-shell interface in the deceleration phase. We observe significant structure in modes l = 2-9, dominated by l = 2. We extract a target breakup fraction of 7.1 ± 1.5% from our Fourier analysis. Furthermore, a new method for estimating mix width is evaluated against the existing literature and our target breakup fraction. We estimate a mix width of 10.5 ± 1 μm.

  12. AgRISTARS: Foreign commodity production forecasting. The 1980 US/Canada wheat and barley exploratory experiment

    NASA Technical Reports Server (NTRS)

    Payne, R. W. (Principal Investigator)

    1981-01-01

    The crop identification procedures used were designed for spring small grains and are conducive to automation. The performance of the machine processing techniques shows a significant improvement over previously evaluated technology; however, the crop calendars require additional development and refinement prior to integration into automated area estimation technology. The integrated technology is capable of producing accurate and consistent spring small grains proportion estimates. Barley proportion estimation technology was not satisfactorily evaluated because LANDSAT sample segment data were not available for the high-density barley of primary importance in foreign regions, and the low-density segments examined were not judged to give indicative or unequivocal results. Generally, the spring small grains technology is ready for evaluation in a pilot experiment focusing on sensitivity analysis across a variety of agricultural and meteorological conditions representative of the global environment.

  13. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.
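
    The conditioning capability described above can be sketched from first principles. This is the standard Gaussian-mixture conditioning formula, not the XDGMM API itself, and all parameter values below are made up: conditioning a 2-D mixture over (X, Y) on X = x reweights each component by its marginal likelihood at x and shifts each component's Y mean and variance by the usual Gaussian-conditional rules.

```python
import numpy as np

def norm_pdf(x, mu, var):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def condition_gmm(weights, means, covs, x):
    """Condition a 2-D Gaussian mixture over (X, Y) on X = x.
    means: (K, 2); covs: (K, 2, 2). Returns the mixture over Y:
    new weights, component means, and component variances."""
    new_w, new_mu, new_var = [], [], []
    for w, m, C in zip(weights, means, covs):
        mx, my = m
        sxx, sxy, syy = C[0, 0], C[0, 1], C[1, 1]
        new_w.append(w * norm_pdf(x, mx, sxx))          # reweight by p_k(x)
        new_mu.append(my + sxy / sxx * (x - mx))        # conditional mean
        new_var.append(syy - sxy ** 2 / sxx)            # conditional variance
    new_w = np.array(new_w)
    return new_w / new_w.sum(), np.array(new_mu), np.array(new_var)

# made-up two-component mixture
weights = [0.5, 0.5]
means = np.array([[0.0, 0.0], [2.0, 3.0]])
covs = np.array([[[1.0, 0.5], [0.5, 1.0]],
                 [[1.0, 0.0], [0.0, 1.0]]])
w, mu, var = condition_gmm(weights, means, covs, x=0.0)
```

Conditioning at x = 0 upweights the component centered there and leaves a mixture over Y whose means and variances follow directly from each component's covariance.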

  14. Distribution and abundance of American eels in the White Oak River estuary, North Carolina

    USGS Publications Warehouse

    Hightower, J.E.; Nesnow, C.

    2006-01-01

    Apparent widespread declines in abundance of Anguilla rostrata (American eel) have reinforced the need for information regarding its life history and status. We used commercial eel pots and crab (peeler) pots to examine the distribution, condition, and abundance of American eels within the White Oak River estuary, NC, during summers of 2002-2003. Catch of American eels per overnight set was 0.35 (SE = 0.045) in 2002 and 0.49 (SE = 0.044) in 2003. There was not a significant linear relationship between catch per set and depth in 2002 (P = 0.31, depth range 0.9-3.4 m) or 2003 (P = 0.18, depth range 0.6-3.4 m). American eels from the White Oak River were in good condition, based on the slope of a length-weight relationship (3.41) compared to the median slope (3.15) from other systems. Estimates of population density from grid sampling in 2003 (300 mm and larger: 4.0-13.8 per ha) were similar to estimates for the Hudson River estuary, but substantially less than estimates from other (smaller) systems including tidal creeks within estuaries. Density estimates from coastal waters can be used with harvest records to examine whether overfishing has contributed to the recent apparent declines in American eel abundance.
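
    The condition comparison above rests on the allometric length-weight relation W = aL^b, whose exponent b is the slope of a regression of log(W) on log(L). A minimal sketch with synthetic data: the lengths, intercept, and noise level below are hypothetical, and only the target slope value comes from the record.

```python
import numpy as np

rng = np.random.default_rng(0)
length_mm = rng.uniform(300, 600, 50)            # hypothetical eel lengths
true_b = 3.41                                    # slope reported in the record
# W = a * L^b with multiplicative lognormal noise (a and noise are made up)
weight_g = 1e-6 * length_mm ** true_b * np.exp(rng.normal(0, 0.05, 50))

# b is recovered as the slope of the log-log regression
b, log_a = np.polyfit(np.log(length_mm), np.log(weight_g), 1)
```

A slope above ~3 (isometric growth) indicates fish growing heavier per unit length, which is the basis for the "good condition" conclusion in the record.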

  15. Notes on the birth-death prior with fossil calibrations for Bayesian estimation of species divergence times.

    PubMed

    Dos Reis, Mario

    2016-07-19

    Constructing a multi-dimensional prior on the times of divergence (the node ages) of species in a phylogeny is not a trivial task, in particular if the prior density is the result of combining different sources of information, such as a speciation process with fossil calibration densities. Yang & Rannala (2006 Mol. Biol. Evol. 23, 212-226. (doi:10.1093/molbev/msj024)) laid out the general approach to combine the birth-death process with arbitrary fossil-based densities to construct a prior on divergence times. They achieved this by calculating the density of node ages without calibrations conditioned on the ages of the calibrated nodes. Here, I show that the conditional density obtained by Yang & Rannala is misspecified. The misspecified density can sometimes be quite strange-looking and can lead to unintentionally informative priors on node ages without fossil calibrations. I derive the correct density and provide a few illustrative examples. Calculation of the density involves a sum over a large set of labelled histories, and so obtaining the density in a computer program seems hard at the moment. A general algorithm that may provide a way forward is given. This article is part of the themed issue 'Dating species divergences using rocks and clocks'. © 2016 The Author(s).

  16. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchiamore » and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10 5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.« less
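
    The conditional-PDF capability described above can be sketched with a plain Gaussian KDE standing in for fastKDE (scipy's gaussian_kde here, not the fastKDE package, and the data are synthetic): estimate the joint and marginal densities and take their ratio, p(y|x) = p(x, y)/p(x).

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 2000)
y = 0.5 * x + rng.normal(0, 0.3, 2000)           # y depends linearly on x

kde_xy = gaussian_kde(np.vstack([x, y]))         # joint p(x, y)
kde_x = gaussian_kde(x)                          # marginal p(x)

def conditional_pdf(y_grid, x0):
    """p(y | x = x0) = p(x0, y) / p(x0), evaluated on a grid of y."""
    pts = np.vstack([np.full_like(y_grid, x0), y_grid])
    return kde_xy(pts) / kde_x(x0)

y_grid = np.linspace(-2, 2, 201)
p = conditional_pdf(y_grid, x0=1.0)              # should peak near y = 0.5
```

The resulting slice integrates to roughly one and is centered near the conditional mean, which is the kind of dependence detection the record describes.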

  17. Change-in-ratio density estimator for feral pigs is less biased than closed mark-recapture estimates

    USGS Publications Warehouse

    Hanson, L.B.; Grand, J.B.; Mitchell, M.S.; Jolley, D.B.; Sparklin, B.D.; Ditchkoff, S.S.

    2008-01-01

    Closed-population capture-mark-recapture (CMR) methods can produce biased density estimates for species with low or heterogeneous detection probabilities. In an attempt to address such biases, we developed a density-estimation method based on the change in ratio (CIR) of survival between two populations where survival, calculated using an open-population CMR model, is known to differ. We used our method to estimate density for a feral pig (Sus scrofa) population on Fort Benning, Georgia, USA. To assess its validity, we compared it to an estimate of the minimum density of pigs known to be alive and two estimates based on closed-population CMR models. Comparison of the density estimates revealed that the CIR estimator produced a density estimate with low precision that was reasonable with respect to minimum known density. By contrast, density point estimates using the closed-population CMR models were less than the minimum known density, consistent with biases created by low and heterogeneous capture probabilities for species like feral pigs that may occur in low density or are difficult to capture. Our CIR density estimator may be useful for tracking broad-scale, long-term changes in species, such as large cats, for which closed CMR models are unlikely to work. © CSIRO 2008.
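
    The record's estimator is a survival-based CIR variant, but the underlying change-in-ratio idea can be illustrated with the classical two-type estimator (all counts below are hypothetical): if removals shift the observed proportion of type-x animals from p1 to p2, the pre-removal abundance follows from the size of that shift.

```python
def cir_abundance(p1, p2, Rx, R):
    """Classical change-in-ratio estimator of pre-removal abundance.
    p1, p2: observed proportion of type-x animals before/after removal;
    Rx: number of type-x animals removed; R: total animals removed.
    N_hat = (Rx - R * p2) / (p1 - p2)."""
    return (Rx - R * p2) / (p1 - p2)

# hypothetical survey: type-x proportion drops from 0.6 to 0.4
# after removing 150 animals, 120 of which were type x
N1 = cir_abundance(p1=0.6, p2=0.4, Rx=120, R=150)
```

The estimator's precision degrades as p1 and p2 approach each other, which is consistent with the low precision the record reports.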

  18. AXIALLY ORIENTED SECTIONS OF NUMMULITIDS: A TOOL TO INTERPRET LARGER BENTHIC FORAMINIFERAL DEPOSITS

    PubMed Central

    Hohenegger, Johann; Briguglio, Antonino

    2015-01-01

    The “critical shear velocity” and “settling velocity” of foraminiferal shells are important parameters for determining hydrodynamic conditions during deposition of Nummulites banks. These can be estimated by determining the size, shape, and density of nummulitid shells examined in axial sections cut perpendicular to the bedding plane. Shell size and shape can be determined directly from the shell diameter and thickness, but density must be calculated indirectly from the thin section. Calculations using the half-tori method approximate shell densities by equalizing the chamber volume of each half whorl, based on the half whorl’s lumen area and its center of gravity. Results from this method yield the same lumen volumes produced empirically by micro-computed tomography. The derived hydrodynamic parameters help estimate the minimum flow velocities needed to entrain nummulitid tests and provide a potential tool to account for the nature of their accumulations. PMID:26166914

  19. AXIALLY ORIENTED SECTIONS OF NUMMULITIDS: A TOOL TO INTERPRET LARGER BENTHIC FORAMINIFERAL DEPOSITS.

    PubMed

    Hohenegger, Johann; Briguglio, Antonino

    2012-04-01

    The "critical shear velocity" and "settling velocity" of foraminiferal shells are important parameters for determining hydrodynamic conditions during deposition of Nummulites banks. These can be estimated by determining the size, shape, and density of nummulitid shells examined in axial sections cut perpendicular to the bedding plane. Shell size and shape can be determined directly from the shell diameter and thickness, but density must be calculated indirectly from the thin section. Calculations using the half-tori method approximate shell densities by equalizing the chamber volume of each half whorl, based on the half whorl's lumen area and its center of gravity. Results from this method yield the same lumen volumes produced empirically by micro-computed tomography. The derived hydrodynamic parameters help estimate the minimum flow velocities needed to entrain nummulitid tests and provide a potential tool to account for the nature of their accumulations.

  20. On the estimation of the current density in space plasmas: Multi- versus single-point techniques

    NASA Astrophysics Data System (ADS)

    Perri, Silvia; Valentini, Francesco; Sorriso-Valvo, Luca; Reda, Antonio; Malara, Francesco

    2017-06-01

    Thanks to multi-spacecraft missions, it has recently become possible to directly estimate the current density in space plasmas by using magnetic field time series from four satellites flying in a quasi-perfect tetrahedron configuration. The technique developed, commonly called the "curlometer", permits a good estimation of the current density when the magnetic field time series vary linearly in space. This approximation is generally valid for small spacecraft separations. The recent space missions Cluster and Magnetospheric Multiscale (MMS) have provided high-resolution measurements with inter-spacecraft separations down to 100 km and 10 km, respectively. The former scale corresponds to the proton gyroradius/ion skin depth in "typical" solar wind conditions, while the latter is at sub-proton scale. However, some works have highlighted an underestimation of the current density via the curlometer technique with respect to the current computed directly from the velocity distribution functions, measured at sub-proton-scale resolution with MMS. In this paper we explore the limits of the curlometer technique by studying synthetic data sets associated with a cluster of four artificial satellites allowed to fly in a static turbulent field, spanning a wide range of relative separations. This study tries to address the relative importance of measuring plasma moments at very high resolution from a single spacecraft, with respect to multi-spacecraft missions, for the current density evaluation.
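
    The linear-variation assumption behind the curlometer can be illustrated with a minimal single-time sketch (not the actual multi-spacecraft pipeline): fit B(r) = B0 + G·r to the four measurements by least squares and take J = (∇×B)/μ0. The positions and field values below are synthetic.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def current_from_tetrahedron(positions, B):
    """Estimate J = (curl B) / mu0 assuming B varies linearly in space.
    positions, B: (4, 3) arrays of spacecraft positions [m] and fields [T]."""
    # Solve B_i = B0 + G @ r_i for the 3x3 gradient G by least squares
    A = np.hstack([np.ones((4, 1)), positions])       # design matrix (4, 4)
    coef, *_ = np.linalg.lstsq(A, B, rcond=None)      # rows 1: = dB_j/dx_i
    G = coef[1:].T                                    # G[j, i] = dB_j / dx_i
    curl = np.array([G[2, 1] - G[1, 2],
                     G[0, 2] - G[2, 0],
                     G[1, 0] - G[0, 1]])
    return curl / MU0

# synthetic check: B = (0, c*x, 0) has curl (0, 0, c)
c = 1e-9  # field gradient [T/m], made up
r = np.array([[0., 0, 0], [100e3, 0, 0], [0, 100e3, 0], [0, 0, 100e3]])
B = np.array([[0., c * xi, 0] for xi, _, _ in r])
J = current_from_tetrahedron(r, B)
```

When the field varies nonlinearly between the spacecraft, this linear fit systematically misses part of the curl, which is the underestimation at sub-proton separations discussed above.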

  1. Assessing predation risks for small fish in a large river ecosystem between contrasting habitats and turbidity conditions

    USGS Publications Warehouse

    Dodrill, Michael J.; Yard, Mike; Pine, William E.

    2016-01-01

    This study examined predation risk for juvenile native fish between two riverine shoreline habitats, backwater and debris fan, across three discrete turbidity levels (low, intermediate, high) to understand environmental risks associated with habitat use in a section of the Colorado River in Grand Canyon, AZ. Inferences are particularly important to juvenile native fish, including the federally endangered humpback chub Gila cypha. This species uses a variety of habitats including backwaters which are often considered important rearing areas. Densities of two likely predators, adult rainbow trout Oncorhynchus mykiss and adult humpback chub, were estimated between habitats using binomial mixture models to examine whether higher predator density was associated with patterns of predation risk. Tethering experiments were used to quantify relative predation risk between habitats and turbidity conditions. Under low and intermediate turbidity conditions, debris fan habitat showed higher relative predation risk compared to backwaters. In both habitats the highest predation risk was observed during intermediate turbidity conditions. Density of likely predators did not significantly differ between these habitats. This information can help managers in Grand Canyon weigh flow policy options designed to increase backwater availability or extant turbidity conditions.

  2. Forecasting fish biomasses, densities, productions, and bioaccumulation potentials of Mid-Atlantic wadeable streams

    EPA Science Inventory

    Regional fishery conditions of Mid-Atlantic wadeable streams in the eastern United States are estimated using the BASS bioaccumulation and fish community model and data collected by the U.S. Environmental Protection Agency's Environmental Monitoring and Assessment Program (EMAP)....

  3. Radiative cooling efficiencies and predicted spectra of species of the Io plasma torus

    NASA Technical Reports Server (NTRS)

    Shemansky, D. E.

    1980-01-01

    Calculations of the physical condition of the Io plasma torus have been made based on the recent Voyager EUV observations. The calculations assume a thin plasma in collisional ionization equilibrium among the states within each species. The observations of the torus are all consistent with this condition. The major energy loss mechanism is radiative cooling in discrete transitions. Calculations of radiative cooling efficiencies of the identified species lead to an estimated energy loss rate of at least 1.5 × 10^12 W. The mean electron temperature and density of the plasma are estimated to be 100,000 K and 2100/cu cm. The estimated number densities of S III, S IV, and O III are roughly 95, 80, and 190-740/cu cm. Upper limits have been placed on a number of other species based on the first published Voyager EUV spectrum of the torus. The assumption that energy is supplied to the torus through injection of neutral particles from Io leads to the conclusion that ion loss rates are controlled by diffusion, and relative species abundances consequently are not controlled by collisional ionization equilibrium.

  4. Estimating the density of honeybee colonies across their natural range to fill the gap in pollinator decline censuses.

    PubMed

    Jaffé, Rodolfo; Dietemann, Vincent; Allsopp, Mike H; Costa, Cecilia; Crewe, Robin M; Dall'olio, Raffaele; DE LA Rúa, Pilar; El-Niweiri, Mogbel A A; Fries, Ingemar; Kezic, Nikola; Meusel, Michael S; Paxton, Robert J; Shaibi, Taher; Stolle, Eckart; Moritz, Robin F A

    2010-04-01

    Although pollinator declines are a global biodiversity threat, the demography of the western honeybee (Apis mellifera) has not been considered by conservationists because it is biased by the activity of beekeepers. To fill this gap in pollinator decline censuses and to provide a broad picture of the current status of honeybees across their natural range, we used microsatellite genetic markers to estimate colony densities and genetic diversity at different locations in Europe, Africa, and central Asia that had different patterns of land use. Genetic diversity and colony densities were highest in South Africa and lowest in Northern Europe and were correlated with mean annual temperature. Confounding factors not related to climate, however, are also likely to influence genetic diversity and colony densities in honeybee populations. Land use showed a significantly negative influence over genetic diversity and the density of honeybee colonies over all sampling locations. In Europe honeybees sampled in nature reserves had genetic diversity and colony densities similar to those sampled in agricultural landscapes, which suggests that the former are not wild but may have come from managed hives. Other results also support this idea: putative wild bees were rare in our European samples, and the mean estimated density of honeybee colonies on the continent closely resembled the reported mean number of managed hives. Current densities of European honeybee populations are in the same range as those found in the adverse climatic conditions of the Kalahari and Saharan deserts, which suggests that beekeeping activities do not compensate for the loss of wild colonies. Our findings highlight the importance of reconsidering the conservation status of honeybees in Europe and of regarding beekeeping not only as a profitable business for producing honey, but also as an essential component of biodiversity conservation.

  5. A technique for routinely updating the ITU-R database using radio occultation electron density profiles

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio; Azpilicueta, Francisco; Nava, Bruno

    2013-09-01

    Well-credited and widely used ionospheric models, such as the International Reference Ionosphere (IRI) or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted Least Squares algorithm is used to down-weight unreliable measurements (occasionally, entire profiles) and to retrieve NmF2 and hmF2 values, together with their error estimates, from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons under high and low solar activity conditions. The global mean error of the resulting maps, as estimated by the Least Squares technique, is equivalent to about 7% of the estimated value for the F2-peak electron density, and ranges from 2.0 to 5.6 km for the height (2%).
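
    The re-weighted Least Squares idea used here to down-weight unreliable measurements can be sketched generically. This is a Huber-type iteratively re-weighted least squares on synthetic data, not the paper's actual implementation: residuals beyond a threshold get progressively smaller weights, so a gross outlier barely pulls the fit.

```python
import numpy as np

def irls(A, y, n_iter=20, delta=1.0):
    """Iteratively re-weighted least squares with Huber-type weights:
    residuals larger than `delta` get weight delta/|r| (down-weighted)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - A @ x
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        sw = np.sqrt(w)
        # weighted least squares via row scaling
        x = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return x

# fit a line through data with one gross outlier (all values hypothetical)
t = np.linspace(0, 1, 20)
y = 2.0 + 3.0 * t
y[5] += 50.0                                    # one corrupted measurement
A = np.column_stack([np.ones_like(t), t])
x = irls(A, y, delta=0.5)                       # recovers roughly [2, 3]
```

An ordinary least squares fit on the same data would be pulled far off by the corrupted point; the re-weighting makes the recovered parameters nearly insensitive to it.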

  6. Generalizing the Iterative Proportional Fitting Procedure.

    DTIC Science & Technology

    1980-04-01

    Csiszár gives conditions under which P(R) exists (it is always unique) and develops a geometry of I-divergence using an analogue of Pythagoras' Theorem. As our goal is to study maximum likelihood estimation in contingency tables, we turn briefly to the problem of estimating a multinomial... we invoke a result of Csiszár (due originally to Kullback (1959)), giving the form of the density of the I-projection. Csiszár's Theorem 3.1, which we
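
    The iterative proportional fitting procedure that this report generalizes can be sketched in its classical two-way form: alternately rescale the rows and columns of a positive table until its margins match the targets. The seed table and target margins below are arbitrary.

```python
import numpy as np

def ipf(table, row_targets, col_targets, n_iter=100):
    """Classical iterative proportional fitting: alternately scale rows
    and columns of a positive table so its margins converge to the
    target marginal totals (the I-projection onto those constraints)."""
    T = table.astype(float).copy()
    for _ in range(n_iter):
        T *= (row_targets / T.sum(axis=1))[:, None]   # match row sums
        T *= col_targets / T.sum(axis=0)              # match column sums
    return T

seed = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
T = ipf(seed,
        row_targets=np.array([10.0, 20.0]),
        col_targets=np.array([12.0, 18.0]))
```

Note the targets must share the same grand total (here 30); the fitted table preserves the seed's interaction structure while matching both margins.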

  7. Snow conditions as an estimator of the breeding output in high-Arctic pink-footed geese Anser brachyrhynchus

    USGS Publications Warehouse

    Jensen, Gitte Høj; Madsen, Jesper; Johnson, Fred A.; Tamstorf, Mikkel P.

    2014-01-01

    The Svalbard-breeding population of pink-footed geese Anser brachyrhynchus has increased during the last decades and is giving rise to agricultural conflicts along their migration route, as well as causing grazing impacts on tundra vegetation. An adaptive flyway management plan has been implemented, which will be based on predictive population models including environmental variables expected to affect goose population development, such as weather conditions on the breeding grounds. A local study in Svalbard showed that snow cover prior to egg laying is a crucial factor for the reproductive output of pink-footed geese, and MODIS satellite images provided a useful estimator of snow cover. In this study, we up-scaled the analysis to the population level by examining various measures of snow conditions and compared them with the overall breeding success of the population as indexed by the proportion of juveniles in the autumn population. As explanatory variables, we explored MODIS images, satellite-based radar measures of onset of snow melt, winter NAO index, and the May temperature sum and May thaw days. To test for the presence of density dependence, we included the number of adults in the population. For 2000–2011, MODIS-derived snow cover (available since 2000) was the strongest indicator of breeding conditions. For 1981–2011, winter NAO and May thaw days had equal weight. Interestingly, there appears to have been a phase shift from density-dependent to density-independent reproduction, which is consistent with a hypothesis of released breeding potential due to the recent advancement of spring in Svalbard.

  8. Variations of E-region total electron content and electron density profiles over high latitudes during winter solstice 2007 using radio occultation measurements

    NASA Astrophysics Data System (ADS)

    Agrawal, Kajli

    The space weather phenomenon involves the Sun, interplanetary space, and the Earth. Different space weather conditions have diverse effects on the various layers of the Earth's atmosphere. Technological advancements have created a situation in which human civilization is dependent not only on resources from deep inside the Earth, but also on the upper atmosphere and outer space. Therefore, it is essential to improve the understanding of the impacts of space weather conditions on the ionosphere. This research focuses on the variation of total electron content (TEC) and the electron density within the E-region of the ionosphere, which extends from 80-150 km above the surface of the Earth, using radio occultation measurements obtained by COSMIC satellites together with the Ionospheric Data Assimilation Four-Dimensional (IDA4D) algorithm, which is used to mitigate the effects of the F-region in the E-region estimation (Bust, Garner, & Gaussiran, 2004). E-region TEC and electron density estimates for the geomagnetic latitude range 45°-80°, geomagnetic longitude range -180° to 180°, and 1800-0600 MLT (magnetic local time) are presented for two active and two quiet days during winter solstice 2007. Active and quiet days are identified based on Kp index values. Some of the important findings are (1) the E-region electron peak density is higher during active days than during quiet days, and (2) during both types of days, higher density values were found at magnetic latitudes above 60° during early-morning MLT hours. Prominent E-region features (TEC and electron density) were observed during the most active days over the magnetic latitude range 60°-70° at ~02:00 MLT.

  9. Ionospheric tomography by gradient-enhanced kriging with STEC measurements and ionosonde characteristics

    NASA Astrophysics Data System (ADS)

    Minkwitz, David; van den Boogaart, Karl Gerald; Gerzen, Tatjana; Hoque, Mainul; Hernández-Pajares, Manuel

    2016-11-01

    The estimation of the ionospheric electron density by kriging is based on the optimization of a parametric measurement covariance model. First, the extension of kriging with slant total electron content (STEC) measurements based on a spatial covariance to kriging with a spatial-temporal covariance model, assimilating STEC data of a sliding window, is presented. Secondly, a novel tomography approach by gradient-enhanced kriging (GEK) is developed. Beyond the ingestion of STEC measurements, GEK assimilates ionosonde characteristics, providing peak electron density measurements as well as gradient information. Both approaches deploy the 3-D electron density model NeQuick as a priori information and estimate the covariance parameter vector within a maximum likelihood estimation for the dedicated tomography time stamp. The methods are validated in the European region for two periods covering quiet and active ionospheric conditions. The kriging with spatial and spatial-temporal covariance models is analysed regarding its capability to reproduce STEC, differential STEC and foF2. To this end, the estimates are compared to the NeQuick model results, the 2-D TEC maps of the International GNSS Service and the DLR's Ionospheric Monitoring and Prediction Center, and in the case of foF2 to two independent ionosonde stations. Moreover, simulated STEC and ionosonde measurements are used to investigate the electron density profiles estimated by the GEK in comparison to kriging with STEC only. The results indicate a crucial improvement in the initial guess by the developed methods and point to the potential of GEK to compensate for a bias in the peak height hmF2.
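The kriging machinery this record builds on can be illustrated with a minimal 1-D ordinary-kriging sketch. This is not the authors' NeQuick-based spatial-temporal implementation; the exponential covariance, its parameters, and all names are illustrative assumptions.

```python
import numpy as np

def exp_cov(d, sill=1.0, corr_len=1.0):
    """Illustrative exponential covariance model C(d) = sill * exp(-d / corr_len)."""
    return sill * np.exp(-d / corr_len)

def ordinary_kriging(x_obs, y_obs, x_new, sill=1.0, corr_len=1.0):
    """Predict y at x_new from observations, with weights constrained to sum to one."""
    n = len(x_obs)
    # Kriging system: observation covariances bordered by a Lagrange-multiplier row/column.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = exp_cov(np.abs(x_obs[:, None] - x_obs[None, :]), sill, corr_len)
    K[n, n] = 0.0
    k = np.append(exp_cov(np.abs(x_obs - x_new), sill, corr_len), 1.0)
    w = np.linalg.solve(K, k)          # weights, plus the multiplier in w[n]
    return float(w[:n] @ y_obs)

# With no nugget term, kriging interpolates exactly at observation points.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 3.0])
print(ordinary_kriging(x, y, 1.0))     # ≈ 2.0 (exact at an observed location)
```

Gradient-enhanced kriging extends the same linear system with covariances between values and gradients, which is how the ionosonde gradient information enters.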

  10. Physical characteristics of chamise as a wildland fuel

    Treesearch

    Clive M. Countryman; Charles W. Philpot

    1970-01-01

    Chamise shrubs in southern California were analyzed for the physical characteristics known to affect fire behavior, such as density, fuel loading, and fuel bed porosity. Considerable variation was found, but results are helpful in developing estimates of chamise fuel characteristics for fire control under field conditions.

  11. High-Areal-Density Fuel Assembly in Direct-Drive Cryogenic Implosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sangster, T.C.; Goncharov, V.N.; Radha, P.B.

    The first observation of ignition-relevant areal-density deuterium from implosions of capsules with cryogenic fuel layers at ignition-relevant adiabats is reported. The experiments were performed on the 60-beam, 30-kJ UV OMEGA Laser System [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. Neutron-averaged areal densities of 202±7 mg/cm^2 and 182±7 mg/cm^2 (corresponding to estimated peak fuel densities in excess of 100 g/cm^3) were inferred using an 18-kJ direct-drive pulse designed to put the converging fuel on an adiabat of 2.5. These areal densities are in good agreement with the predictions of hydrodynamic simulations, indicating that the fuel adiabat can be accurately controlled under ignition-relevant conditions.

  12. Population ecology of the mallard: II. Breeding habitat conditions, size of the breeding populations, and production indices

    USGS Publications Warehouse

    Pospahala, Richard S.; Anderson, David R.; Henny, Charles J.

    1974-01-01

    This report, the second in a series on a comprehensive analysis of mallard population data, provides information on mallard breeding habitat, the size and distribution of breeding populations, and indices to production. The information in this report is primarily the result of large-scale aerial surveys conducted during May and July, 1955-73. The history of the conflict in resource utilization between agriculturalists and wildlife conservation interests in the primary waterfowl breeding grounds is reviewed. The numbers of ponds present during the breeding season and the midsummer period and the effects of precipitation and temperature on the number of ponds present are analyzed in detail. No significant cycles in precipitation were detected and it appears that precipitation is primarily influenced by substantial seasonal and random components. Annual estimates (1955-73) of the number of mallards in surveyed and unsurveyed breeding areas provided estimates of the size and geographic distribution of breeding mallards in North America. The estimated size of the mallard breeding population in North America has ranged from a high of 14.4 million in 1958 to a low of 7.1 million in 1965. Generally, the mallard breeding population began to decline after the 1958 peak until 1962, and remained below 10 million birds until 1970. The decline and subsequent low level of the mallard population between 1959 and 1969 generally coincided with a period of poor habitat conditions on the major breeding grounds. The density of mallards was highest in the Prairie-Parkland Area with an average of nearly 19.2 birds per square mile. The proportion of the continental mallard breeding population in the Prairie-Parkland Area ranged from 30% in 1962 to a high of 60% in 1956. The geographic distribution of breeding mallards throughout North America was significantly related to the number of May ponds in the Prairie-Parkland Area.
Estimates of midsummer habitat conditions and indices to production from the July Production Survey were studied in detail. Several indices relating to production showed marked declines from west to east in the Prairie-Parkland Area; these were: (1) density of breeding mallards (per square mile and per May pond), (2) brood density (per square mile and per July pond), (3) average brood size (all species combined), and (4) brood survival from class II to class III. An index to late nesting and renesting efforts was highest during years when midsummer water conditions were good. Production rates of many ducks breeding in North America appear to be regulated by both density-dependent and density-independent factors. Spacing of birds in the Prairie-Parkland Area appeared to be a key factor in the density-dependent regulation of the population. The spacing mechanism, in conjunction with habitat conditions, influenced some birds to overfly the primary breeding grounds into less favorable habitats to the north and northwest where the production rate may be suppressed. The production rate of waterfowl in the Prairie-Parkland Area seems to be independent of density (after emigration has taken place) because the production index appears to be a linear function of the number of breeding birds in the area. Similarly, the production rate of waterfowl in northern Saskatchewan and northern Manitoba appeared to be independent of density. Production indices in these northern areas appear to be a linear function of the size of the breeding population. Thus, the density and distribution of breeding ducks are probably regulated through a spacing mechanism that is at least partially dependent on measurable environmental factors. The result is a density-dependent process operating to ultimately affect the production and production rate of breeding ducks on a continent-wide basis.
Continental production, and therefore the size of the fall population, is probably partially regulated by the number of birds that are distributed north and northwest into environments less favorable for successful reproduction. Thus, spacing of the birds in the Prairie-Parkland Area and the movement of a fraction of the birds out of the prime breeding areas may be key factors in the density-dependent regulation of the total mallard population.

  13. Temporal monitoring of vessels activity using day/night band in Suomi NPP on South China Sea

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Takashi; Asanuma, Ichio; Park, Jong Geol; Mackin, Kenneth J.; Mittleman, John

    2017-05-01

    In this research, we focus on vessel detection using satellite imagery from the day/night band (DNB) on Suomi NPP in order to monitor changes in vessel activity in the South China Sea region. In this paper, we consider the relation between temporal changes in vessel activity and maritime environmental events, based on vessel traffic density estimation using DNB. DNB is a moderate-resolution (350-700 m) satellite imagery product, but it can detect the fishing lights of fishery boats at night on a daily basis. The advantage of DNB is continuous monitoring over a wide area compared to other vessel detection and locating systems. However, DNB is strongly influenced by cloud and lunar reflection. Therefore, we additionally used the brightness temperature at 3.7 μm (BT3.7) for cloud information. In our previous research, we constructed an empirical vessel detection model based on DNB contrast and the estimation of cloud conditions using BT3.7, and we proposed a vessel traffic density estimation method based on this empirical model. In this paper, we construct temporal density estimation maps of the South China Sea and East China Sea in order to extract knowledge from changes in vessel activity.

  14. An Equation of State for Hypersaline Water in Great Salt Lake, Utah, USA

    USGS Publications Warehouse

    Naftz, D.L.; Millero, F.J.; Jones, B.F.; Green, W.R.

    2011-01-01

    Great Salt Lake (GSL) is one of the largest and most saline lakes in the world. In order to accurately model limnological processes in GSL, hydrodynamic calculations require the precise estimation of water density (ρ) under a variety of environmental conditions. An equation of state was developed with water samples collected from GSL to estimate density as a function of salinity and water temperature. The ρ of water samples from the south arm of GSL was measured as a function of temperature ranging from 278 to 323 kelvin (K) and conductivity salinities ranging from 23 to 182 g L^-1 using an Anton Paar density meter. These results have been used to develop the following equation of state for GSL (σ = ±0.32 kg m^-3): ρ - ρ0 = 184.01062 + 1.04708*S - 1.21061*T + 3.14721E-4*S^2 + 0.00199*T^2, where ρ0 is the density of pure water in kg m^-3, S is conductivity salinity in g L^-1, and T is water temperature in kelvin. © 2011 U.S. Government.
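The polynomial equation of state above can be transcribed directly; the sketch below does so, with arbitrary illustrative inputs (S = 100 g/L, T = 298 K) that are not values from the study.

```python
# Direct transcription of the reported GSL equation of state (a sketch;
# the sample inputs below are arbitrary illustrative values).

def gsl_density_anomaly(salinity_g_per_L: float, temp_K: float) -> float:
    """Return rho - rho0 (kg m^-3) for GSL water, with S in g/L and T in kelvin."""
    S, T = salinity_g_per_L, temp_K
    return (184.01062
            + 1.04708 * S
            - 1.21061 * T
            + 3.14721e-4 * S ** 2
            + 0.00199 * T ** 2)

print(gsl_density_anomaly(100.0, 298.0))  # ≈ 107.82 kg m^-3 above pure water
```

The resulting density excess of roughly 108 kg m^-3 for a 100 g/L brine is consistent with the hypersaline character the record describes.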

  15. Noncommuting observables in quantum detection and estimation theory

    NASA Technical Reports Server (NTRS)

    Helstrom, C. W.

    1972-01-01

    Basing decisions and estimates on simultaneous approximate measurements of noncommuting observables in a quantum receiver is shown to be equivalent to measuring commuting projection operators on a larger Hilbert space than that of the receiver itself. The quantum-mechanical Cramer-Rao inequalities derived from right logarithmic derivatives and symmetrized logarithmic derivatives of the density operator are compared, and it is shown that the latter give superior lower bounds on the error variances of individual unbiased estimates of arrival time and carrier frequency of a coherent signal. For a suitably weighted sum of the error variances of simultaneous estimates of these, the former yield the superior lower bound under some conditions.

  16. Natural sampling strategy

    NASA Technical Reports Server (NTRS)

    Hallum, C. R.; Basu, J. P. (Principal Investigator)

    1979-01-01

    A natural stratum-based sampling scheme and the aggregation procedures for estimating wheat area, yield, and production and their associated prediction error estimates are described. The methodology utilizes LANDSAT imagery and agrophysical data to permit an improved stratification in foreign areas by ignoring political boundaries and restratifying along boundaries that are more homogeneous with respect to the distribution of agricultural density, soil characteristics, and average climatic conditions. A summary of test results is given including a discussion of the various problems encountered.

  17. Polar bear aerial survey in the eastern Chukchi Sea: A pilot study

    USGS Publications Warehouse

    Evans, Thomas J.; Fischbach, Anthony S.; Schliebe, Scott L.; Manly, Bryan; Kalxdorff, Susanne B.; York, Geoff S.

    2003-01-01

    Alaska has two polar bear populations: the Southern Beaufort Sea population, shared with Canada, and the Chukchi/Bering Seas population, shared with Russia. Currently a reliable population estimate for the Chukchi/Bering Seas population does not exist. Land-based aerial and mark-recapture population surveys may not be possible in the Chukchi Sea because variable ice conditions, the limited range of helicopters, extremely large polar bear home ranges, and severe weather conditions may limit access to remote areas. Thus line-transect aerial surveys from icebreakers may be the best available tool to monitor this polar bear stock. In August 2000, a line-transect survey was conducted in the eastern Chukchi Sea and western Beaufort Sea from helicopters based on a U.S. Coast Guard icebreaker under the "Ship of Opportunity" program. The objectives of this pilot study were to estimate polar bear density in the eastern Chukchi and western Beaufort Seas and to assess the logistical feasibility of using ship-based aerial surveys to develop polar bear population estimates. Twenty-nine polar bears in 25 groups were sighted on 94 transects (8257 km). The density of bears was estimated as 1 bear per 147 km² (CV = 38%). Additional aerial surveys in late fall, using dedicated icebreakers, would be required to achieve the number of sightings, survey effort, coverage, and precision needed for more effective monitoring of population trends in the Chukchi Sea.
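Density figures like "1 bear per 147 km²" come from the conventional line-transect estimator D = n / (2·L·ESW). The sketch below is a generic illustration; the effective strip half-width (ESW) value used is an assumed input, not a quantity reported in the study.

```python
def line_transect_density(n_sighted: int, effort_km: float, esw_km: float) -> float:
    """Conventional line-transect estimator D = n / (2 * L * ESW),
    where ESW is the effective strip half-width from the detection function."""
    return n_sighted / (2.0 * effort_km * esw_km)

# 29 bears over 8257 km of transect, with an assumed ESW of 0.26 km.
d = line_transect_density(29, 8257.0, 0.26)
print(1.0 / d)  # ≈ 148 km^2 per bear
```

In practice the ESW is fitted from the distribution of perpendicular sighting distances (e.g. by program DISTANCE), which is also where the reported CV comes from.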

  18. Homogeneous buoyancy-generated turbulence

    NASA Technical Reports Server (NTRS)

    Batchelor, G. K.; Canuto, V. M.; Chasnov, J. R.

    1992-01-01

    Using a theoretical analysis of fundamental equations and a numerical simulation of the flow field, the statistically homogeneous motion that is generated by buoyancy forces after the creation of homogeneous random fluctuations in the density of infinite fluid at an initial instant is examined. It is shown that analytical results together with numerical results provide a comprehensive description of the 'birth, life, and death' of buoyancy-generated turbulence. Results of numerical simulations yielded the mean-square density and mean-square velocity fluctuations and the associated spectra as functions of time for various initial conditions, and the time required for the mean-square density fluctuation to fall to a specified small value was estimated.

  19. Development and application of traffic density-based parameters for studying near-road air pollutant exposure

    EPA Science Inventory

    Increasingly human populations are living and/or working in close proximity to heavily travelled roadways. There is a growing body of research indicating a variety of health conditions are adversely affected by near-road air pollutants. To reliably estimate the health risk assoc...

  20. Development and application of traffic density-based parameters for near-road air pollutant exposure

    EPA Science Inventory

    Increasingly human populations are living and/or working in close proximity to heavily travelled roadways. There is a growing body of research indicating a variety of health conditions are adversely affected by near-road air pollutants. To reliably estimate the health risk assoc...

  1. CONSTRAINTS ON THE PHYSICAL PROPERTIES OF MAIN BELT COMET P/2013 R3 FROM ITS BREAKUP EVENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirabayashi, Masatoshi; Sánchez, Diego Paul; Gabriel, Travis

    2014-07-01

    Jewitt et al. recently reported that main belt comet P/2013 R3 experienced a breakup, probably due to rotational disruption, with its components separating on mutually hyperbolic orbits. We propose a technique for constraining physical properties of the proto-body, especially the initial spin period and cohesive strength, as a function of the body's estimated size and density. The breakup conditions are developed by combining the mutual orbit dynamics of the smaller components and the failure condition of the proto-body. Given a proto-body with a bulk density ranging from 1000 kg m^-3 to 1500 kg m^-3 (a typical range of the bulk density of C-type asteroids), we obtain possible values of the cohesive strength (40-210 Pa) and the initial spin period (0.48-1.9 hr). From this result, we conclude that although the proto-body could have been a rubble pile, it was likely spinning beyond its gravitational binding limit and would have needed cohesive strength to hold itself together. Additional observations of P/2013 R3 will enable stronger constraints on this event, and the present technique will be able to give more precise estimates of its internal structure.

  2. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (mean = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (mean = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
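The abstract's caveat that mark-recapture densities "depend heavily on assumptions about animal home ranges" stems from the effective-trap-area conversion. A minimal sketch of that conversion follows; the boundary-strip correction and every number in it are illustrative assumptions, not values from the study.

```python
import math

def effective_trap_area_ha(grid_w_m: float, grid_h_m: float, strip_m: float) -> float:
    """Rectangular trap grid expanded by a boundary strip (often half the
    assumed home-range width), with the four corners rounded."""
    w, h = grid_w_m + 2 * strip_m, grid_h_m + 2 * strip_m
    area_m2 = w * h - (4 - math.pi) * strip_m ** 2   # quarter-circle corners
    return area_m2 / 10_000.0

def density_per_ha(abundance: float, area_ha: float) -> float:
    """Convert a mark-recapture abundance estimate to a density."""
    return abundance / area_ha

# Hypothetical numbers: 68 animals on a 100 m x 100 m grid with a 50 m strip.
print(density_per_ha(68, effective_trap_area_ha(100, 100, 50)))  # ≈ 18 per ha
```

Doubling the assumed strip width changes the effective area, and hence the density, substantially, which is exactly the sensitivity the record warns about.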

  3. Dynamical interpretation of conditional patterns

    NASA Technical Reports Server (NTRS)

    Adrian, R. J.; Moser, R. D.; Moin, P.

    1988-01-01

    While great progress is being made in characterizing the 3-D structure of organized turbulent motions using conditional averaging analysis, there is a lack of theoretical guidance regarding the interpretation and utilization of such information. Questions concerning the significance of the structures, their contributions to various transport properties, and their dynamics cannot be answered without recourse to appropriate dynamical governing equations. One approach which addresses some of these questions uses the conditional fields as initial conditions and calculates their evolution from the Navier-Stokes equations, yielding valuable information about stability, growth, and longevity of the mean structure. To interpret statistical aspects of the structures, a different type of theory which deals with the structures in the context of their contributions to the statistics of the flow is needed. As a first step toward this end, an effort was made to integrate the structural information from the study of organized structures with a suitable statistical theory. This is done by stochastically estimating the two-point conditional averages that appear in the equation for the one-point probability density function, and relating the structures to the conditional stresses. Salient features of the estimates are identified, and the structure of the one-point estimates in channel flow is defined.

  4. The Baryonic and Dark Matter Distributions in Abell 401

    NASA Astrophysics Data System (ADS)

    Nevalainen, J.; Markevitch, M.; Forman, W.

    1999-11-01

    We combine spatially resolved ASCA temperature data with ROSAT imaging data to constrain the total mass distribution in the cluster A401, assuming that the cluster is in hydrostatic equilibrium, but without the assumption of gas isothermality. We obtain a total mass within the X-ray core (290 h50^-1 kpc) of 1.2 (+0.1/-0.5) × 10^14 h50^-1 Msolar at the 90% confidence level, 1.3 times larger than the isothermal estimate. The total mass within r500 (1.7 h50^-1 Mpc) is M500 = 0.9 (+0.3/-0.2) × 10^15 h50^-1 Msolar at 90% confidence, in agreement with the optical virial mass estimate, and 1.2 times smaller than the isothermal estimate. Our M500 value is 1.7 times smaller than that estimated using the mass-temperature scaling law predicted by simulations. The best-fit dark matter density profile scales as r^-3.1 at large radii, which is consistent with the Navarro, Frenk & White (NFW) ``universal profile'' as well as the King profile of the galaxy density in A401. From the imaging data, the gas density profile is shallower than the dark matter profile, scaling as r^-2.1 at large radii, leading to a monotonically increasing gas mass fraction with radius. Within r500 the gas mass fraction reaches a value of fgas = 0.21 (+0.06/-0.05) h50^-3/2 (90% confidence errors). Assuming that fgas (plus an estimate of the stellar mass) is the universal value of the baryon fraction, we estimate the 90% confidence upper limit of the cosmological matter density to be Ωm < 0.31, in conflict with an Einstein-de Sitter universe. Even though the NFW dark matter density profile is statistically consistent with the temperature data, its central temperature cusp would lead to convective instability at the center, because the gas density does not have a corresponding peak. One way to reconcile a cusp-shaped total mass profile with the observed gas density profile, regardless of the temperature data, is to introduce a significant nonthermal pressure in the center. 
Such a pressure must satisfy the hydrostatic equilibrium condition without inducing turbulence. Alternately, significant mass drop-out from the cooling flow would make the temperature less peaked and the NFW profile acceptable. However, the quality of data is not adequate to test this possibility.

  5. Rhodnius prolixus and Rhodnius robustus-like (Hemiptera, Reduviidae) wing asymmetry under controlled conditions of population density and feeding frequency.

    PubMed

    Márquez, E J; Saldamando-Benjumea, C I

    2013-09-01

    Habitat change in Rhodnius spp. may represent an environmental challenge for the development of the species, particularly when feeding frequency and population density vary in nature. To estimate the effect of these variables on developmental stability, the degree of directional asymmetry (DA) and fluctuating asymmetry (FA) in the wing size and shape of R. prolixus and R. robustus-like were measured under controlled laboratory conditions. DA and FA in wing size and shape were significant in both species, but their variation patterns showed both inter-specific and sexually dimorphic differences in the FA of wing size and shape induced by nutritional stress. These results suggest different abilities of the sexes and of the sylvatic and domestic genotypes of Rhodnius to buffer these stress conditions. However, both species showed non-significant differences in the levels of FA between treatments that simulated sylvan vs domestic conditions, indicating that developmental noise did not explain the variation in wing size and shape found in previous studies. Thus, this result confirms that the variation in wing size and shape in response to treatments constitutes a plastic response of these genotypes to population density and feeding frequency.

  6. Monte Carlo simulation of hard spheres near random closest packing using spherical boundary conditions

    NASA Astrophysics Data System (ADS)

    Tobochnik, Jan; Chapin, Phillip M.

    1988-05-01

    Monte Carlo simulations were performed for hard disks on the surface of an ordinary sphere and hard spheres on the surface of a four-dimensional hypersphere. Starting from the low density fluid the density was increased to obtain metastable amorphous states at densities higher than previously achieved. Above the freezing density the inverse pressure decreases linearly with density, reaching zero at packing fractions equal to 68% for hard spheres and 84% for hard disks. Using these new estimates for random closest packing and coefficients from the virial series we obtain an equation of state which fits all the data up to random closest packing. Usually, the radial distribution function showed the typical split second peak characteristic of amorphous solids and glasses. High density systems which lacked this split second peak and showed other sharp peaks were interpreted as signaling the onset of crystal nucleation.

  7. Brain Tissue Compartment Density Estimated Using Diffusion-Weighted MRI Yields Tissue Parameters Consistent With Histology

    PubMed Central

    Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi

    2015-01-01

    We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (Diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639

  8. Estimating Bat and Bird Mortality Occurring at Wind Energy Turbines from Covariates and Carcass Searches Using Mixture Models

    PubMed Central

    Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver

    2013-01-01

    Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production. PMID:23844144

  9. Estimating bat and bird mortality occurring at wind energy turbines from covariates and carcass searches using mixture models.

    PubMed

    Korner-Nievergelt, Fränzi; Brinkmann, Robert; Niermann, Ivo; Behr, Oliver

    2013-01-01

    Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.

  10. Density Estimation for New Solid and Liquid Explosives

    DTIC Science & Technology

    1977-02-17

    The group additivity approach was shown to be applicable to density estimation. The densities of approximately 180 explosives and related compounds of very diverse compositions were estimated, and almost all the estimates were quite reasonable. Of the 168 compounds for which direct comparisons could be made (see Table 6), 36.9% of the estimated densities were within 1% of the measured densities, 33.3% were within 1-2%, and 11.9% were within 2-3%.
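The group-additivity idea behind this record is that molar volume is the sum of per-group contributions, so density follows from the molar mass. The sketch below illustrates that structure only; the group-volume values are hypothetical placeholders, not the report's tabulated contributions.

```python
# Group-additivity density estimation sketch: rho = M / sum(n_i * V_i).
# The group volumes below are hypothetical illustrative values (cm^3/mol).
GROUP_VOLUMES_CM3_PER_MOL = {"CH3": 33.5, "CH2": 16.1, "NO2": 24.0}

def estimated_density(molar_mass_g_per_mol: float, group_counts: dict) -> float:
    """Estimated density in g/cm^3 from molar mass and per-group volume sums."""
    molar_volume = sum(GROUP_VOLUMES_CM3_PER_MOL[g] * n
                       for g, n in group_counts.items())
    return molar_mass_g_per_mol / molar_volume
```

A compound contributing five hypothetical CH2 groups and a molar mass of 100 g/mol would come out at 100 / 80.5 ≈ 1.24 g/cm^3 under these placeholder values.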

  11. Testing the gravitational instability hypothesis?

    NASA Technical Reports Server (NTRS)

    Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.

    1994-01-01

    We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Omega (or, more generally, an estimate of beta ≡ Omega^0.6/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Omega or beta estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated beta approaches the true value in such cases, and in our numerical simulations the estimated beta values are reasonably accurate for both gravitational and nongravitational models. 
Reconstruction tests that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.
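The β estimator described above follows from the linear-theory relation div v = -H0 β δ_g, so β is recoverable by regressing the velocity divergence on the galaxy density contrast. A minimal numerical sketch (the H0 and β values and both fields are synthetic stand-ins, not the paper's simulations):

```python
import numpy as np

# Linear gravitational-instability relation: div(v) = -H0 * beta * delta_g.
# Recover beta by least-squares regression of the velocity divergence on the
# galaxy density contrast. All fields below are synthetic mock data.
rng = np.random.default_rng(0)
H0 = 100.0                    # Hubble constant, km/s/Mpc (illustrative)
beta_true = 0.5               # Omega^0.6 / b in the mock universe

delta_g = rng.normal(0.0, 0.3, size=10_000)                   # density contrast
div_v = -H0 * beta_true * delta_g + rng.normal(0.0, 5.0, size=delta_g.size)

beta_est = -np.sum(div_v * delta_g) / (H0 * np.sum(delta_g**2))
```

In a nongravitational scenario the regression can still succeed, which is exactly the degeneracy the authors point out: the recovered β then need not equal the true Omega^0.6/b.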

  12. High-Areal-Density Fuel Assembly in Direct-Drive Cryogenic Implosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sangster, T. C.; Goncharov, V. N.; Radha, P. B.

The first observation of ignition-relevant areal-density deuterium from implosions of capsules with cryogenic fuel layers at ignition-relevant adiabats is reported. The experiments were performed on the 60-beam, 30-kJ (UV) OMEGA Laser System [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. Neutron-averaged areal densities of 202±7 mg/cm² and 182±7 mg/cm² (corresponding to estimated peak fuel densities in excess of 100 g/cm³) were inferred using an 18-kJ direct-drive pulse designed to put the converging fuel on an adiabat of 2.5. These areal densities are in good agreement with the predictions of hydrodynamic simulations indicating that the fuel adiabat can be accurately controlled under ignition-relevant conditions.

  13. Optimal estimation retrieval of aerosol microphysical properties from SAGE II satellite observations in the volcanically unperturbed lower stratosphere

    NASA Astrophysics Data System (ADS)

    Wurl, D.; Grainger, R. G.; McDonald, A. J.; Deshler, T.

    2010-05-01

    Stratospheric aerosol particles under non-volcanic conditions are typically smaller than 0.1 μm. Due to fundamental limitations of scattering theory in the Rayleigh limit, these tiny particles are hard to measure with satellite instruments. As a consequence, current estimates of global aerosol properties retrieved from spectral aerosol extinction measurements tend to be strongly biased. Aerosol surface area densities, for instance, are observed to be about 40% smaller than those derived from correlative in situ measurements (Deshler et al., 2003). An accurate knowledge of the global distribution of aerosol properties is, however, essential to better understand and quantify the role they play in atmospheric chemistry, dynamics, radiation and climate. To address this need a new retrieval algorithm was developed, which employs a nonlinear Optimal Estimation (OE) method to iteratively solve for the monomodal size distribution parameters that are statistically most consistent with both the satellite-measured multi-wavelength aerosol extinction data and a priori information. By thus combining spectral extinction measurements (at visible to near-infrared wavelengths) with prior knowledge of aerosol properties at background level, even the smallest particles, which are practically invisible to optical remote sensing instruments, are taken into account. The performance of the OE retrieval algorithm was assessed based on synthetic spectral extinction data generated from both monomodal and small-mode-dominant bimodal sulphuric acid aerosol size distributions. For monomodal background aerosol, the new algorithm was shown to retrieve the particle sizes and associated integrated properties (surface area and volume densities) fairly accurately, even in the presence of large extinction uncertainty. The associated retrieved uncertainties are a good estimate of the true errors.
In the case of bimodal background aerosol, where the retrieved (monomodal) size distributions naturally differ from the correct bimodal values, the associated surface area (A) and volume densities (V) are, nevertheless, fairly accurately retrieved, except at values larger than 1.0 μm² cm⁻³ (A) and 0.05 μm³ cm⁻³ (V), where they tend to underestimate the true bimodal values. Due to the limited information content in the SAGE II spectral extinction measurements this kind of forward model error cannot be avoided here. Nevertheless, the retrieved uncertainties are a good estimate of the true errors in the retrieved integrated properties, except where the surface area density exceeds the 1.0 μm² cm⁻³ threshold. When applied to near-global SAGE II satellite extinction data measured in 1999, the retrieved OE surface area and volume densities are larger by, respectively, 20-50% and 10-40% than the estimates obtained by the SAGE II operational retrieval algorithm. An examination of the OE algorithm biases against in situ data indicates that the new OE aerosol property estimates tend to be more realistic than previous estimates obtained from remotely sensed data through other retrieval techniques. Based on the results of this study we therefore suggest that the new Optimal Estimation retrieval algorithm can contribute to an advancement in aerosol research by considerably improving current estimates of aerosol properties in the lower stratosphere under low aerosol loading conditions.
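The iterative OE step referred to above is, in Rodgers-style retrievals, a Gauss-Newton update that balances the measurement misfit against the a priori state. A toy sketch with an invented two-parameter linear forward model (not the actual SAGE II aerosol forward model):

```python
import numpy as np

def optimal_estimation(y, x_a, S_a, S_e, forward, jacobian, n_iter=10):
    """Gauss-Newton optimal estimation: iterate toward the state most
    consistent with measurement y (covariance S_e) and prior x_a (S_a)."""
    x = x_a.copy()
    S_e_inv = np.linalg.inv(S_e)
    S_a_inv = np.linalg.inv(S_a)
    for _ in range(n_iter):
        K = jacobian(x)                                     # forward-model Jacobian
        S_hat = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)  # posterior covariance
        x = x_a + S_hat @ K.T @ S_e_inv @ (y - forward(x) + K @ (x - x_a))
    return x, S_hat

# Invented linear forward model: extinction at two wavelengths as a function
# of two size-distribution parameters (purely illustrative numbers).
A = np.array([[1.0, 0.5],
              [0.3, 1.2]])
forward = lambda x: A @ x
jacobian = lambda x: A

x_true = np.array([2.0, 1.0])
y = forward(x_true)                  # noise-free synthetic measurement
x_a = np.array([1.5, 1.5])           # a priori state
S_a = np.eye(2)                      # prior covariance
S_e = 1e-4 * np.eye(2)               # measurement covariance

x_hat, S_hat = optimal_estimation(y, x_a, S_a, S_e, forward, jacobian)
```

With an informative measurement (small S_e) the retrieval is pulled to the true state; as S_e grows, x_hat relaxes toward the prior, which is how a priori knowledge constrains the particle sizes the extinction data cannot see.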

  14. Estimating Lion Abundance using N-mixture Models for Social Species

    PubMed Central

    Belant, Jerrold L.; Bled, Florent; Wilton, Clay M.; Fyumagwa, Robert; Mwampeta, Stanslaus B.; Beyer, Dean E.

    2016-01-01

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcast calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170–551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species. PMID:27786283

  15. Estimating Lion Abundance using N-mixture Models for Social Species.

    PubMed

    Belant, Jerrold L; Bled, Florent; Wilton, Clay M; Fyumagwa, Robert; Mwampeta, Stanslaus B; Beyer, Dean E

    2016-10-27

    Declining populations of large carnivores worldwide, and the complexities of managing human-carnivore conflicts, require accurate population estimates of large carnivores to promote their long-term persistence through well-informed management. We used N-mixture models to estimate lion (Panthera leo) abundance from call-in and track surveys in southeastern Serengeti National Park, Tanzania. Because of potential habituation to broadcast calls and social behavior, we developed a hierarchical observation process within the N-mixture model, conditioning lion detectability on their group response to call-ins and individual detection probabilities. We estimated 270 lions (95% credible interval = 170-551) using call-ins but were unable to estimate lion abundance from track data. We found a weak negative relationship between predicted track density and predicted lion abundance from the call-in surveys. Luminosity was negatively correlated with individual detection probability during call-in surveys. Lion abundance and track density were influenced by landcover, but the directions of the corresponding effects were undetermined. N-mixture models allowed us to incorporate multiple parameters (e.g., landcover, luminosity, observer effect) influencing lion abundance and probability of detection directly into abundance estimates. We suggest that N-mixture models employing a hierarchical observation process can be used to estimate abundance of other social, herding, and grouping species.
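The basic machinery behind an N-mixture model can be illustrated with the standard binomial form (a much simpler relative of the hierarchical call-in model used here): counts at site i on visit t are Binomial(N_i, p), latent abundance is N_i ~ Poisson(λ), and the likelihood marginalizes N_i. The data and grid search below are simulated and purely illustrative:

```python
import numpy as np
from scipy.stats import binom, poisson

def nmix_loglik(counts, lam, p, K=60):
    """Binomial N-mixture log-likelihood, marginalizing latent N per site."""
    N = np.arange(K + 1)
    prior = poisson.pmf(N, lam)           # P(N_i = N)
    ll = 0.0
    for site_counts in counts:
        lik_N = prior.copy()
        for y in site_counts:             # repeat visits at the same site
            lik_N *= binom.pmf(y, N, p)
        ll += np.log(lik_N.sum())
    return ll

# Simulate 50 sites with 3 visits each
rng = np.random.default_rng(1)
true_lam, true_p = 5.0, 0.4
N_sites = rng.poisson(true_lam, size=50)
counts = [[rng.binomial(N, true_p) for _ in range(3)] for N in N_sites]

# Crude grid search for the maximum-likelihood (lam, p)
grid = [(lam, p) for lam in np.linspace(3, 8, 11) for p in np.linspace(0.2, 0.7, 11)]
lam_hat, p_hat = max(grid, key=lambda g: nmix_loglik(counts, *g))
```

The product λ·p (expected count per visit) is well identified even when λ and p individually trade off, which is one reason hierarchical extensions such as the group-response model above are needed for social species.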

  16. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Washeleski, Robert L.; Meyer, Edmond J. IV; King, Lyon B.

    2013-10-15

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.

  17. Application of maximum likelihood methods to laser Thomson scattering measurements of low density plasmas.

    PubMed

    Washeleski, Robert L; Meyer, Edmond J; King, Lyon B

    2013-10-01

    Laser Thomson scattering (LTS) is an established plasma diagnostic technique that has seen recent application to low density plasmas. It is difficult to perform LTS measurements when the scattered signal is weak as a result of low electron number density, poor optical access to the plasma, or both. Photon counting methods are often implemented in order to perform measurements in these low signal conditions. However, photon counting measurements performed with photo-multiplier tubes are time consuming and multi-photon arrivals are incorrectly recorded. In order to overcome these shortcomings a new data analysis method based on maximum likelihood estimation was developed. The key feature of this new data processing method is the inclusion of non-arrival events in determining the scattered Thomson signal. Maximum likelihood estimation and its application to Thomson scattering at low signal levels is presented and application of the new processing method to LTS measurements performed in the plume of a 2-kW Hall-effect thruster is discussed.
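The value of counting non-arrival events can be seen in a stripped-down model (illustrative only, not the paper's full estimator): if photon counts per laser shot are Poisson(μ) but the detector effectively records only "at least one photon" per shot, the naive rate undercounts multi-photon arrivals, while the empty-shot fraction P(0) = exp(-μ) yields the maximum-likelihood μ:

```python
import numpy as np

rng = np.random.default_rng(2)
mu_true = 0.8                        # mean photons per shot (synthetic)
n_shots = 100_000
photons = rng.poisson(mu_true, size=n_shots)

detected = photons > 0               # detector registers arrival / non-arrival
naive_rate = detected.mean()         # biased low: pile-up merges multi-photon shots

n_empty = np.count_nonzero(~detected)
mu_mle = -np.log(n_empty / n_shots)  # MLE from the non-arrival fraction
```

Because the non-arrival fraction is immune to pile-up, the MLE recovers the true mean rate even when multi-photon shots are indistinguishable from single-photon shots.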

  18. [Demography and nesting ecology of green iguana, Iguana iguana (Squamata: Iguanidae), in 2 exploited populations in Depresión Momposina, Colombia].

    PubMed

    Muñoz, Eliana M; Ortega, Angela M; Bock, Brian C; Páez, Vivian P

    2003-03-01

    We studied the demography and nesting ecology of two populations of Iguana iguana that face heavy exploitation and habitat modification in the Momposina Depression, Colombia. Line transect data were analyzed using the Fourier model to provide estimates of social group densities, which were found to differ both within and among populations (1.05-6.0 groups/ha). Mean group size and overall iguana density estimates varied between populations as well (1.5-13.7 iguanas/ha). The density estimates were far lower than those reported from more protected areas in Panama and Venezuela. Iguana densities were consistently higher in sites located along rivers (2.5 iguanas/group) than in sites along the margins of marshes (1.5 iguanas/group), probably due to vegetational differences. There was no correlation between density estimates and estimates of relative abundance (number of iguanas seen/hour/person) due to differing detectabilities of iguana groups among sites. The adult sex ratio (1:2.5 males:females) agreed well with other reports in the literature based upon observation of adult social groups, and probably results from the polygynous mating system in this species rather than a real demographic skew. Nesting in this population occurs from the end of January through March and hatching occurs between April and May. We monitored 34 nests, which suffered little vertebrate predation, perhaps due to the lack of a complete vertebrate fauna in this densely inhabited area, but nests suffered from inundation, cattle trampling, and infestation by phorid fly larvae. Clutch sizes in these populations were lower than all other published reports except for the iguana population on the highly xeric island of Curaçao, implying that adult females in our area are unusually small. We argue that this is more likely the result of the exploitation of these populations rather than an adaptive response to environmentally extreme conditions.

  19. Driving range estimation for electric vehicles based on driving condition identification and forecast

    NASA Astrophysics Data System (ADS)

    Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei

    2017-10-01

    With serious environmental pollution in our cities combined with the ongoing depletion of oil resources, electric vehicles are becoming a highly favored means of transport, not only for their low noise but also for their high energy efficiency and zero tailpipe emissions. The power battery serves as the energy source of electric vehicles, but it still has a few shortcomings: low energy density, high cost, and short cycle life result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly under different environments and driving conditions, and current driving range estimates carry relatively large errors because the effects of ambient temperature and driving conditions are not considered. The development of an accurate driving range estimation method would therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively reduce mileage errors and has good convergence with added robustness. First, driving cycles are identified using kernel principal component feature parameters and a fuzzy C-means clustering algorithm. Second, a fuzzy rule between the characteristic parameters and energy consumption is established in the MATLAB/Simulink environment. Furthermore, a Markov algorithm and a BP (back propagation) neural network are used to predict future driving conditions to improve the accuracy of the remaining range estimate. Finally, the driving range estimation method is evaluated under the ECE 15 cycle on a rotary drum test bench, and the experimental results are compared with the estimation results.
The results show that the proposed driving range estimation method can not only estimate the remaining mileage but also suppress fluctuation of the residual range under different driving conditions.
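The identification-then-lookup structure of such a model can be sketched as follows; the cycle labels, centroid coordinates, and consumption rates are invented placeholders, and a real implementation would use the kernel-PCA features, fuzzy C-means memberships, and Markov/BP forecasting described above:

```python
import numpy as np

# Hypothetical driving-cycle centroids: (mean speed km/h, accel std m/s^2)
centroids = {
    "urban":    np.array([20.0, 0.9]),
    "suburban": np.array([50.0, 0.5]),
    "highway":  np.array([95.0, 0.3]),
}
consumption = {"urban": 18.0, "suburban": 14.0, "highway": 16.5}  # kWh/100 km

def identify_cycle(speed_trace_kmh, dt_s=1.0):
    """Reduce a speed trace to feature parameters; pick the nearest centroid."""
    v = np.asarray(speed_trace_kmh)
    accel = np.diff(v / 3.6) / dt_s               # m/s^2
    features = np.array([v.mean(), accel.std()])
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

def remaining_range_km(battery_kwh, speed_trace_kmh):
    """Remaining range = usable battery energy / consumption of matched cycle."""
    cycle = identify_cycle(speed_trace_kmh)
    return battery_kwh / consumption[cycle] * 100.0, cycle

trace = 20.0 + 3.0 * np.sin(np.linspace(0.0, 20.0, 600))   # low-speed trace
range_km, cycle = remaining_range_km(30.0, trace)
```

Forecasting the next cycle (the Markov/BP step above) would then blend consumption rates of predicted future conditions rather than using the current cycle alone.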

  20. Savanna elephant numbers are only a quarter of their expected values

    PubMed Central

    Robson, Ashley S.; Trimble, Morgan J.; Purdon, Andrew; Young-Overton, Kim D.; Pimm, Stuart L.; van Aarde, Rudi J.

    2017-01-01

    Savannas once constituted the range of many species that human encroachment has now reduced to a fraction of their former distribution. Many survive only in protected areas. Poaching reduces savanna elephant numbers, even in protected areas, likely to the detriment of savanna ecosystems. While resources go into estimating elephant populations, an ecological benchmark by which to assess counts is lacking. Knowing how many elephants there are and how many elephants poachers kill is important, but on their own, such data lack context. We collated savanna elephant count data from 73 protected areas across the continent estimated to hold ~50% of Africa’s elephants and extracted densities from 18 broadly stable population time series. We modeled these densities using primary productivity, water availability, and an index of poaching as predictors. We then used the model to predict stable densities given current conditions and poaching for all 73 populations. Next, to generate ecological benchmarks, we predicted such densities for a scenario of zero poaching. Where historical data are available, they corroborate or exceed benchmarks. According to recent counts, collectively, the 73 savanna elephant populations are at 75% of the size predicted based on current conditions and poaching levels. However, populations are at <25% of ecological benchmarks given a scenario of zero poaching (~967,000), a total deficit of ~730,000 elephants. Populations in 30% of the 73 protected areas were <5% of their benchmarks, and the median current density as a percentage of ecological benchmark across protected areas was just 13%. The ecological context provided by these benchmark values, in conjunction with ongoing census projects, allows efficient targeting of conservation efforts. PMID:28414784

  1. Population ecology of polar bears in Davis Strait, Canada and Greenland

    USGS Publications Warehouse

    Peacock, Elizabeth; Taylor, Mitchell K.; Laake, Jeffrey L.; Stirling, Ian

    2013-01-01

    Until recently, the sea ice habitat of polar bears was understood to be variable, but environmental variability was considered to be cyclic or random, rather than progressive. Harvested populations were believed to be at levels where density effects were not considered significant. However, because we now understand that polar bear demography can also be influenced by progressive change in the environment, and some populations have increased from historically lower numbers to greater densities, a broader suite of factors should be considered in demographic studies and management. We analyzed 35 years of capture and harvest data from the polar bear (Ursus maritimus) subpopulation in Davis Strait, including data from a new study (2005–2007), to quantify its current demography. We estimated the population size in 2007 to be 2,158 ± 180 (SE), a likely increase from the 1970s. We detected variation in survival, reproductive rates, and age-structure of polar bears from geographic sub-regions. Survival and reproduction of bears in southern Davis Strait was greater than in the north and tied to a concurrent dramatic increase in breeding harp seals (Pagophilus groenlandicus) in Labrador. The most supported survival models contained geographic and temporal variables. Harp seal abundance was significantly related to polar bear survival. Our estimates of declining harvest recovery rate, and increasing total survival, suggest that the rate of harvest declined over time. Low recruitment rates, average adult survival rates, and high population density, in an environment of high prey density, but deteriorating and variable ice conditions, currently characterize the Davis Strait polar bears. Low reproductive rates may reflect negative effects of greater densities or worsening ice conditions.

  2. Estimating means and variances: The comparative efficiency of composite and grab samples.

    PubMed

    Brumelle, S; Nemetz, P; Casey, D

    1984-03-01

    This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period, which are then pooled and analyzed as a single sample. We review the well known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e., f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
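The kurtosis condition can be checked by simulation. A Monte Carlo sketch (sample sizes and distributions chosen only for illustration): estimate the population variance from one grab sample per period versus one n-fold composite per period, and compare the spread of the two estimators for a leptokurtic (Laplace) and a platykurtic (uniform) population:

```python
import numpy as np

def estimator_variances(draw, n=8, periods=200, trials=2000, seed=3):
    """Variance of the grab-based and composite-based variance estimators."""
    rng = np.random.default_rng(seed)
    grab, comp = [], []
    for _ in range(trials):
        x = draw(rng, (periods, n))
        grab.append(x[:, 0].var(ddof=1))              # one sample per period
        comp.append(n * x.mean(axis=1).var(ddof=1))   # composite: n * Var(mean)
    return np.var(grab), np.var(comp)

laplace = lambda rng, s: rng.laplace(0.0, 1.0, s)     # leptokurtic (kurtosis 6)
uniform = lambda rng, s: rng.uniform(-1.0, 1.0, s)    # platykurtic (kurtosis 1.8)

g_lep, c_lep = estimator_variances(laplace)
g_uni, c_uni = estimator_variances(uniform)
# Expected pattern: composite wins for the leptokurtic population,
# grab wins for the platykurtic one.
```

Composite analysis observes only the per-period mean, so the population variance is recovered as n times the variance of the composite measurements; whether that is more or less efficient than grab sampling hinges on the population kurtosis, as the paper shows.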

  3. A strategic assessment of crown fire hazard in Montana: potential effectiveness and costs of hazard reduction treatments.

    Treesearch

    Carl E. Fiedler; Charles E. Keegan; Christopher W. Woodall; Todd A. Morgan

    2004-01-01

    Estimates of crown fire hazard are presented for existing forest conditions in Montana by density class, structural class, forest type, and landownership. Three hazard reduction treatments were evaluated for their effectiveness in treating historically fire-adapted forests (ponderosa pine (Pinus ponderosa Dougl. ex Laws.), Douglas-fir (...

  4. Planar imaging of OH density distributions in a supersonic combustion tunnel

    NASA Technical Reports Server (NTRS)

    Quagliaroli, T. M.; Laufer, G.; Krauss, R. H.; Mcdaniel, J. C., Jr.

    1993-01-01

    Images of absolute OH number density were obtained using planar laser-induced fluorescence (PLIF) in a supersonic H2-air combustion tunnel. A tunable KrF excimer laser was used to excite the Q2(11) ro-vibronic line. Calibration of the PLIF images was obtained by referencing the signal measured in the flame to that obtained by the excitation of OH produced by thermal dissociation of H2O in an atmospheric furnace. Measurement errors due to uncertainty in internal furnace atmospheric conditions and image temperature correction are estimated.

  5. The Impact of Back-Sputtered Carbon on the Accelerator Grid Wear Rates of the NEXT and NSTAR Ion Thrusters

    NASA Technical Reports Server (NTRS)

    Soulas, George C.

    2013-01-01

    A study was conducted to quantify the impact of back-sputtered carbon on the downstream accelerator grid erosion rates of the NEXT (NASA's Evolutionary Xenon Thruster) Long Duration Test (LDT1). A similar analysis that was conducted for the NSTAR (NASA's Solar Electric Propulsion Technology Applications Readiness Program) Life Demonstration Test (LDT2) was used as a foundation for the analysis developed herein. A new carbon surface coverage model was developed that accounted for multiple carbon adlayers before complete surface coverage is achieved. The resulting model requires knowledge of more model inputs, so they were conservatively estimated using the results of past thin film sputtering studies and particle reflection predictions. In addition, accelerator current densities across the grid were rigorously determined using an ion optics code to determine accelerator current distributions and an algorithm to determine beam current densities along a grid using downstream measurements. The improved analysis was applied to the NSTAR test results for evaluation. The improved analysis demonstrated that the impact of back-sputtered carbon on pit and groove wear rate for the NSTAR LDT2 was negligible throughout most of the eroded grid radius. The improved analysis also predicted the accelerator current density for transition from net erosion to net deposition considerably more accurately than the original analysis. The improved analysis was used to estimate the impact of back-sputtered carbon on the accelerator grid pit and groove wear rate of the NEXT Long Duration Test (LDT1). Unlike the NSTAR analysis, the NEXT analysis was more challenging because the thruster was operated for extended durations at various operating conditions and was unavailable for measurements because the test is ongoing. As a result, the NEXT LDT1 estimates presented herein are considered preliminary until the results of future posttest analyses are incorporated.
The worst-case impact of carbon back-sputtering was determined to be the full power operating condition, but the maximum impact of back-sputtered carbon was only a four percent reduction in wear rate. As a result, back-sputtered carbon is estimated to have an insignificant impact on the first failure mode of the NEXT LDT at all operating conditions.

  6. Breast density estimation from high spectral and spatial resolution MRI

    PubMed Central

    Li, Hui; Weiss, William A.; Medved, Milica; Abe, Hiroyuki; Newstead, Gillian M.; Karczmar, Gregory S.; Giger, Maryellen L.

    2016-01-01

    A three-dimensional breast density estimation method is presented for high spectral and spatial resolution (HiSS) MR imaging. Twenty-two patients were recruited (under an Institutional Review Board-approved, Health Insurance Portability and Accountability Act-compliant protocol) for high-risk breast cancer screening. Each patient received standard-of-care clinical digital x-ray mammograms and MR scans, as well as HiSS scans. The algorithm for breast density estimation includes breast mask generation, breast skin removal, and breast percentage density calculation. The inter- and intra-user variabilities of the HiSS-based density estimation were determined using correlation analysis and limits of agreement. Correlation analysis was also performed between the HiSS-based density estimation and radiologists’ Breast Imaging-Reporting and Data System (BI-RADS) density ratings. A correlation coefficient of 0.91 (p<0.0001) was obtained between left and right breast density estimations. An interclass correlation coefficient of 0.99 (p<0.0001) indicated high reliability for the inter-user variability of the HiSS-based breast density estimations. A moderate correlation coefficient of 0.55 (p=0.0076) was observed between HiSS-based breast density estimations and radiologists’ BI-RADS ratings. In summary, an objective density estimation method using HiSS spectral data from breast MRI was developed. The high reproducibility with low inter- and intra-user variabilities shown in this preliminary study suggests that such a HiSS-based density metric may be potentially beneficial in programs requiring breast density, such as breast cancer risk assessment and monitoring effects of therapy. PMID:28042590
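The final percentage-density step reduces to a voxel count inside the breast mask. A toy sketch with a synthetic volume and a simple threshold standing in for the actual fibroglandular segmentation (mask generation and skin removal are not shown):

```python
import numpy as np

rng = np.random.default_rng(4)
volume = rng.uniform(0.0, 1.0, size=(32, 64, 64))    # fake MR-derived volume
breast_mask = np.zeros(volume.shape, dtype=bool)
breast_mask[:, 16:48, 16:48] = True                  # crude breast region

dense = (volume > 0.7) & breast_mask                 # stand-in dense-tissue map
percent_density = 100.0 * dense.sum() / breast_mask.sum()
```

Restricting both numerator and denominator to the mask is what makes the metric a percentage of breast tissue rather than of the whole field of view.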

  7. Electron diffusion deduced from eiscat

    NASA Astrophysics Data System (ADS)

    Roettger, J.; Fukao, S.

    The EISCAT Svalbard Radar (ESR) operates on 500 MHz; collocated with it is the SOUSY Svalbard Radar (SSR), which operates on 53.5 MHz. We have used both radars during Polar Mesosphere Summer Echoes (PMSE) coherent scatter conditions, where the ESR can also detect incoherent scatter and thus allows us to estimate the electron density. We describe observations during two observing periods in summer 1999 and 2000. Well-calibrated signal power was obtained with both radars, from which we deduced the radar reflectivity. Estimating the turbulence dissipation rate from the narrow-beam observations of PMSE with the ESR, and using the estimate of the electron density and the radar reflectivity at both frequencies, we can obtain estimates of the Schmidt number by comparing our observational results with the model of Cho and Kelley (1993). Schmidt numbers of at least 100 are necessary to obtain the measured radar reflectivities, which basically supports the model of Cho and Kelley claiming that the inertial-viscous subrange in the electron gas can extend down to small scales of some ten centimeters (namely, the Bragg scale of the ESR).

  8. Lake whitefish diet, condition, and energy density in Lake Champlain and the lower four Great Lakes following dreissenid invasions

    USGS Publications Warehouse

    Herbst, Seth J.; Marsden, J. Ellen; Lantry, Brian F.

    2013-01-01

    Lake Whitefish Coregonus clupeaformis support some of the most valuable commercial freshwater fisheries in North America. Recent growth and condition decreases in Lake Whitefish populations in the Great Lakes have been attributed to the invasion of the dreissenid mussels, zebra mussels Dreissena polymorpha and quagga mussels D. bugensis, and the subsequent collapse of the amphipod, Diporeia, a once-abundant high energy prey source. Since 1993, Lake Champlain has also experienced the invasion and proliferation of zebra mussels, but in contrast to the Great Lakes, Diporeia were not historically abundant. We compared the diet, condition, and energy density of Lake Whitefish from Lake Champlain after the dreissenid mussel invasion to values for those of Lake Whitefish from Lakes Michigan, Huron, Erie, and Ontario. Lake Whitefish were collected using gill nets and bottom trawls, and their diets were quantified seasonally. Condition was estimated using Fulton's condition factor (K) and by determining energy density. In contrast to Lake Whitefish from some of the Great Lakes, those from Lake Champlain did not show a dietary shift towards dreissenid mussels, but instead fed primarily on fish eggs in spring, Mysis diluviana in summer, and gastropods and sphaeriids in fall and winter. Along with these dietary differences, the condition and energy density of Lake Whitefish from Lake Champlain were high compared with those of Lake Whitefish from Lakes Michigan, Huron, and Ontario after the dreissenid invasion, and were similar to Lake Whitefish from Lake Erie; fish from Lakes Michigan, Huron, and Ontario consumed dreissenids, whereas fish from Lake Erie did not. Our comparisons of Lake Whitefish populations in Lake Champlain to those in the Great Lakes indicate that diet and condition of Lake Champlain Lake Whitefish were not negatively affected by the dreissenid mussel invasion.

  9. An algorithm to estimate building heights from Google street-view imagery using single view metrology across a representational state transfer system

    NASA Astrophysics Data System (ADS)

    Díaz, Elkin; Arguello, Henry

    2016-05-01

    Urban ecosystem studies require monitoring, controlling and planning to analyze building density, urban density, urban planning, atmospheric modeling and land use. In urban planning, there are many methods for building height estimation using optical remote sensing images. These methods, however, depend heavily on sun illumination and cloud-free weather. In contrast, high resolution synthetic aperture radar provides images independent of daytime and weather conditions, although these images require special hardware and expensive acquisition. Most of the biggest cities around the world have been photographed by Google Street View under different conditions. Thus, thousands of images from the principal streets of a city can be accessed online. The availability of this and similar rich city imagery, such as StreetSide from Microsoft, represents huge opportunities in computer vision because these images can be used as input in many applications such as 3D modeling, segmentation, recognition and stereo correspondence. This paper proposes a novel algorithm to estimate building heights using public Google Street-View imagery. The objective of this work is to obtain thousands of geo-referenced images from Google Street-View using a representational state transfer system, and to estimate average building heights using single view metrology. Furthermore, the resulting measurements and image metadata are used to derive a layer of heights in a Google map available online. The experimental results show that the proposed algorithm can estimate an accurate average building height map of thousands of images using Google Street-View imagery of any city.
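Under strong simplifying assumptions (a vertical building on flat ground, a known camera height, and known image rows of the horizon, building base, and building top), a single-view-metrology height estimate reduces to a ratio of image distances. All numeric values below are made up for illustration:

```python
def building_height(cam_height_m, y_horizon, y_base, y_top):
    """Height of a vertical building from image rows (pixels, increasing
    downward) of the horizon, the building base, and the building top."""
    if y_base <= y_horizon:
        raise ValueError("building base must appear below the horizon")
    return cam_height_m * (y_base - y_top) / (y_base - y_horizon)

# Street-View-like camera assumed ~2.5 m above ground
h = building_height(cam_height_m=2.5, y_horizon=400.0, y_base=460.0, y_top=100.0)
```

A quick sanity check of the formula: if the building top sits exactly on the horizon (y_top = y_horizon), the estimate equals the camera height.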

  10. Domain wall suppression in trapped mixtures of Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Pepe, Francesco V.; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio

    2012-08-01

    The ground-state energy of a binary mixture of Bose-Einstein condensates can be estimated for large atomic samples by making use of suitably regularized Thomas-Fermi density profiles. By exploiting a variational method on the trial densities, the energy can be computed while explicitly taking into account the normalization condition. This yields analytical results and provides the basis for further improvement of the approximation. As a case study, we consider a binary mixture of 87Rb atoms in two different hyperfine states in a double-well potential and discuss the energy crossing between density profiles with different numbers of domain walls as the number of particles and the interspecies interaction vary.

  11. An ab-initio investigation on SrLa intermetallic compound

    NASA Astrophysics Data System (ADS)

    Kumar, S. Ramesh; Jaiganesh, G.; Jayalakshmi, V.

    2018-05-01

    The electronic, elastic and thermodynamic properties of CsCl-type SrLa are investigated through density functional theory. The energy-volume relation for this compound has been obtained. The band structure, density of states and charge density in the (110) plane are also examined. The elastic constants (C11, C12 and C44) of SrLa are computed; then, using these elastic constants, the bulk modulus, shear modulus, Young's modulus and Poisson's ratio are derived. The calculated results show that CsCl-type SrLa is ductile at ambient conditions. Thermodynamic quantities such as free energy, entropy and heat capacity as functions of temperature are estimated and the results obtained are discussed.

  12. New bioreactor for in situ simultaneous measurement of bioluminescence and cell density

    NASA Astrophysics Data System (ADS)

    Picart, Pascal; Bendriaa, Loubna; Daniel, Philippe; Horry, Habib; Durand, Marie-José; Jouvanneau, Laurent; Thouand, Gérald

    2004-03-01

    This article presents a new device devoted to the simultaneous measurement of bioluminescence and optical density of a bioluminescent bacterial culture. It features an optoelectronic bioreactor with a fully autoclavable module, in which the bioluminescent bacteria are cultivated, a modulated laser diode dedicated to optical density measurement, and a detection head for the acquisition of both bioluminescence and optical density signals. Light is detected through a bifurcated fiber bundle. This setup allows the simultaneous estimation of the bioluminescence and the cell density of the culture medium without any sampling. The bioluminescence is measured through a highly sensitive photomultiplier unit which has been photometrically calibrated to allow light flux measurements. This was achieved by considering the bioluminescence spectrum and the full optical transmission of the device. The instrument makes it possible to measure a very weak light flux of only a few pW. The optical density is determined through the laser diode and a photodiode using numerical synchronous detection, which is based on the power spectrum density of the recorded signal. The detection was calibrated to measure optical density up to 2.5. The device was validated using the Vibrio fischeri bacterium, which was cultivated under continuous culture conditions. A very good correlation between manual and automatic measurements processed with this instrument has been demonstrated. Furthermore, the optoelectronic bioreactor enables determination of the luminance of the bioluminescent bacteria, which is estimated to be 6×10⁻⁵ W sr⁻¹ m⁻² at an optical density of 0.3. Experimental results are presented and discussed.

  13. Time-Average Measurement of Velocity, Density, Temperature, and Turbulence Using Molecular Rayleigh Scattering

    NASA Technical Reports Server (NTRS)

    Mielke, Amy F.; Seasholtz, Richard G.; Elam, Krisie A.; Panda, Jayanta

    2004-01-01

    Measurement of time-averaged velocity, density, temperature, and turbulence in gas flows using a nonintrusive, point-wise measurement technique based on molecular Rayleigh scattering is discussed. Subsonic and supersonic flows in a 25.4-mm diameter free jet facility were studied. The developed instrumentation utilizes a Fabry-Perot interferometer to spectrally resolve molecularly scattered light from a laser beam passed through a gas flow. The spectrum of the scattered light contains information about the velocity, density, and temperature of the gas. The technique uses a slow-scan, low-noise, 16-bit CCD camera to record images of the fringes formed by Rayleigh scattered light passing through the interferometer. A kinetic theory model of the Rayleigh scattered light is used in a nonlinear least squares fitting routine to estimate the unknown parameters from the fringe images. Extracting turbulence information from the fringe image data proved to be a challenge, since the fringe is broadened not only by turbulence but also by thermal fluctuations and aperture effects from collecting light over a range of scattering angles. Figure 1 illustrates broadening of a Rayleigh spectrum typical of the flow conditions observed in this work due to aperture effects and turbulence, for a scattering angle χ_s of 90 degrees, f/3.67 collection optics, a mean flow velocity u_k of 300 m/s, and turbulent velocity fluctuations σ_uk of 55 m/s. The greatest difficulty in processing the image data was decoupling the thermal and turbulence broadening in the spectrum. To aid in this endeavor, it was necessary to seed the ambient air with smoke and dust particulates, taking advantage of the turbulence broadening in the Mie scattering component of the spectrum of the collected light (not shown in the figure). The primary jet flow was not seeded due to the difficulty of the task.
For measurement points lacking particles, velocity, density, and temperature information could be reliably recovered; however, the turbulence estimates contained significant uncertainty. Resulting flow parameter estimates are presented for surveys of Mach 0.6, 0.95, and 1.4 jet flows. Velocity, density, and temperature were determined with accuracies of 5 m/s, 1.5%, and 1%, respectively, in flows with no particles present, and with accuracies of 5 m/s, 1-4%, and 2% in flows with particles. Comparison with hot-wire data for the Mach 0.6 condition demonstrated turbulence estimates with accuracies of about 5 m/s outside the jet core, where Mie scattering from dust/smoke particulates aided in the estimation of turbulence. Turbulence estimates could not be recovered with any significant accuracy at measurement points where no particles were present.

  14. Development of a spatially distributed model of fish population density for habitat assessment of rivers

    NASA Astrophysics Data System (ADS)

    Sui, Pengzhe; Iwasaki, Akito; Ryo, Masahiro; Saavedra, Oliver; Yoshimura, Chihiro

    2013-04-01

    Flow conditions play an important role in sustaining the biodiversity of river ecosystems. However, their relations to freshwater fishes, especially to fish population density, have not been clearly described. This study therefore aimed to propose a new methodology to quantitatively link habitat conditions, including flow conditions and other physical conditions, to the population density of fish species. We developed a basin-scale fish distribution model by integrating the concept of habitat suitability assessment with a distributed hydrological model (DHM) in order to estimate fish population density with particular attention to flow conditions. A generalized linear model (GLM) was employed to evaluate the relationship between the population density of fish species and major environmental factors. The target basin was the Sagami River in central Japan, where the river reach was divided into 10 sections by the estuary, confluences of tributaries, and river-crossing structures (dams, weirs). The DHM was employed to simulate river discharge from 1998 to 2005, which was used to calculate 10 flow indices including mean discharge, 25th and 75th percentile discharge, duration of low and high flows, and number of floods. In addition, 5 water quality parameters and 13 other physical conditions (such as basin area, river width, mean diameter of riverbed material, and number of river-crossing structures upstream and downstream) of each river section were considered as environmental variables. For the Sagami River, 10 habitat variables among them were then selected based on their correlations to avoid multicollinearity. Finally, the best GLM was developed for each species based on Akaike's information criterion. As a result, the population densities of 16 fish species in the Sagami River were modelled, and the correlation coefficients between observed and calculated population densities exceeded 0.70 for 10 species. The key habitat factors for population density varied among fish species.
Minimum discharge (MID) was positively correlated with population density for 9 of the 16 fish species. For duration of high and low flows (DHF and DLF), longer DHF/DLF corresponded to lower population density for 7/6 fish species, respectively, such as Rhinogobius kurodai and Plecoglossus altivelis altivelis. Among physical habitat conditions, the sinuosity index (SI, the ratio between actual river section length and straight-line length) appears to be the most important parameter for fish population density in the Sagami River basin, since it affected 12 of the 16 fish species, followed by mean longitudinal slope (S) and number of downstream dams (NLD). These results demonstrate the ability of the fish distribution model to provide quantitative information on the flow conditions required to maintain fish populations, which enables the evaluation and projection of the ecological consequences of water resource management policies such as flood management and water withdrawal.
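    The AIC-based model selection described above can be sketched in a few lines: fit candidate Gaussian linear models by least squares and retain the one with the lowest AIC. The data below are simulated for illustration only and are not from the Sagami River study.

```python
import math, random

def aic_gaussian(rss, n, n_params):
    # AIC for a Gaussian linear model: n*ln(RSS/n) + 2k,
    # where k counts the regression coefficients plus the error variance
    return n * math.log(rss / n) + 2 * n_params

def rss_simple(x, y):
    # closed-form least squares for y = a + b*x; returns the residual sum of squares
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

rng = random.Random(42)
flow = [rng.uniform(1.0, 10.0) for _ in range(50)]        # e.g. a minimum-discharge index
dens = [0.5 + 0.8 * q + rng.gauss(0, 0.5) for q in flow]  # hypothetical density response

aic_flow = aic_gaussian(rss_simple(flow, dens), 50, 3)    # intercept, slope, sigma
rss_null = sum((d - sum(dens) / len(dens)) ** 2 for d in dens)
aic_null = aic_gaussian(rss_null, 50, 2)                  # intercept, sigma
best = "flow model" if aic_flow < aic_null else "null model"
```

    Extending this comparison over all candidate predictor subsets, and taking the minimum-AIC fit per species, mirrors the selection procedure the study describes.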

  15. Ant-inspired density estimation via random walks.

    PubMed

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in a small number of steps by measuring their rates of encounter with other agents. Despite the dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
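    The encounter-rate estimator can be sketched as a simulation (illustrative parameters, not the paper's analysis): agents take simultaneous random-walk steps on a torus grid, and the average number of co-located agents per agent-step approximates the true density (n - 1)/cells.

```python
import random
from collections import Counter

def estimate_density(grid=20, n_agents=80, steps=500, seed=0):
    """Average co-occupancy per agent-step on a toroidal grid."""
    rng = random.Random(seed)
    pos = [(rng.randrange(grid), rng.randrange(grid)) for _ in range(n_agents)]
    encounters = 0
    for _ in range(steps):
        new_pos = []
        for x, y in pos:
            # each agent takes one lattice step, with torus wrap-around
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            new_pos.append(((x + dx) % grid, (y + dy) % grid))
        pos = new_pos
        # a cell holding k agents contributes k*(k-1) ordered encounters
        for k in Counter(pos).values():
            encounters += k * (k - 1)
    return encounters / (n_agents * steps)
```

    With 80 agents on a 20×20 grid, the per-agent encounter rate concentrates near (n - 1)/cells = 79/400 ≈ 0.1975; the dependence between successive collisions slows, but does not prevent, this convergence.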

  16. A framework for estimating the determinants of spatial and temporal variation in vital rates and inferring the occurrence of unobserved extreme events

    PubMed Central

    Jesenšek, Dušan; Crivelli, Alain J.

    2018-01-01

    We develop a general framework that combines long-term tag–recapture data and powerful statistical and modelling techniques to investigate how population, environmental and climate factors determine variation in vital rates and population dynamics in an animal species, using as a case study the population of brown trout living in Upper Volaja (Western Slovenia). This population has been monitored since 2004. Upper Volaja is a sink, receiving individuals from a source population living above a waterfall. We estimate the numerical contribution of the source population on the sink population and test the effects of temperature, population density and extreme events on variation in vital rates among 2647 individually tagged brown trout. We found that individuals dispersing downstream from the source population help maintain high population densities in the sink population despite poor recruitment. The best model of survival for individuals older than juveniles includes additive effects of birth cohort and sampling occasion. Fast growth of older cohorts and higher population densities in 2004–2005 suggest very low population densities in the late 1990s, which we hypothesize were caused by a flash flood that strongly reduced population size and created the habitat conditions for faster individual growth and transient higher population densities after the extreme event. PMID:29657746

  17. A framework for estimating the determinants of spatial and temporal variation in vital rates and inferring the occurrence of unobserved extreme events.

    PubMed

    Vincenzi, Simone; Jesenšek, Dušan; Crivelli, Alain J

    2018-03-01

    We develop a general framework that combines long-term tag-recapture data and powerful statistical and modelling techniques to investigate how population, environmental and climate factors determine variation in vital rates and population dynamics in an animal species, using as a case study the population of brown trout living in Upper Volaja (Western Slovenia). This population has been monitored since 2004. Upper Volaja is a sink, receiving individuals from a source population living above a waterfall. We estimate the numerical contribution of the source population on the sink population and test the effects of temperature, population density and extreme events on variation in vital rates among 2647 individually tagged brown trout. We found that individuals dispersing downstream from the source population help maintain high population densities in the sink population despite poor recruitment. The best model of survival for individuals older than juveniles includes additive effects of birth cohort and sampling occasion. Fast growth of older cohorts and higher population densities in 2004-2005 suggest very low population densities in the late 1990s, which we hypothesize were caused by a flash flood that strongly reduced population size and created the habitat conditions for faster individual growth and transient higher population densities after the extreme event.

  18. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. 
PMID:25203681

  19. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    PubMed

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.

  20. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE PAGES

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...

    2017-08-25

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence the accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess the effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate the density of a globally widespread species. We find that animal scale of movement had the greatest impact on the accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of the area sampled or the use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that the area covered and the spacing of detectors (e.g., cameras, traps) must reflect the movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  1. Effects of scale of movement, detection probability, and true population density on common methods of estimating population density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.

    Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence the accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess the effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate the density of a globally widespread species. We find that animal scale of movement had the greatest impact on the accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of the area sampled or the use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that the area covered and the spacing of detectors (e.g., cameras, traps) must reflect the movement characteristics of the focal species to reduce bias in estimates of movement and thus density.

  2. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error.

    PubMed

    Carroll, Raymond J; Delaigle, Aurore; Hall, Peter

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y, is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However, in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  3. Bedforms formed by experimental supercritical density flows

    NASA Astrophysics Data System (ADS)

    Naruse, Hajime; Izumi, Norihiro; Yokokawa, Miwa; Muto, Tetsuji

    2014-05-01

    This study reveals the characteristics and formative conditions of bedforms produced by saline density flows under supercritical flow conditions, focusing especially on the mechanism of plane bed formation. The motion of the sediment particles forming the bedforms was resolved by high-speed cameras (1000 frames per second). Experimental density flows were produced from mixtures of salt water (1.01-1.04 in density) and plastic particles (1.5 in specific density, 140 or 240 µm in diameter). The salt water and plastic particles are analogue materials for the muddy water and sand particles of turbidity currents, respectively. An acrylic flume (4.0 m long, 2.0 cm wide and 0.5 m deep) was submerged in an experimental tank (6.0 m long, 1.8 m wide and 1.2 m deep) filled with clear water. Bedform features were observed once the bed state in the flume reached an equilibrium condition. The experimental conditions ranged over 1.5-4.2 in densimetric Froude number and 0.2-0.8 in Shields dimensionless stress. We report two major discoveries from the flume experiments: (1) plane bed under Froude-supercritical flows, and (2) the geometrical characteristics of cyclic steps formed by density flows. (1) Plane bed was formed under supercritical flow conditions. In previous studies, plane bed had been known to form under subcritical unidirectional flows (ca. 0.8 in Froude number). However, this study implies that plane bed can also form under supercritical conditions with high Shields dimensionless stress (>0.4) and very high Froude number (>4.0). This discovery suggests that previous estimates of the paleo-hydraulic conditions of parallel lamination in turbidites should be reconsidered. Previous experimental studies and the high-speed camera data suggest that the region of plane bed formation coincides with the region of sheet flow development.
Particle transport in the sheet flow (a thick bedload layer) modifies the flow shear stress profile, which may be related to the formation of the plane bed. (2) This study also revealed the geometrical characteristics of cyclic steps. A cyclic step is a type of bedform frequently observed on the flanks of submarine levees. This study showed that cyclic steps formed by density flows differ in geometry from those formed by open channel flows: cyclic steps formed by open channel flows generally have an asymmetrical geometry with a short lee side, whereas cyclic steps formed by density flows are relatively symmetrical and vary markedly in morphology depending on flow conditions.

  4. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. Len Thomas & Danielle Harris, Centre… …to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope…

  5. Towards Determining the Optimal Density of Groundwater Observation Networks under Uncertainty

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Kaleris, Vassilios; Kokosi, Angeliki; Mamounakis, Georgios

    2016-04-01

    Time series of groundwater level constitute one of the main sources of information when studying the availability of groundwater reserves, at a regional level, under changing climatic conditions. To that end, one needs groundwater observation networks that can provide sufficient information to estimate the hydraulic head at unobserved locations. The density of such networks is largely influenced by the structure of the aquifer, and in particular by the spatial distribution of hydraulic conductivity (i.e. layering), dependencies in the transition rates between different geologic formations, juxtapositional tendencies, etc. In this work, we: 1) use the concept of transition probabilities embedded in a Markov chain setting to conditionally simulate synthetic aquifer structures representative of geologic formations commonly found in the literature (see e.g. Hoeksema and Kitanidis, 1985), and 2) study how the density of observation wells affects the estimation accuracy of hydraulic heads at unobserved locations. The obtained results are promising, pointing towards the direction of establishing design criteria based on the statistical structure of the aquifer, such as the level of dependence in the transition rates of observed lithologies. Reference: Hoeksema, R.J. and P.K. Kitanidis (1985) Analysis of spatial structure of properties of selected aquifers, Water Resources Research, 21(4), 563-572. Acknowledgments: This work is sponsored by the Onassis Foundation under the "Special Grant and Support Program for Scholars' Association Members".
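    The first step, simulation via transition probabilities in a Markov chain setting, can be illustrated with a one-dimensional facies sequence (the transition matrix below is a made-up example, not taken from Hoeksema and Kitanidis, 1985):

```python
import random

# Illustrative vertical transition probabilities between three lithologies;
# each row must sum to 1.
TRANSITIONS = {
    "sand": {"sand": 0.80, "silt": 0.15, "clay": 0.05},
    "silt": {"sand": 0.20, "silt": 0.60, "clay": 0.20},
    "clay": {"sand": 0.05, "silt": 0.25, "clay": 0.70},
}

def simulate_facies(n_cells, start="sand", seed=1):
    """Generate a facies sequence cell by cell from the Markov chain."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(n_cells - 1):
        states, weights = zip(*TRANSITIONS[seq[-1]].items())
        seq.append(rng.choices(states, weights=weights)[0])
    return seq
```

    Repeating such simulations, conditioned on the lithologies observed at wells, yields an ensemble of synthetic aquifer structures over which the effect of observation-well density on head estimation can be studied.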

  6. Nonlinear system theory: another look at dependence.

    PubMed

    Wu, Wei Biao

    2005-10-04

    Based on the nonlinear system theory, we introduce previously undescribed dependence measures for stationary causal processes. Our physical and predictive dependence measures quantify the degree of dependence of outputs on inputs in physical systems. The proposed dependence measures provide a natural framework for a limit theory for stationary processes. In particular, under conditions with quite simple forms, we present limit theorems for partial sums, empirical processes, and kernel density estimates. The conditions are mild and easily verifiable because they are directly related to the data-generating mechanisms.

  7. A Posteriori Quantification of Rate-Controlling Effects from High-Intensity Turbulence-Flame Interactions Using 4D Measurements

    DTIC Science & Technology

    2016-11-22

    …compact at all conditions tested, as indicated by the overlap of OH and CH2O distributions. 5. We developed analytical techniques for pseudo-Lagrangian… …condition in a constant-density flow requires that the flow divergence is zero, ∇ · u = 0. Three smoothing schemes were examined, a moving average (i.e.…

  8. Demonstration of line transect methodologies to estimate urban gray squirrel density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hein, E.W.

    1997-11-01

    Because studies estimating the density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.
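    The core calculation that the program DISTANCE automates can be sketched under a half-normal detection function (the function and its inputs below are illustrative, not the Clemson analysis): the detection scale sigma is estimated from the perpendicular distances, converted to an effective strip width (ESW), and density follows as n / (2·L·ESW).

```python
import math

def halfnormal_density(distances_m, transect_length_m):
    """Distance-sampling density estimate with a half-normal detection function.
    MLE of the scale: sigma^2 = sum(d^2)/n; effective strip width
    ESW = sigma*sqrt(pi/2); density = n / (2 * L * ESW), in animals per m^2."""
    n = len(distances_m)
    sigma = math.sqrt(sum(d * d for d in distances_m) / n)
    esw = sigma * math.sqrt(math.pi / 2.0)
    return n / (2.0 * transect_length_m * esw)
```

    For 100 detections with sigma near 10 m along 1000 m of transect, ESW is about 12.5 m and the estimate is roughly 0.004 animals/m² (40/ha); a real survey would also check the distance-sampling assumptions the abstract mentions.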

  9. Third generation snacks manufactured from orange by-products: physicochemical and nutritional characterization.

    PubMed

    Tovar-Jiménez, Xochitl; Caro-Corrales, José; Gómez-Aldapa, Carlos A; Zazueta-Morales, José; Limón-Valenzuela, Víctor; Castro-Rosas, Javier; Hernández-Ávila, Juan; Aguilar-Palazuelos, Ernesto

    2015-10-01

    A mixture of orange vesicle flour, commercial nixtamalized corn flour and potato starch was extruded using a Brabender Laboratory single screw extruder (2:1 L/D). The resulting pellets were expanded by microwaves. Expansion index, bulk density, penetration force, carotenoid content, and dietary fiber were measured for this third-generation snack and optimum production conditions were estimated. Response surface methodology was applied using a central composite rotatable experimental design to evaluate the effect of moisture content and extrusion temperature. Temperature mainly affected the expansion index, bulk density and penetration force, while carotenoids content was affected by moisture content. Surface overlap was used to identify optimum processing conditions: temperature: 128-130 °C; moisture content: 22-24 %. Insoluble dietary fiber decreased and soluble dietary fiber increased after extrusion.

  10. Uncertainty Management for Diagnostics and Prognostics of Batteries using Bayesian Techniques

    NASA Technical Reports Server (NTRS)

Saha, Bhaskar; Goebel, Kai

    2007-01-01

Uncertainty management has always been the key hurdle faced by diagnostics and prognostics algorithms. A Bayesian treatment of this problem provides an elegant and theoretically sound approach to the modern Condition-Based Maintenance (CBM)/Prognostic Health Management (PHM) paradigm. The application of Bayesian techniques to regression and classification in the form of the Relevance Vector Machine (RVM), and to state estimation in the form of Particle Filters (PF), provides a powerful tool to integrate the diagnosis and prognosis of battery health. The RVM, which is a Bayesian treatment of the Support Vector Machine (SVM), is used for model identification, while the PF framework uses the learnt model, statistical estimates of noise, and anticipated operational conditions to provide estimates of remaining useful life (RUL) in the form of a probability density function (PDF). This type of prognostics adds significant value to the management of any operation involving electrical systems.

  11. On the nature of the variability in the Martian thermospheric mass density: Results from the Mars Global Surveyor Electron Reflectometer

    NASA Astrophysics Data System (ADS)

    England, S.; Lillis, R. J.

    2011-12-01

Knowledge of Mars' thermospheric mass density (~120-200 km altitude) is important for understanding the current state and evolution of the Martian atmosphere and for spacecraft such as the upcoming MAVEN mission that will fly through this region every orbit. Global-scale atmospheric models have been shown thus far to do an inconsistent job of matching mass density observations at these altitudes, especially on the nightside. Thus there is a clear need for a data-driven estimate of the mass density in this region. Given the wide range of conditions and locations over which these must be defined, the dataset of thermospheric mass densities derived from energy and angular distributions of super-thermal electrons measured by the MAG/ER experiment on Mars Global Surveyor, spanning 4 full Martian years, is an extremely valuable resource that can be used to enhance our prediction of these densities beyond what is given by such global-scale models. Here we present an empirical model of the thermospheric density structure based on the MAG/ER dataset. Using this new model, we assess the global-scale response of the thermosphere to dust storms in the lower atmosphere and show that this varies with latitude. Further, we examine the short- and longer-term variability of the thermospheric density and show that it exhibits a complex behavior with latitude and season that is indicative of both atmospheric conditions at lower altitudes and possible lower atmosphere wave sources.

  12. A model of heat transfer in sapwood and implications for sap flux density measurements using thermal dissipation probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wullschleger, Stan D; Childs, Kenneth W; King, Anthony Wayne

    2011-01-01

A variety of thermal approaches are used to estimate sap flux density in stems of woody plants. Models have proven valuable tools for interpreting the behavior of heat pulse, heat balance, and heat field deformation techniques, but have seldom been used to describe heat transfer dynamics for the heat dissipation method. Therefore, to better understand the behavior of heat dissipation probes, a model was developed that takes into account the thermal properties of wood, the physical dimensions and thermal characteristics of the probes, and the conductive and convective heat transfer that occurs due to water flow in the sapwood. Probes were simulated as aluminum tubes 20 mm in length and 2 mm in diameter, whereas sapwood, heartwood, and bark each had a density and water fraction that determined their thermal properties. Base simulations assumed a constant sap flux density with sapwood depth and no wounding or physical disruption of xylem beyond the 2 mm diameter hole drilled for probe installation. Simulations across a range of sap flux densities showed that the dimensionless quantity k, defined as (ΔTm - ΔT)/ΔT, where ΔTm is the temperature differential (ΔT) between the heated and unheated probe under zero-flow conditions, was dependent on the thermal conductivity of the sapwood. The relationship between sap flux density and k was also sensitive to radial gradients in sap flux density and to xylem disruption near the probe. Monte Carlo analysis in which 1000 simulations were conducted while simultaneously varying thermal conductivity and wound diameter revealed that sap flux density and k showed considerable departure from the original calibration equation used with this technique. The departure was greatest for abrupt patterns of radial variation typical of ring-porous species.
Depending on the specific combination of thermal conductivity and wound diameter, use of the original calibration equation resulted in an 81% under- to 48% over-estimation of sap flux density at modest flux rates. Future studies should verify these simulations and assess their utility in estimating sap flux density for this widely used technique.
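
    The calibration under scrutiny can be written out explicitly. The coefficients below are Granier's original empirical values, which the simulations above suggest may not hold under wounding or radial flux gradients; the function name is illustrative:

```python
def granier_sap_flux_density(dT, dT_max):
    """Convert a thermal-dissipation probe reading to sap flux density
    using Granier's original empirical calibration.

    dT     : temperature difference between heated and reference
             probes under flow (deg C)
    dT_max : temperature difference under zero-flow conditions (deg C)
    Returns sap flux density in m^3 m^-2 s^-1.
    """
    # Dimensionless flow index k = (dT_max - dT) / dT
    k = (dT_max - dT) / dT
    # Original Granier calibration: u = 119e-6 * k^1.231
    return 119e-6 * k ** 1.231
```

    Zero flow (dT equal to dT_max) gives k = 0 and hence zero flux, matching the definition of ΔTm.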

  13. Effects of LiDAR point density and landscape context on estimates of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, Kunwar K.; Chen, Gang; McCarter, James B.; Meentemeyer, Ross K.

    2015-03-01

    Light Detection and Ranging (LiDAR) data is being increasingly used as an effective alternative to conventional optical remote sensing to accurately estimate aboveground forest biomass ranging from individual tree to stand levels. Recent advancements in LiDAR technology have resulted in higher point densities and improved data accuracies accompanied by challenges for procuring and processing voluminous LiDAR data for large-area assessments. Reducing point density lowers data acquisition costs and overcomes computational challenges for large-area forest assessments. However, how does lower point density impact the accuracy of biomass estimation in forests containing a great level of anthropogenic disturbance? We evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing region of Charlotte, North Carolina, USA. We used multiple linear regression to establish a statistical relationship between field-measured biomass and predictor variables derived from LiDAR data with varying densities. We compared the estimation accuracies between a general Urban Forest type and three Forest Type models (evergreen, deciduous, and mixed) and quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest model, using adjusted R2, was consistent across the reduced point densities, with the highest difference of 11.5% between the 100% and 1% point densities. The combined estimates of Forest Type biomass models outperformed the Urban Forest models at the representative point densities (100% and 40%). The Urban Forest biomass model with development density of 125 m radius produced the highest adjusted R2 (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, highlighting a distance impact of development on biomass estimation. 
Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest assessment without compromising the accuracy of biomass estimates, and these estimates can be further improved using development density.

  14. Ant-inspired density estimation via random walks

    PubMed Central

    Musco, Cameron; Su, Hsin-Hao

    2017-01-01

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks. PMID:28928146
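
    The encounter-rate idea is easy to simulate. This toy sketch (illustrative grid size, agent count, and step count, not the paper's analysis) shows a per-agent, per-step encounter rate approximating the true density m/n²:

```python
import random

def encounter_rate_density(n, m, steps, seed=0):
    """m agents random-walk on an n x n torus; each counts how often
    it shares a cell with another agent. The average per-agent,
    per-step encounter count approximates the density m / n^2.
    """
    rng = random.Random(seed)
    agents = [(rng.randrange(n), rng.randrange(n)) for _ in range(m)]
    encounters = 0
    for _ in range(steps):
        moved = []
        for (x, y) in agents:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            moved.append(((x + dx) % n, (y + dy) % n))
        agents = moved
        occupancy = {}
        for pos in agents:
            occupancy[pos] = occupancy.get(pos, 0) + 1
        # Each agent counts the other agents sharing its cell
        encounters += sum(c * (c - 1) for c in occupancy.values())
    return encounters / (m * steps)
```

    The paper's contribution is showing that, despite the correlations between nearby walkers, this estimate concentrates nearly as fast as independent sampling would.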

  15. Log sampling methods and software for stand and landscape analyses.

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  16. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Treesearch

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  17. Multivariate Epi-splines and Evolving Function Identification Problems

    DTIC Science & Technology

    2015-04-15

such extrinsic information as well as observed function and subgradient values often evolve in applications, we establish conditions under which the...previous study [30] dealt with compact intervals of ℝ. Splines are intimately tied to optimization problems through their variational theory pioneered...approximation. Motivated by applications in curve fitting, regression, probability density estimation, variogram computation, financial curve construction

  18. Estimating juvenile Chinook salmon (Oncorhynchus tshawytscha) abundance from beach seine data collected in the Sacramento–San Joaquin Delta and San Francisco Bay, California

    USGS Publications Warehouse

    Perry, Russell W.; Kirsch, Joseph E.; Hendrix, A. Noble

    2016-06-17

    Resource managers rely on abundance or density metrics derived from beach seine surveys to make vital decisions that affect fish population dynamics and assemblage structure. However, abundance and density metrics may be biased by imperfect capture and lack of geographic closure during sampling. Currently, there is considerable uncertainty about the capture efficiency of juvenile Chinook salmon (Oncorhynchus tshawytscha) by beach seines. Heterogeneity in capture can occur through unrealistic assumptions of closure and from variation in the probability of capture caused by environmental conditions. We evaluated the assumptions of closure and the influence of environmental conditions on capture efficiency and abundance estimates of Chinook salmon from beach seining within the Sacramento–San Joaquin Delta and the San Francisco Bay. Beach seine capture efficiency was measured using a stratified random sampling design combined with open and closed replicate depletion sampling. A total of 56 samples were collected during the spring of 2014. To assess variability in capture probability and the absolute abundance of juvenile Chinook salmon, beach seine capture efficiency data were fitted to the paired depletion design using modified N-mixture models. These models allowed us to explicitly test the closure assumption and estimate environmental effects on the probability of capture. We determined that our updated method allowing for lack of closure between depletion samples drastically outperformed traditional data analysis that assumes closure among replicate samples. The best-fit model (lowest-valued Akaike Information Criterion model) included the probability of fish being available for capture (relaxed closure assumption), capture probability modeled as a function of water velocity and percent coverage of fine sediment, and abundance modeled as a function of sample area, temperature, and water velocity. 
Given that beach seining is a ubiquitous sampling technique for many species, our improved sampling design and analysis could provide significant improvements in density and abundance estimation.
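
    As a simple point of contrast with the modified N-mixture models used in the study, the classical closed-population two-pass removal (depletion) estimator for replicate seine hauls is:

```python
def two_pass_removal(n1, n2):
    """Closed-population two-pass removal estimator: given catches n1
    and n2 from two successive depletion passes with equal capture
    probability, estimate abundance and capture probability.
    Requires n1 > n2.

    Returns (N_hat, p_hat).
    """
    if n1 <= n2:
        raise ValueError("removal estimator needs n1 > n2")
    p_hat = (n1 - n2) / n1          # estimated per-pass capture probability
    N_hat = n1 ** 2 / (n1 - n2)     # estimated abundance
    return N_hat, p_hat
```

    This estimator assumes exactly the closure between passes that the study found to be violated, which is why the open-model formulation outperformed it.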

  19. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities, taking ballistic coefficient estimation results into account. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-00 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-00 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. 
The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.

  20. Stochastic Model of Seasonal Runoff Forecasts

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, Roman; Watada, Leslie M.

    1986-03-01

    Each year the National Weather Service and the Soil Conservation Service issue a monthly sequence of five (or six) categorical forecasts of the seasonal snowmelt runoff volume. To describe uncertainties in these forecasts for the purposes of optimal decision making, a stochastic model is formulated. It is a discrete-time, finite, continuous-space, nonstationary Markov process. Posterior densities of the actual runoff conditional upon a forecast, and transition densities of forecasts are obtained from a Bayesian information processor. Parametric densities are derived for the process with a normal prior density of the runoff and a linear model of the forecast error. The structure of the model and the estimation procedure are motivated by analyses of forecast records from five stations in the Snake River basin, from the period 1971-1983. The advantages of supplementing the current forecasting scheme with a Bayesian analysis are discussed.
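
    A minimal conjugate sketch of such a Bayesian information processor, assuming a normal prior on runoff and a linear-normal forecast-error model (an illustrative parameterization, not the paper's notation):

```python
def runoff_posterior(mu0, s0, a, b, v, forecast):
    """Posterior of actual runoff W given a forecast F, under
    prior W ~ N(mu0, s0^2) and forecast model
    F | W = w ~ N(a*w + b, v^2).

    Returns (posterior mean, posterior standard deviation).
    """
    # Precision-weighted combination of prior and forecast evidence
    prec = 1.0 / s0 ** 2 + a ** 2 / v ** 2
    mean = (mu0 / s0 ** 2 + a * (forecast - b) / v ** 2) / prec
    return mean, prec ** -0.5
```

    With an unbiased forecast (a = 1, b = 0) and equal prior and error variances, the posterior mean is simply the average of the prior mean and the forecast.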

  1. Estimated ground-water discharge by evapotranspiration from Death Valley, California, 1997-2001

    USGS Publications Warehouse

    DeMeo, Guy A.; Laczniak, Randell J.; Boyd, Robert A.; Smith, J. LaRue; Nylund, Walter E.

    2003-01-01

The U.S. Geological Survey, in cooperation with the National Park Service and Inyo County, Calif., collected field data from 1997 through 2001 to accurately estimate the amount of annual ground-water discharge by evapotranspiration (ET) from the floor of Death Valley, California. Multispectral satellite imagery and National Wetlands Inventory data are used to delineate evaporative ground-water discharge areas on the Death Valley floor. These areas are divided into five general units where ground-water discharge from ET is considered to be significant. Based upon similarities in soil type, soil moisture, vegetation type, and vegetation density, the ET units are salt-encrusted playa (21,287 acres), bare-soil playa (75,922 acres), low-density vegetation (6,625 acres), moderate-density vegetation (5,019 acres), and high-density vegetation (1,522 acres). Annual ET was computed for ET units with micrometeorological data continuously measured at six instrumented sites. Total ET was determined at sites that were chosen for their soil- and vegetated-surface conditions, which include salt-encrusted playa (extensive salt encrustation) 0.17 feet per year, bare-soil playa (silt and salt encrustation) 0.21 feet per year, pickleweed (pickleweed plants, low-density vegetation) 0.60 feet per year, Eagle Borax (arrowweed plants and salt grass, moderate-density vegetation) 1.99 feet per year, Mesquite Flat (mesquite trees, high-density vegetation) 2.86 feet per year, and Mesquite Flat mixed grasses (mixed meadow grasses, high-density vegetation) 3.90 feet per year. Precipitation, flooding, and ground-water discharge satisfy ET demand in Death Valley. Ground-water discharge is estimated by deducting local precipitation and flooding from cumulative ET estimates. Discharge rates from ET units were not estimated directly because the range of vegetation units far exceeded the five specific vegetation units that were measured. 
The rate of annual ground-water discharge by ET for each ET unit was determined by fitting the annual ground-water ET for each site with the variability in vegetation density in each ET unit. The ET rate representing the midpoint of each ET unit was used as the representative value. The rate of annual ground-water ET for the playa sites did not require scaling in this manner. Annual ground-water discharge by ET was determined for all five ET units: salt-encrusted playa (0.13 foot), bare-soil playa (0.15 foot), low-density vegetation (1.0 foot), moderate-density vegetation (2.0 feet), and high-density vegetation (3.0 feet), plus an unclassified area of vegetation or bare soil not contributing to ground-water discharge (0.0 foot). The total ground-water discharge from ET for the Death Valley floor is about 35,000 acre-feet and was computed by summing the products of the area of each ET unit multiplied by a corresponding ET rate for each unit.

  2. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters must be estimated from small sample sizes. The small sample size induces errors in the estimated HOS parameters, hindering real-time, precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the sEMG PDF shape behavior observed during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications.
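
    For concreteness, the two ingredients contrasted above, plain moment-based HOS statistics and a Gaussian kernel density estimate, can be sketched as follows (function names are illustrative; Silverman's rule of thumb is one common bandwidth choice, not necessarily the paper's):

```python
import math

def gaussian_kde_pdf(sample, x, bandwidth=None):
    """Evaluate a Gaussian kernel density estimate of the sample at x.
    Uses Silverman's rule of thumb when no bandwidth is given."""
    n = len(sample)
    if bandwidth is None:
        mean = sum(sample) / n
        sd = math.sqrt(sum((s - mean) ** 2 for s in sample) / (n - 1))
        bandwidth = 1.06 * sd * n ** -0.2
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in sample)

def sample_skew_kurt(sample):
    """Plain moment-based skewness and (non-excess) kurtosis, the HOS
    parameters whose small-sample instability motivates the
    functional statistics."""
    n = len(sample)
    mean = sum(sample) / n
    m2 = sum((s - mean) ** 2 for s in sample) / n
    m3 = sum((s - mean) ** 3 for s in sample) / n
    m4 = sum((s - mean) ** 4 for s in sample) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2
```

    On small samples the third and fourth moments are dominated by a handful of extreme points, which is the instability the functional approach is designed to avoid.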

  3. Improving chemical species tomography of turbulent flows using covariance estimation.

    PubMed

    Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J

    2017-05-01

Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, and thus the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of this additional information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
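
    A single-measurement sketch of such a Bayesian update, assuming a joint-normal prior with mean x0 and spatial covariance C and one linear path-integral measurement (hypothetical names; with a scalar measurement the usual matrix inverse collapses to a division):

```python
def bayesian_ray_update(x0, C, a, var_meas, b):
    """Posterior mean for a linear tomography measurement
    b = a . x + noise, under prior x ~ N(x0, C).

    x0       : prior mean (list of length n)
    C        : prior covariance (n x n list of lists)
    a        : ray weights for the path integral (list of length n)
    var_meas : measurement noise variance
    b        : observed path-integrated value
    """
    n = len(x0)
    Ca = [sum(C[i][j] * a[j] for j in range(n)) for i in range(n)]
    # innovation variance: a . C a + measurement noise
    s = sum(a[i] * Ca[i] for i in range(n)) + var_meas
    resid = b - sum(a[i] * x0[i] for i in range(n))
    # Kalman-style gain K = C a / s; posterior mean = x0 + K * resid
    return [x0[i] + Ca[i] * resid / s for i in range(n)]
```

    The second test below shows the point of the paper in miniature: with spatial correlation in the prior, a ray through one pixel also updates its correlated neighbor.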

  4. An estimation method of the direct benefit of a waterlogging control project applicable to the changing environment

    NASA Astrophysics Data System (ADS)

    Zengmei, L.; Guanghua, Q.; Zishen, C.

    2015-05-01

The direct benefit of a waterlogging control project is reflected in the reduction or avoidance of waterlogging losses. Before and after the construction of such a project, the disaster-inducing environment in the waterlogging-prone zone is generally different; in addition, the category, quantity, and spatial distribution of the disaster-bearing bodies also change to some extent. Therefore, under a changing environment, the direct benefit of a waterlogging control project should be the reduction in waterlogging losses relative to conditions without the project. Moreover, the waterlogging losses with and without the project should be the mathematical expectations of the losses incurred when rainstorms of all frequencies meet various water levels in the drainage-accepting zone. Accordingly, an estimation model of the direct benefit of waterlogging control is proposed. First, on the basis of a copula function, the joint distribution of the rainstorms and the water levels is established, so as to obtain their joint probability density function. Second, from this two-dimensional joint probability density, the two-dimensional domain of integration is determined and divided into small subdomains; for each subdomain, the model calculates its probability and the difference between the average waterlogging loss with and without the project, called the regional benefit of the waterlogging control project, under the condition that rainstorms in the waterlogging-prone zone meet the water level in the drainage-accepting zone. Finally, the weighted mean of the benefit over all subdomains, with probability as the weight, gives the benefit of the waterlogging control project. Taking the benefit estimation of a waterlogging control project in Yangshan County, Guangdong Province, as an example, the paper briefly explains the procedure of waterlogging control project benefit estimation. 
The results show that the waterlogging control benefit estimation model constructed is applicable to the changing conditions that occur in both the disaster-inducing environment of the waterlogging-prone zone and disaster-bearing bodies, considering all conditions when rainstorms of all frequencies meet different water levels in the drainage-accepting zone. Thus, the estimation method of waterlogging control benefit can reflect the actual situation more objectively, and offer a scientific basis for rational decision-making for waterlogging control projects.
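
    The integration scheme can be sketched by discretizing the two-dimensional domain into equal-width cells. The function names and loss models here are illustrative placeholders, not the paper's calibrated quantities:

```python
def waterlogging_benefit(f_joint, loss_without, loss_with,
                         r_grid, h_grid):
    """Expected direct benefit as the probability-weighted mean loss
    reduction over all combinations of rainstorm magnitude r and
    receiving-water level h.

    f_joint      : joint pdf f(r, h), e.g. from a fitted copula
    loss_without : loss function without the control project
    loss_with    : loss function with the control project
    r_grid, h_grid : midpoints of equal-width cells
    """
    dr = r_grid[1] - r_grid[0]
    dh = h_grid[1] - h_grid[0]
    benefit = 0.0
    for r in r_grid:
        for h in h_grid:
            p = f_joint(r, h) * dr * dh      # cell probability
            benefit += p * (loss_without(r, h) - loss_with(r, h))
    return benefit
```

    The test below uses a uniform joint density on the unit square and linear losses, for which the expectation can be checked by hand.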

  5. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
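
    The final decision step corresponds to Wald's sequential probability ratio test. A generic sketch, with thresholds from the standard Wald approximations rather than the patent's specific parameters:

```python
import math

def sprt(log_lr_increments, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test: accumulate per-event
    log-likelihood-ratio increments and stop at the first threshold
    crossing.

    alpha : tolerated false-alarm probability
    beta  : tolerated miss probability
    Returns 'target', 'non-target', or 'undecided'.
    """
    upper = math.log((1 - beta) / alpha)   # declare target
    lower = math.log(beta / (1 - alpha))   # declare non-target
    llr = 0.0
    for inc in log_lr_increments:
        llr += inc
        if llr >= upper:
            return "target"
        if llr <= lower:
            return "non-target"
    return "undecided"
```

    In the patented scheme each photon event contributes one increment from its energy and interarrival-time likelihoods, so identification time adapts to how informative the events are.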

  6. Comparison of Drive Counts and Mark-Resight As Methods of Population Size Estimation of Highly Dense Sika Deer (Cervus nippon) Populations.

    PubMed

    Takeshita, Kazutaka; Ikeda, Takashi; Takahashi, Hiroshi; Yoshida, Tsuyoshi; Igota, Hiromasa; Matsuura, Yukiko; Kaji, Koichi

    2016-01-01

Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk for observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at a high deer density remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km²) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were taken from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts for all years in comparison to the MR method. Increased overestimation in drive count estimates after the winter mass mortality event may be due to double counting resulting from increased deer movement and recovery of body condition following the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are cost-prohibitive to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer deer abundance needs to be reconsidered.
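
    For reference, the simplest closed-population mark-resight estimator is Chapman's bias-corrected Lincoln-Petersen form, an illustrative baseline rather than the exact MR model used in the study:

```python
def chapman_estimate(marked, sighted, marked_sighted):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimator.

    marked         : number of marked animals known to be available (M)
    sighted        : total animals counted in the resight survey (C)
    marked_sighted : marked animals among those counted (R)
    Returns the estimated population size N_hat.
    """
    return (marked + 1) * (sighted + 1) / (marked_sighted + 1) - 1
```

    Unlike drive counts, resight surveys can be repeated cheaply, so the sampling variance of such estimates (and hence confidence intervals) can be quantified.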

  7. [Gypsy moth Lymantria dispar L. in the South Urals: Patterns in population dynamics and modelling].

    PubMed

    Soukhovolsky, V G; Ponomarev, V I; Sokolov, G I; Tarasova, O V; Krasnoperova, P A

    2015-01-01

The population dynamics of the gypsy moth in different habitats of the South Urals is analyzed. The pattern of cyclic changes in population density is examined, the temporal synchrony among time series of gypsy moth population dynamics from separate habitats of the South Urals is assessed, and the relationships between population density and weather conditions are studied. Based on the results obtained, a statistical model of gypsy moth population dynamics in the South Urals is designed, and estimates are given of the effects of regulatory and modifying factors on the population dynamics.

  8. Different Indices of Fetal Growth Predict Bone Size and Volumetric Density at 4 Years of Age

    PubMed Central

    Harvey, Nicholas C; Mahon, Pamela A; Robinson, Sian M; Nisbet, Corrine E; Javaid, M Kassim; Crozier, Sarah R; Inskip, Hazel M; Godfrey, Keith M; Arden, Nigel K; Dennison, Elaine M; Cooper, Cyrus

    2011-01-01

    We have demonstrated previously that higher birth weight is associated with greater peak and later-life bone mineral content and that maternal body build, diet, and lifestyle influence prenatal bone mineral accrual. To examine prenatal influences on bone health further, we related ultrasound measures of fetal growth to childhood bone size and density. We derived Z-scores for fetal femur length and abdominal circumference and conditional growth velocity from 19 to 34 weeks’ gestation from ultrasound measurements in participants in the Southampton Women’s Survey. A total of 380 of the offspring underwent dual-energy X-ray absorptiometry (DXA) at age 4 years [whole body minus head bone area (BA), bone mineral content (BMC), areal bone mineral density (aBMD), and estimated volumetric BMD (vBMD)]. Volumetric bone mineral density was estimated using BMC adjusted for BA, height, and weight. A higher velocity of 19- to 34-week fetal femur growth was strongly associated with greater childhood skeletal size (BA: r = 0.30, p < .0001) but not with volumetric density (vBMD: r = 0.03, p = .51). Conversely, a higher velocity of 19- to 34-week fetal abdominal growth was associated with greater childhood volumetric density (vBMD: r = 0.15, p = .004) but not with skeletal size (BA: r = 0.06, p = .21). Both fetal measurements were positively associated with BMC and aBMD, indices influenced by both size and density. The velocity of fetal femur length growth from 19 to 34 weeks’ gestation predicted childhood skeletal size at age 4 years, whereas the velocity of abdominal growth (a measure of liver volume and adiposity) predicted volumetric density. These results suggest a discordance between influences on skeletal size and volumetric density. PMID:20437610

  9. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
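The L1-median used in this record is the geometric median, commonly computed with Weiszfeld's iteration. The following is an illustrative sketch of that iteration, not the authors' implementation; the starting point, tolerance, and zero-distance guard are assumptions:

```python
import numpy as np

def l1_median(X, tol=1e-8, max_iter=200):
    """Geometric (L1) median of the rows of X via Weiszfeld's iteration."""
    y = X.mean(axis=0)  # start from the (non-robust) sample mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - y, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)  # guard against a zero distance
        w = 1.0 / d
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```

Unlike the sample mean, the L1-median barely moves when a single gross outlier is added, which is what makes it attractive as a robust location estimator inside a PDF estimator.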

  10. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423
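The kernel estimate of the conditional probability of non-missingness that underpins this estimator can be illustrated with a Nadaraya-Watson regression of the non-missingness indicator on the observed time. This is a minimal sketch assuming a Gaussian kernel and a user-supplied bandwidth; the paper's specific kernel and bandwidth choices are not reproduced here:

```python
import numpy as np

def nw_nonmissing_prob(t_grid, T, xi, h):
    """Nadaraya-Watson estimate of P(censoring indicator observed | T = t).

    T  : observed (possibly censored) times
    xi : 1 if the censoring indicator is non-missing, else 0
    h  : kernel bandwidth
    """
    u = (t_grid[:, None] - T[None, :]) / h
    K = np.exp(-0.5 * u**2)          # Gaussian kernel weights
    return (K * xi).sum(axis=1) / K.sum(axis=1)
```

The resulting probabilities are then used as inverse weights on the observations with non-missing indicators.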

  11. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  12. Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.

    PubMed

    Borah, Deva K; Voelz, David G

    2007-08-10

    The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.

  13. Demographics and density estimates of two three-toed box turtle (Terrapene carolina triunguis) populations within forest and restored prairie sites in central Missouri.

    PubMed

    O'Connor, Kelly M; Rittenhouse, Chadwick D; Millspaugh, Joshua J; Rittenhouse, Tracy A G

    2015-01-01

    Box turtles (Terrapene carolina) are widely distributed but vulnerable to population decline across their range. Using distance sampling, morphometric data, and an index of carapace damage, we surveyed three-toed box turtles (Terrapene carolina triunguis) at 2 sites in central Missouri, and compared differences in detection probabilities when transects were walked by one or two observers. Our estimated turtle density within forested cover was lower at the Thomas S. Baskett Wildlife Research and Education Center, a site dominated by eastern hardwood forest (d = 1.85 turtles/ha, 95% CI [1.13, 3.03]), than at the Prairie Fork Conservation Area, a site containing a mix of open field and hardwood forest (d = 4.14 turtles/ha, 95% CI [1.99, 8.62]). Turtles at Baskett were significantly older and larger than turtles at Prairie Fork. Damage to the carapace did not differ significantly between the 2 populations despite the more prevalent habitat management, including mowing and prescribed fire, at Prairie Fork. We achieved improved estimates of density using two rather than one observer at Prairie Fork, but found negligible differences in density estimates between the one- and two-observer approaches at Baskett. Error associated with probability of detection decreased at both sites with the addition of a second observer. We provide demographic data suggesting that three-toed box turtles use a range of habitat conditions. This case study suggests that habitat management practices and their impacts on habitat composition may be a cause of the differences observed in our focal populations of turtles.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multiple dimensions is introduced. This multidimensional extension is combined with a recently developed computational improvement that makes it computationally efficient: a 2D KDE on 10^5 samples takes only 1 s on a modern workstation. This fast and objective KDE method, called fastKDE, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
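The conditional-PDF calculation described here reduces to dividing a joint density estimate by a marginal one, p(y|x) = p(x,y)/p(x). A minimal sketch with a plain product-Gaussian-kernel KDE (not the fastKDE algorithm itself; the bandwidths hx and hy are assumed fixed rather than objectively chosen, and the x-kernel normalisation cancels in the ratio):

```python
import numpy as np

def kde_conditional(y_grid, x0, x, y, hx=0.3, hy=0.3):
    """Estimate p(y | x = x0) from samples (x, y) with a product-Gaussian KDE.

    The ratio p(x0, y) / p(x0) reduces to a kernel-weighted average in x,
    so the x-kernel normalising constant cancels.
    """
    wx = np.exp(-0.5 * ((x - x0) / hx) ** 2)                    # weights in x
    ky = np.exp(-0.5 * ((y_grid[:, None] - y[None, :]) / hy) ** 2) \
         / (hy * np.sqrt(2.0 * np.pi))                          # densities in y
    return (ky * wx).sum(axis=1) / wx.sum()
```

The returned values form a proper density over y_grid (they integrate to approximately one), which is what makes detecting conditional structure, such as transitional behavior, straightforward.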

  15. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data

    PubMed Central

    Broekhuis, Femke; Gopalaswamy, Arjun M.

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed ‘hotspots’ of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species. PMID:27135614

  16. Counting Cats: Spatially Explicit Population Estimates of Cheetah (Acinonyx jubatus) Using Unstructured Sampling Data.

    PubMed

    Broekhuis, Femke; Gopalaswamy, Arjun M

    2016-01-01

    Many ecological theories and species conservation programmes rely on accurate estimates of population density. Accurate density estimation, especially for species facing rapid declines, requires the application of rigorous field and analytical methods. However, obtaining accurate density estimates of carnivores can be challenging as carnivores naturally exist at relatively low densities and are often elusive and wide-ranging. In this study, we employ an unstructured spatial sampling field design along with a Bayesian sex-specific spatially explicit capture-recapture (SECR) analysis, to provide the first rigorous population density estimates of cheetahs (Acinonyx jubatus) in the Maasai Mara, Kenya. We estimate adult cheetah density to be between 1.28 ± 0.315 and 1.34 ± 0.337 individuals/100km2 across four candidate models specified in our analysis. Our spatially explicit approach revealed 'hotspots' of cheetah density, highlighting that cheetah are distributed heterogeneously across the landscape. The SECR models incorporated a movement range parameter which indicated that male cheetah moved four times as much as females, possibly because female movement was restricted by their reproductive status and/or the spatial distribution of prey. We show that SECR can be used for spatially unstructured data to successfully characterise the spatial distribution of a low density species and also estimate population density when sample size is small. Our sampling and modelling framework will help determine spatial and temporal variation in cheetah densities, providing a foundation for their conservation and management. Based on our results we encourage other researchers to adopt a similar approach in estimating densities of individually recognisable species.

  17. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Large Scale Density Estimation of Blue and Fin Whales ...sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over...develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse

  18. Estimating Small-Body Gravity Field from Shape Model and Navigation Data

    NASA Technical Reports Server (NTRS)

    Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam

    2008-01-01

    This paper presents a method to model the external gravity field and to estimate the internal density variation of a small body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite element definitions such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction, and the levels of accuracy are presented. We then discuss the inverse problem, where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.

  19. Impact of density information on Rayleigh surface wave inversion results

    NASA Astrophysics Data System (ADS)

    Ivanov, Julian; Tsoflias, Georgios; Miller, Richard D.; Peterie, Shelby; Morton, Sarah; Xia, Jianghai

    2016-12-01

    We assessed the impact of density on the estimation of inverted shear-wave velocity (Vs) using the multi-channel analysis of surface waves (MASW) method. We considered the forward modeling theory, evaluated model sensitivity, and tested the effect of density information on the inversion of seismic data acquired in the Arctic. Theoretical review, numerical modeling, and inversion of modeled and real data indicated that the density ratios between layers, not the actual density values, impact the determination of surface-wave phase velocities. Application to real data compared surface-wave inversion results using: a) constant density, the most common approach in practice; b) indirect density estimates derived from refraction compressional-wave velocity observations; and c) direct density measurements in a borehole. The use of indirect density estimates reduced the final shear-wave velocity (Vs) results typically by 6-7%, and the use of densities from a borehole reduced the final Vs estimates by 10-11%, compared to those from assumed constant density. In addition to the improved absolute Vs accuracy, the resulting overall Vs changes were unevenly distributed laterally when viewed on a 2-D section, leading to an overall Vs model structure that was more representative of the subsurface environment. It was observed that the use of constant density instead of density increasing with depth not only can lead to Vs overestimation but can also create inaccurate model structures, such as a spurious low-velocity layer. Thus, optimal Vs estimation is best achieved using field estimates of subsurface density ratios.

  20. Integrating K-means Clustering with Kernel Density Estimation for the Development of a Conditional Weather Generation Downscaling Model

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Ho, C.; Chang, L.

    2011-12-01

    In recent decades, climate change caused by global warming has increased the occurrence frequency of extreme hydrological events. Water supply shortages caused by extreme events create great challenges for water resource management. To evaluate future climate variations, general circulation models (GCMs) are the most widely used tools; they project possible weather conditions under the CO2 emission scenarios announced by the IPCC. Because GCMs model the entire earth, their grid sizes are much larger than the basin scale. To bridge this gap, a statistical downscaling technique can transform regional-scale weather factors into basin-scale precipitation. Statistical downscaling techniques fall into three categories: transfer functions, weather generators, and weather typing. The first two categories describe the relationships between weather factors and precipitation using, respectively, deterministic algorithms, such as linear or nonlinear regression and ANNs, and stochastic approaches, such as Markov chain theory and statistical distributions. Weather typing clusters the weather factors, which are high-dimensional and continuous variables, into a limited number of discrete weather types. In this study, the proposed downscaling model integrates weather typing, using the K-means clustering algorithm, with a weather generator based on kernel density estimation. The study area is the Shihmen basin in northern Taiwan. The research process contains two steps: a calibration step and a synthesis step. The calibration step has three sub-steps. First, weather factors, such as pressure, humidity, and wind speed, obtained from NCEP, and precipitation observed at rainfall stations were collected for downscaling. Second, K-means clustering grouped the weather factors into four weather types. Third, the Markov chain transition matrices and the conditional probability density function (PDF) of precipitation, approximated by kernel density estimation, were calculated for each weather type. In the synthesis step, 100 patterns of synthetic data are generated. First, the weather type of the n-th day is determined from the K-means clustering results; this also fixes the transition matrix and PDF used in the following sub-steps. Second, the precipitation condition, dry or wet, is synthesized from the transition matrix. If the synthesized condition is dry, the precipitation is zero; otherwise, the quantity is determined in the third sub-step. Third, the synthesized precipitation quantity is drawn as a random variable from the PDF defined above. Synthesis performance is evaluated by comparing the monthly mean and monthly standard deviation curves of the historical precipitation data against those of the 100 patterns of synthetic data.
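The calibration and synthesis steps of such a conditional weather generator can be sketched end to end. This is a simplified illustration on hypothetical data, not the authors' implementation: a minimal K-means, per-type wet/dry transition probabilities in place of full transition matrices, and a smoothed bootstrap (equivalent to sampling a Gaussian-kernel KDE) standing in for the wet-day amount draw:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Minimal K-means: group daily weather factors into k weather types."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return labels

# Hypothetical demonstration data: two weather factors and a daily rainfall series
n = 1000
factors = rng.normal(size=(n, 2))
rain = np.where(rng.random(n) < 0.4, rng.gamma(2.0, 3.0, n), 0.0)

types = kmeans(factors, 4)

# Calibration: per-type wet/dry transition probabilities and wet-day amounts
models = {}
for j in range(4):
    idx = np.where(types == j)[0]
    idx = idx[idx < n - 1]                       # need a next-day observation
    wet_today, wet_next = rain[idx] > 0, rain[idx + 1] > 0
    p_wd = wet_next[~wet_today].mean() if np.any(~wet_today) else 0.0
    p_ww = wet_next[wet_today].mean() if np.any(wet_today) else 0.0
    models[j] = (p_wd, p_ww, rain[(types == j) & (rain > 0)])

def synthesize(day_types, h=0.5):
    """Markov step for wet/dry, then a smoothed-bootstrap (Gaussian-kernel)
    draw of the wet-day amount for that day's weather type."""
    out, wet = [], False
    for t in day_types:
        p_wd, p_ww, amounts = models[t]
        wet = rng.random() < (p_ww if wet else p_wd)
        amt = rng.choice(amounts) + rng.normal(0.0, h) if wet and len(amounts) else 0.0
        out.append(max(amt, 0.0))
    return np.array(out)
```

Running `synthesize(types)` repeatedly would produce the ensemble of synthetic series whose monthly statistics are compared against the historical record.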

  1. Use of spatial capture–recapture to estimate density of Andean bears in northern Ecuador

    USGS Publications Warehouse

    Molina, Santiago; Fuller, Angela K.; Morin, Dana J.; Royle, J. Andrew

    2017-01-01

    The Andean bear (Tremarctos ornatus) is the only extant species of bear in South America and is considered threatened across its range and endangered in Ecuador. Habitat loss and fragmentation is considered a critical threat to the species, and there is a lack of knowledge regarding its distribution and abundance. The species is thought to occur at low densities, making field studies designed to estimate abundance or density challenging. We conducted a pilot camera-trap study to estimate Andean bear density in a recently identified population of Andean bears northwest of Quito, Ecuador, during 2012. We compared 12 candidate spatial capture–recapture models including covariates on encounter probability and density and estimated a density of 7.45 bears/100 km2 within the region. In addition, we estimated that approximately 40 bears used a recently named Andean bear corridor established by the Secretary of Environment, and we produced a density map for this area. Use of a rub-post with vanilla scent attractant allowed us to capture numerous photographs for each event, improving our ability to identify individual bears by unique facial markings. This study provides the first empirically derived density estimate for Andean bears in Ecuador and should provide direction for future landscape-scale studies interested in conservation initiatives requiring spatially explicit estimates of density.

  2. A Design Study of Onboard Navigation and Guidance During Aerocapture at Mars. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Fuhry, Douglas Paul

    1988-01-01

    The navigation and guidance of a high lift-to-drag ratio sample return vehicle during aerocapture at Mars are investigated. Emphasis is placed on integrated systems design, with guidance algorithm synthesis and analysis based on vehicle state and atmospheric density uncertainty estimates provided by the navigation system. The latter utilizes a Kalman filter for state vector estimation, with useful update information obtained through radar altimeter measurements and density altitude measurements based on IMU-measured drag acceleration. A three-phase guidance algorithm, featuring constant bank numeric predictor/corrector atmospheric capture and exit phases and an extended constant altitude cruise phase, is developed to provide controlled capture and depletion of orbital energy, orbital plane control, and exit apoapsis control. Integrated navigation and guidance systems performance are analyzed using a four degree-of-freedom computer simulation. The simulation environment includes an atmospheric density model with spatially correlated perturbations to provide realistic variations over the vehicle trajectory. Navigation filter initial conditions for the analysis are based on planetary approach optical navigation results. Results from a selection of test cases are presented to give insight into systems performance.

  3. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    PubMed

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system, and an ideal joint pdf is presented emphasising how the uncertainty can be systematically incorporated in the absence of reliable systems models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs) where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations to the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Separation of components from a scale mixture of Gaussian white noises

    NASA Astrophysics Data System (ADS)

    Vamoş, Călin; Crăciun, Maria

    2010-05-01

    The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method, which imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events, and the estimated white noise becomes almost Gaussian only as a result of the uncorrelation condition.
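The uncorrelation condition can be sketched as a search over volatility smoothing scales: estimate V as a moving average of |X|, set Z = X/V, and pick the scale that drives the lag-1 autocorrelation of |Z| closest to zero. This is an illustrative simplification of the method; the moving-average volatility estimator and the window grid are assumptions of this sketch:

```python
import numpy as np

def lag1_autocorr(a):
    """Sample lag-1 autocorrelation."""
    a = a - a.mean()
    return (a[:-1] * a[1:]).mean() / a.var()

def separate_volatility(X, windows=range(10, 201, 10)):
    """Estimate V as a moving average of |X| and set Z = X / V, choosing the
    smoothing window that makes |Z| closest to serially uncorrelated."""
    best = None
    for w in windows:
        V = np.convolve(np.abs(X), np.ones(w) / w, mode="same")
        V = np.maximum(V, 1e-12)          # guard against division by zero
        Z = X / V
        score = abs(lag1_autocorr(np.abs(Z)))
        if best is None or score < best[0]:
            best = (score, w, V, Z)
    return best[1], best[2], best[3]      # window, volatility, white noise
```

On data generated with a slowly oscillating true volatility, the recovered V tracks the modulation while |Z| is left nearly uncorrelated.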

  5. Near infrared study of water-benzene mixtures at high temperatures and pressures.

    PubMed

    Jin, Yusuke; Ikawa, Shun-Ichi

    2004-08-08

    Near-infrared absorption of water-benzene mixtures has been measured at temperatures of 473-673 K and pressures of 100-400 bar. Concentrations of water and benzene in the water-rich phase of the mixtures were obtained from the integrated absorption intensities of the OH stretching overtone transition of water and the CH stretching overtone transition of benzene, respectively. Using these concentrations, the densities of the water-rich phase were estimated and compared with the average densities before mixing, which were calculated from literature densities of neat water and neat benzene. It is found that anomalously large volume expansion on mixing occurs in the region enclosed by an extended line of the three-phase equilibrium curve, the one-phase critical curve of the mixtures, and the gas-liquid equilibrium curve of water. Furthermore, the magnitude of the relative volume change increases with decreasing molar fraction of benzene over the present experimental range. It is suggested that dissolving a small amount of benzene in water induces a change in the fluid density from a liquidlike condition to a gaslike condition in the vicinity of the critical region.

  6. Calibrated tree priors for relaxed phylogenetics and divergence time estimation.

    PubMed

    Heled, Joseph; Drummond, Alexei J

    2012-01-01

    The use of fossil evidence to calibrate divergence time estimation has a long history. More recently, Bayesian Markov chain Monte Carlo has become the dominant method of divergence time estimation, and fossil evidence has been reinterpreted as the specification of prior distributions on the divergence times of calibration nodes. These so-called "soft calibrations" have become widely used, but the statistical properties of calibrated tree priors in a Bayesian setting have not been carefully investigated. Here, we clarify that calibration densities, such as those defined in BEAST 1.5, do not represent the marginal prior distribution of the calibration node. We illustrate this with a number of analytical results on small trees. We also describe an alternative construction for a calibrated Yule prior on trees that allows direct specification of the marginal prior distribution of the calibrated divergence time, with or without the restriction of monophyly. This method requires the computation of the Yule prior conditional on the height of the divergence being calibrated. Unfortunately, a practical solution for multiple calibrations remains elusive. Our results suggest that direct estimation of the prior induced by specifying multiple calibration densities should be a prerequisite of any divergence time dating analysis.

  7. Deep sea animal density and size estimated using a Dual-frequency IDentification SONar (DIDSON) offshore the island of Hawaii

    NASA Astrophysics Data System (ADS)

    Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.

    2018-01-01

    Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important to understand mesopelagic ecosystem dynamics and to predict top predators' distribution, DSL composition and density are often estimated from trawls which may be biased in terms of extrusion, avoidance, and gear-associated biases. Instead, location and biomass of DSLs can be estimated from active acoustic techniques, though estimates are often in aggregate without regard to size or taxon specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3 and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls and average sizes of animals were much larger as well. A mixed model was used to characterize numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatters obtained with standard echosounding techniques, density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.

  8. Improving Radar Snowfall Measurements Using a Video Disdrometer

    NASA Astrophysics Data System (ADS)

    Newman, A. J.; Kucera, P. A.

    2005-05-01

    A video disdrometer has recently been developed at NASA/Wallops Flight Facility in an effort to improve surface precipitation measurements. The recent upgrade of the UND C-band weather radar to dual-polarimetric capabilities, along with the development of the UND Glacial Ridge intensive atmospheric observation site, has presented a valuable opportunity to improve radar estimates of snowfall. The video disdrometer, referred to as the Rain Imaging System (RIS), has been deployed at the Glacial Ridge site for most of the 2004-2005 winter season to measure size distributions, precipitation rate, and density estimates of snowfall. The RIS uses a CCD grayscale video camera with a zoom lens to observe hydrometeors in a sample volume located 2 meters from the end of the lens and approximately 1.5 meters away from an independent light source. The design of the RIS may eliminate sampling errors from wind flow around the instrument. The RIS has proven its ability to operate continuously in the adverse conditions often observed in the Northern Plains. The RIS is able to provide crystal habit information, variability of particle size distributions over the lifecycle of a storm, snowfall rates, and estimates of snow density. This information, in conjunction with hand measurements of density and crystal habit, will be used to build a database for comparisons with polarimetric data from the UND radar. This database will serve as the basis for improving snowfall estimates using polarimetric radar observations. Preliminary results from several case studies will be presented.

  9. A Pairwise Naïve Bayes Approach to Bayesian Classification.

    PubMed

    Asafu-Adjei, Josephine K; Betensky, Rebecca A

    2015-10-01

    Despite the relatively high accuracy of the naïve Bayes (NB) classifier, there may be several instances where it is not optimal, i.e. does not have the same classification performance as the Bayes classifier utilizing the joint distribution of the examined attributes. However, the Bayes classifier can be computationally intractable due to its required knowledge of the joint distribution. Therefore, we introduce a "pairwise naïve" Bayes (PNB) classifier that incorporates all pairwise relationships among the examined attributes, but does not require specification of the joint distribution. In this paper, we first describe the necessary and sufficient conditions under which the PNB classifier is optimal. We then discuss sufficient conditions for which the PNB classifier, and not NB, is optimal for normal attributes. Through simulation and actual studies, we evaluate the performance of our proposed classifier relative to the Bayes and NB classifiers, along with the HNB, AODE, LBR and TAN classifiers, using normal density and empirical estimation methods. Our applications show that the PNB classifier using normal density estimation yields the highest accuracy for data sets containing continuous attributes. We conclude that it offers a useful compromise between the Bayes and NB classifiers.
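    For readers unfamiliar with the baseline being compared against, the following is a minimal Gaussian naive Bayes classifier in Python — a sketch of standard NB with normal density estimation, not the authors' pairwise (PNB) variant, and the data are hypothetical:

    ```python
    import math

    def fit_gaussian_nb(X, y):
        """Fit per-class, per-feature Gaussian parameters (the NB independence assumption)."""
        params = {}
        for c in sorted(set(y)):
            rows = [x for x, label in zip(X, y) if label == c]
            n = len(rows)
            means = [sum(col) / n for col in zip(*rows)]
            vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9
                     for col, m in zip(zip(*rows), means)]
            params[c] = (n / len(y), means, vars_)
        return params

    def predict(params, x):
        """Pick the class maximizing log prior + sum of per-feature log Gaussian likelihoods."""
        best, best_score = None, -math.inf
        for c, (prior, means, vars_) in params.items():
            score = math.log(prior)
            for v, m, s2 in zip(x, means, vars_):
                score += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
            if score > best_score:
                best, best_score = c, score
        return best
    ```

    The PNB classifier described above extends this idea by modeling each pair of attributes jointly rather than assuming full conditional independence.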

  10. Heating power at the substrate, electron temperature, and electron density in 2.45 GHz low-pressure microwave plasma

    NASA Astrophysics Data System (ADS)

    Kais, A.; Lo, J.; Thérèse, L.; Guillot, Ph.

    2018-01-01

    To control the temperature during a plasma treatment, an understanding of the link between the plasma parameters and the fundamental processes responsible for the heating is required. In this work, the power supplied by the plasma onto the surface of a glass substrate is measured using the calorimetric method. It has been shown that the powers deposited by ions and electrons, and by their recombination at the surface, are the main contributions to the heating power. Each contribution is estimated according to the theory commonly used in the literature. Using the corona balance, the Modified Boltzmann Plot (MBP) is employed to determine the electron temperature. A correlation between the power deposited by the plasma and the results of the MBP has been established. This correlation has been used to estimate the electron number density, independently of Langmuir probe measurements, under the conditions considered.

  11. Contributions of Cu-rich clusters, dislocation loops and nanovoids to the irradiation-induced hardening of Cu-bearing low-Ni reactor pressure vessel steels

    NASA Astrophysics Data System (ADS)

    Bergner, F.; Gillemot, F.; Hernández-Mayoral, M.; Serrano, M.; Török, G.; Ulbricht, A.; Altstadt, E.

    2015-06-01

    Dislocation loops, nanovoids and Cu-rich clusters (CRPs) are known to represent obstacles for dislocation glide in neutron-irradiated reactor pressure vessel (RPV) steels, but a consistent experimental determination of the respective obstacle strengths is still missing. A set of Cu-bearing low-Ni RPV steels and model alloys was characterized by means of SANS and TEM in order to specify mean size and number density of loops, nanovoids and CRPs. The obstacle strengths of these families were estimated by solving an over-determined set of linear equations. We have found that nanovoids are stronger than loops and loops are stronger than CRPs. Nevertheless, CRPs contribute most to irradiation hardening because of their high number density. Nanovoids were only observed for neutron fluences beyond typical end-of-life conditions of RPVs. The estimates of the obstacle strength are critically compared with reported literature data.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobson, Paul T; Hagerman, George; Scott, George

    This project estimates the naturally available and technically recoverable U.S. wave energy resources, using a 51-month Wavewatch III hindcast database developed especially for this study by the National Oceanic and Atmospheric Administration's (NOAA's) National Centers for Environmental Prediction. For total resource estimation, wave power density in terms of kilowatts per meter is aggregated across a unit diameter circle. This approach is fully consistent with accepted global practice and includes the resource made available by the lateral transfer of wave energy along wave crests, which enables wave diffraction to substantially reestablish wave power densities within a few kilometers of a linear array, even for fixed terminator devices. The total available wave energy resource along the U.S. continental shelf edge, based on accumulating unit circle wave power densities, is estimated to be 2,640 TWh/yr, broken down as follows: 590 TWh/yr for the West Coast, 240 TWh/yr for the East Coast, 80 TWh/yr for the Gulf of Mexico, 1,570 TWh/yr for Alaska, 130 TWh/yr for Hawaii, and 30 TWh/yr for Puerto Rico. The total recoverable wave energy resource, as constrained by an array capacity packing density of 15 megawatts per kilometer of coastline, with a 100-fold operating range between threshold and maximum operating conditions in terms of input wave power density available to such arrays, yields a total recoverable resource along the U.S. continental shelf edge of 1,170 TWh/yr, broken down as follows: 250 TWh/yr for the West Coast, 160 TWh/yr for the East Coast, 60 TWh/yr for the Gulf of Mexico, 620 TWh/yr for Alaska, 80 TWh/yr for Hawaii, and 20 TWh/yr for Puerto Rico.

  13. A geostatistical state-space model of animal densities for stream networks.

    PubMed

    Hocking, Daniel J; Thorson, James T; O'Neil, Kyle; Letcher, Benjamin H

    2018-06-21

    Population dynamics are often correlated in space and time due to correlations in environmental drivers as well as synchrony induced by individual dispersal. Many statistical analyses of populations ignore potential autocorrelations and assume that survey methods (distance and time between samples) eliminate these correlations, allowing samples to be treated independently. If these assumptions are incorrect, results and therefore inference may be biased and uncertainty under-estimated. We developed a novel statistical method to account for spatio-temporal correlations within dendritic stream networks, while accounting for imperfect detection in the surveys. Through simulations, we found this model decreased predictive error relative to standard statistical methods when data were spatially correlated based on stream distance and performed similarly when data were not correlated. We found that increasing the number of years surveyed substantially improved the model accuracy when estimating spatial and temporal correlation coefficients, especially from 10 to 15 years. Increasing the number of survey sites within the network improved the performance of the non-spatial model but only marginally improved the density estimates in the spatio-temporal model. We applied this model to Brook Trout data from the West Susquehanna Watershed in Pennsylvania collected over 34 years from 1981 - 2014. We found the model including temporal and spatio-temporal autocorrelation best described young-of-the-year (YOY) and adult density patterns. YOY densities were positively related to forest cover and negatively related to spring temperatures with low temporal autocorrelation and moderately-high spatio-temporal correlation. Adult densities were less strongly affected by climatic conditions and less temporally variable than YOY but with similar spatio-temporal correlation and higher temporal autocorrelation. This article is protected by copyright. All rights reserved. 

  14. Comparison of estimation accuracy of body density between different hydrostatics weighing methods without head submersion.

    PubMed

    Demura, Shinichi; Sato, Susumu; Nakada, Masakatsu; Minami, Masaki; Kitabayashi, Tamotsu

    2003-07-01

    This study compared the accuracy of body density (Db) estimation methods using hydrostatic weighing without complete head submersion (HW(withoutHS)) — those of Donnelly et al. (1988) and Donnelly and Sintek (1984) — against the reference approach of Goldman and Buskirk (1961). Donnelly et al.'s method estimates Db from a regression equation using HW(withoutHS), whereas Donnelly and Sintek's method estimates it from HW(withoutHS) and head anthropometric variables. Fifteen Japanese males (173.8+/-4.5 cm, 63.6+/-5.4 kg, 21.2+/-2.8 years) and fifteen females (161.4+/-5.4 cm, 53.8+/-4.8 kg, 21.0+/-1.4 years) participated in this study. All subjects were measured for head length and width, and for HW under two conditions: with and without head submersion. To examine the consistency of the estimated Db values, correlation coefficients between the estimates and the reference (Goldman and Buskirk, 1961) were calculated. The standard errors of estimation (SEE) were calculated by regression analysis using the reference value as the dependent variable and the estimated values as independent variables. In addition, the systematic errors of the two estimation methods were investigated with the Bland-Altman technique (Bland and Altman, 1986). Donnelly and Sintek's equation showed a high correlation with the reference (r=0.960, p<0.01), but larger differences from the reference than Donnelly et al.'s equation. Further studies are needed to develop new prediction equations for Japanese subjects that account for sex and individual differences in head anthropometry.
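    The compared regression equations are not reproduced in the abstract, but the reference method rests on the standard hydrostatic-weighing identity: body volume equals the volume of displaced water minus lung residual volume and gastrointestinal gas. A minimal Python sketch, with an assumed 0.1 L gastrointestinal gas allowance and illustrative input values:

    ```python
    def body_density(mass_air_kg, mass_water_kg, water_density_kg_l,
                     residual_volume_l, gi_gas_l=0.1):
        """Standard hydrostatic-weighing equation: Db = mass / body volume,
        where body volume is displaced-water volume corrected for residual
        lung volume and (assumed) gastrointestinal gas."""
        displaced_volume_l = (mass_air_kg - mass_water_kg) / water_density_kg_l
        body_volume_l = displaced_volume_l - residual_volume_l - gi_gas_l
        return mass_air_kg / body_volume_l  # kg/L
    ```

    For example, a 63.6 kg subject weighing 3.0 kg under water, with water density 0.9957 kg/L and a 1.3 L residual volume, yields a Db of about 1.07 kg/L.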

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, Sean Campbell; Ao, Tommy; Davis, Jean-Paul

    The CHEDS researchers are engaged in a collaborative research project to study the properties of iron and iron alloys under Earth’s core conditions. The Earth’s core, inner and outer, is composed primarily of iron, thus studying iron and iron alloys at high pressure and temperature conditions will give the best estimate of its properties. Also, comparing studies of iron alloys with known properties of the core can constrain the potential light element compositions found within the core, such as fitting sound speeds and densities of iron alloys to established inner-Earth models. One of the lesser established properties of the core is the thermal conductivity, where current estimates vary by a factor of three. Therefore, one of the primary goals of this collaboration is to make relevant measurements to elucidate this conductivity.

  16. The inequality of water scarcity events: who is actually being affected?

    NASA Astrophysics Data System (ADS)

    Veldkamp, Ted I. E.; Wada, Yoshihide; Kummu, Matti; Aerts, Jeroen C. J. H.; Ward, Philip J.

    2015-04-01

    Over the past decades, changing hydro-climatic and socioeconomic conditions have increased regional and global water scarcity problems. In the near future, projected changes in human water use and population growth, in combination with climate change, are expected to aggravate water scarcity conditions and their associated impacts on society. Whilst a wide range of studies have modelled past and future regional and global patterns of change in the population or land area impacted by water scarcity conditions, less attention has been paid to who is actually affected and how vulnerable this share of the population is to water scarcity conditions. The actual impact of water scarcity events, however, depends not only on the numbers affected, but rather on how sensitive this population is to water scarcity conditions, how quickly and efficiently governments can deal with the problems induced by water scarcity, and how many (financial and infrastructural) resources are available to cope with water scarce conditions. Only a few studies have investigated the above-mentioned interactions between societal composition and water scarcity conditions (e.g. by means of the social water scarcity index and the water poverty index) and, to our knowledge, a comprehensive global analysis including different water scarcity indicators and multiple climate and socioeconomic scenarios is missing. To address this issue, we assess in this contribution the adaptive capacity of a society to water scarcity conditions, evaluate how this may be driven by different societal factors, and discuss how enhanced knowledge on this topic could be of interest for water managers in their design of adaptation strategies coping with water scarcity events.
For that purpose, we couple spatial information on water scarcity conditions with different components from, among others, the Human Development Index and the Worldwide Governance Indicators, such as: the share of the population with an income below the poverty line; mean year of schooling; the ratio between urban and rural population; import and export rates; political stability; corruption; and government effectiveness. Moreover, we also take into account the accessibility of fresh water bodies and markets. Underlying water scarcity conditions were estimated as follows: (1) yearly water availability was calculated at 0.5° x 0.5° over the period 1971-2099 using daily discharge and run-off fields from the global hydrological model PCR-GLOBWB, forced with different climate change scenarios; (2) statistical methods were applied to fit probability density functions to time-series of yearly water availability and to estimate water availability for a number of return periods covering the current, 2030, and 2050 conditions; (3) water availability results were assembled with scenario estimates of water consumption and population density which resulted in a series of water scarcity estimates.
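    Step (2) above — fitting a probability density function to yearly availability and reading off values for given return periods — can be sketched in Python with a method-of-moments Gumbel fit (an illustrative distribution choice; the abstract does not specify the family used):

    ```python
    import math

    def fit_gumbel(sample):
        """Method-of-moments fit of a Gumbel distribution to yearly availability."""
        n = len(sample)
        mean = sum(sample) / n
        var = sum((x - mean) ** 2 for x in sample) / (n - 1)
        beta = math.sqrt(6.0 * var) / math.pi        # scale from the variance
        mu = mean - 0.5772 * beta                    # Euler-Mascheroni constant
        return mu, beta

    def quantile(mu, beta, p):
        """Inverse Gumbel CDF: availability not exceeded with probability p."""
        return mu - beta * math.log(-math.log(p))
    ```

    For example, the dry-year availability falling below p = 0.1 of the fitted distribution corresponds to a 10-year return period.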

  17. Jumping the gap: the formation conditions and mass function of `pebble-pile' planetesimals

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-03-01

    In a turbulent proto-planetary disc, dust grains undergo large density fluctuations and, under the right circumstances, grain overdensities can collapse under self-gravity (forming a `pebble-pile' planetesimal). Using a simple model for fluctuations predicted in simulations, we estimate the rate of formation and mass function of self-gravitating planetesimal-mass bodies formed by this mechanism. This depends sensitively on the grain size, disc surface density, and turbulent Mach numbers. However, when it occurs, the resulting planetesimal mass function is broad and quasi-universal, with a slope dN/dM ∝ M^-(1-2), spanning a size/mass range of ~10-10^4 km (~10^-9-5 M⊕). Collapse to planetesimal through super-Earth masses is possible. The key condition is that grain density fluctuations reach large amplitudes on large scales, where gravitational instability proceeds most easily (collapse of small grains is suppressed by turbulence). This leads to a new criterion for `pebble-pile' formation: τ_s ≳ 0.05 ln(Q^(1/2)/Z_d)/ln(1 + 10 α^(1/4)) ~ 0.3 ψ(Q, Z, α), where τ_s = t_s Ω is the dimensionless particle stopping time. In a minimum-mass solar nebula, this requires grains larger than a = (50, 1, 0.1) cm at r = (1, 30, 100) au. This may easily occur beyond the ice line, but at small radii would depend on the existence of large boulders. Because density fluctuations depend strongly on τ_s (inversely proportional to disc surface density), lower density discs are more unstable. Conditions for pebble-pile formation also become more favourable around lower mass, cooler stars.
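    The quoted criterion is straightforward to evaluate numerically; a small Python sketch (the parameter values below are illustrative, not taken from the paper):

    ```python
    import math

    def tau_s_threshold(Q, Z_d, alpha):
        """Minimum dimensionless stopping time tau_s = t_s * Omega for
        'pebble-pile' formation: 0.05 ln(Q^(1/2)/Z_d) / ln(1 + 10 alpha^(1/4)),
        with Q the Toomre parameter, Z_d the dust-to-gas ratio, and alpha
        the turbulence parameter."""
        return 0.05 * math.log(math.sqrt(Q) / Z_d) / math.log(1.0 + 10.0 * alpha ** 0.25)
    ```

    For Q = 2, Z_d = 0.01 and α = 10^-4, the threshold comes out of order the quoted ~0.3; lowering the dust-to-gas ratio Z_d raises the required stopping time.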

  18. Temporal variation in bird counts within a Hawaiian rainforest

    USGS Publications Warehouse

    Simon, John C.; Pratt, T.K.; Berlin, Kim E.; Kowalsky, James R.; Fancy, S.G.; Hatfield, J.S.

    2002-01-01

    We studied monthly and annual variation in density estimates of nine forest bird species along an elevational gradient in an east Maui rainforest. We conducted monthly variable circular-plot counts for 36 consecutive months along transects running downhill from timberline. Density estimates were compared by month, year, and station for all resident bird species with sizeable populations, including four native nectarivores, two native insectivores, a non-native insectivore, and two non-native generalists. We compared densities among three elevational strata and between breeding and nonbreeding seasons. All species showed significant differences in density estimates among months and years. Three native nectarivores had higher density estimates within their breeding season (December-May) and showed decreases during periods of low nectar production following the breeding season. All insectivore and generalist species except one had higher density estimates within their March-August breeding season. Density estimates also varied with elevation for all species, and for four species a seasonal shift in population was indicated. Our data show that the best time to conduct counts for native forest birds on Maui is January-February, when birds are breeding or preparing to breed, counts are typically high, variability in density estimates is low, and the likelihood for fair weather is best. Temporal variations in density estimates documented in our study site emphasize the need for consistent, well-researched survey regimens and for caution when drawing conclusions from, or basing management decisions on, survey data.

  19. Probabilistic prediction models for aggregate quarry siting

    USGS Publications Warehouse

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

    Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
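    As an illustration of the logistic-regression half of this approach, here is a minimal Python sketch that fits P(quarry) from predictor variables by gradient ascent. The data and variable names are hypothetical, and the study's actual models also incorporate weights-of-evidence:

    ```python
    import math

    def fit_logistic(X, y, lr=0.1, steps=2000):
        """Plain batch gradient-ascent logistic regression:
        P(quarry) = sigmoid(w0 + w . x)."""
        w = [0.0] * (len(X[0]) + 1)          # intercept + one weight per variable
        for _ in range(steps):
            grad = [0.0] * len(w)
            for x, t in zip(X, y):
                z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
                p = 1.0 / (1.0 + math.exp(-z))
                err = t - p                   # log-likelihood gradient term
                grad[0] += err
                for j, xi in enumerate(x):
                    grad[j + 1] += err * xi
            w = [wi + lr * g / len(X) for wi, g in zip(w, grad)]
        return w

    def prospectivity(w, x):
        """Predicted probability of quarry development at a site x."""
        z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
        return 1.0 / (1.0 + math.exp(-z))
    ```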

  20. Iterative initial condition reconstruction

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias

    2017-07-01

    Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc^-1. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc^-1, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.

  1. Studies in High Current Density Ion Sources for Heavy Ion Fusion Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacon-Golcher, Edwin

    This dissertation develops diverse research on small (diameter ~ few mm), high current density (J ~ several tens of mA/cm^2) heavy ion sources. The research has been developed in the context of a programmatic interest within the Heavy Ion Fusion (HIF) Program to explore alternative architectures in the beam injection systems that use the merging of small, bright beams. An ion gun was designed and built for these experiments. Results of average current density yield at different operating conditions are presented for K+ and Cs+ contact ionization sources and potassium aluminum silicate sources. Maximum values for a K+ beam of ~90 mA/cm^2 were observed in 2.3 μs pulses. Measurements of beam intensity profiles and emittances are included. Measurements of neutral particle desorption are presented at different operating conditions, which lead to a better understanding of the underlying atomic diffusion processes that determine the lifetime of the emitter. Estimates of diffusion times consistent with measurements are presented, as well as estimates of maximum repetition rates achievable. Diverse studies performed on the composition and preparation of alkali aluminosilicate ion sources are also presented. In addition, this work includes preliminary work carried out exploring the viability of an argon plasma ion source and a bismuth metal vapor vacuum arc (MEVVA) ion source. For the former ion source, fast rise-times (~1 μs), high current densities (~100 mA/cm^2) and low operating pressures (<2 mtorr) were verified. For the latter, high but acceptable levels of beam emittance were measured (ε_n ≤ 0.006 π·mm·mrad), although measured currents differed from the desired ones (I ~ 5 mA) by about a factor of 10.

  2. Comparing methods to estimate Reineke’s maximum size-density relationship species boundary line slope

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2010-01-01

    Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...
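    One common way to estimate an MSDR slope — not necessarily one of the three methods compared, which the excerpt does not name — is ordinary least squares on the log-log scale, where Reineke's rule is linear. A minimal Python sketch with hypothetical data:

    ```python
    import math

    def reineke_slope(quadratic_mean_diameters, stand_densities):
        """OLS slope of ln(trees per unit area) on ln(quadratic mean diameter).
        Reineke's classic boundary-line value is about -1.605."""
        xs = [math.log(d) for d in quadratic_mean_diameters]
        ys = [math.log(n) for n in stand_densities]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        return sxy / sxx
    ```

    Note that fitting through all plots (as here) estimates an average relationship; boundary-line methods instead fit only the maximally stocked stands.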

  3. Evaluation of line transect sampling based on remotely sensed data from underwater video

    USGS Publications Warehouse

    Bergstedt, R.A.; Anderson, D.R.

    1990-01-01

    We used underwater video in conjunction with the line transect method and a Fourier series estimator to make 13 independent estimates of the density of known populations of bricks lying on the bottom in shallows of Lake Huron. The pooled estimate of density (95.5 bricks per hectare) was close to the true density (89.8 per hectare), and there was no evidence of bias. Confidence intervals for the individual estimates included the true density 85% of the time instead of the nominal 95%. Our results suggest that reliable estimates of the density of objects on a lake bed can be obtained by the use of remote sensing and line transect sampling theory.
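    The study used a Fourier-series detection-function estimator; as a sketch of the same line-transect logic, here is a half-normal version in Python (a common alternative estimator, not the one used in the study, with hypothetical data):

    ```python
    import math

    def line_transect_density(perp_distances_m, transect_length_m):
        """Half-normal line-transect estimator: D = n / (2 * L * mu), where
        mu is the effective strip half-width, i.e. the integral of the
        detection function g(x) = exp(-x^2 / (2 sigma^2))."""
        n = len(perp_distances_m)
        sigma2 = sum(x * x for x in perp_distances_m) / n   # half-normal MLE
        mu = math.sqrt(math.pi * sigma2 / 2.0)              # integral of g(x)
        return n / (2.0 * transect_length_m * mu)           # objects per m^2
    ```

    Multiplying the result by 10,000 converts objects per m^2 to objects per hectare, the unit used in the abstract.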

  4. Toward accurate and precise estimates of lion density.

    PubMed

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km^2, and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability.
We therefore call for a unified framework to assess lion numbers in key populations to improve management and policy decisions. © 2016 Society for Conservation Biology.

  5. Northern elephant seals adjust gliding and stroking patterns with changes in buoyancy: validation of at-sea metrics of body density.

    PubMed

    Aoki, Kagari; Watanabe, Yuuki Y; Crocker, Daniel E; Robinson, Patrick W; Biuw, Martin; Costa, Daniel P; Miyazaki, Nobuyuki; Fedak, Mike A; Miller, Patrick J O

    2011-09-01

    Many diving animals undergo substantial changes in their body density that are the result of changes in lipid content over their annual fasting cycle. Because the size of the lipid stores reflects an integration of foraging effort (energy expenditure) and foraging success (energy assimilation), measuring body density is a good way to track net resource acquisition of free-ranging animals while at sea. Here, we experimentally altered the body density and mass of three free-ranging elephant seals by remotely detaching weights and floats while monitoring their swimming speed, depth and three-axis acceleration with a high-resolution data logger. Cross-validation of three methods for estimating body density from hydrodynamic gliding performance of freely diving animals showed strong positive correlation with body density estimates obtained from isotope dilution body composition analysis over density ranges of 1015 to 1060 kg m^-3. All three hydrodynamic models were within 1% of, but slightly greater than, body density measurements determined by isotope dilution, and therefore have the potential to track changes in body condition of a wide range of freely diving animals. Gliding during ascent and descent clearly increased and stroke rate decreased when buoyancy manipulations aided the direction of vertical transit, but ascent and descent speed were largely unchanged. The seals adjusted stroking intensity to maintain swim speed within a narrow range, despite changes in buoyancy. During active swimming, all three seals increased the amplitude of lateral body accelerations and two of the seals altered stroke frequency in response to the need to produce thrust required to overcome combined drag and buoyancy forces.

  6. Nonlinear system theory: Another look at dependence

    PubMed Central

    Wu, Wei Biao

    2005-01-01

    Based on the nonlinear system theory, we introduce previously undescribed dependence measures for stationary causal processes. Our physical and predictive dependence measures quantify the degree of dependence of outputs on inputs in physical systems. The proposed dependence measures provide a natural framework for a limit theory for stationary processes. In particular, under conditions with quite simple forms, we present limit theorems for partial sums, empirical processes, and kernel density estimates. The conditions are mild and easily verifiable because they are directly related to the data-generating mechanisms. PMID:16179388

  7. The Impact of Back-Sputtered Carbon on the Accelerator Grid Wear Rates of the NEXT and NSTAR Ion Thrusters

    NASA Technical Reports Server (NTRS)

    Soulas, George C.

    2013-01-01

    A study was conducted to quantify the impact of back-sputtered carbon on the downstream accelerator grid erosion rates of NASA's Evolutionary Xenon Thruster (NEXT) Long Duration Test (LDT1). A similar analysis that was conducted for NASA's Solar Electric Propulsion Technology Applications Readiness Program (NSTAR) Life Demonstration Test (LDT2) was used as a foundation for the analysis developed herein. A new carbon surface coverage model was developed that accounted for multiple carbon adlayers before complete surface coverage is achieved. The resulting model requires knowledge of more model inputs, so they were conservatively estimated using the results of past thin film sputtering studies and particle reflection predictions. In addition, accelerator current densities across the grid were rigorously determined using an ion optics code to determine accelerator current distributions and an algorithm to determine beam current densities along a grid using downstream measurements. The improved analysis was applied to the NSTAR test results for evaluation. The improved analysis demonstrated that the impact of back-sputtered carbon on pit and groove wear rate for the NSTAR LDT2 was negligible throughout most of the eroded grid radius. The improved analysis also predicted the accelerator current density for transition from net erosion to net deposition considerably more accurately than the original analysis. The improved analysis was used to estimate the impact of back-sputtered carbon on the accelerator grid pit and groove wear rate of the NEXT Long Duration Test (LDT1). Unlike the NSTAR analysis, the NEXT analysis was more challenging because the thruster was operated for extended durations at various operating conditions and was unavailable for measurements because the test is ongoing. As a result, the NEXT LDT1 estimates presented herein are considered preliminary until the results of future post-test analyses are incorporated.
The worst-case impact of carbon back-sputtering was determined to occur at the full power operating condition, but even there the maximum impact of back-sputtered carbon was only a 4 percent reduction in wear rate. As a result, back-sputtered carbon is estimated to have an insignificant impact on the first failure mode of the NEXT LDT1 at all operating conditions.

  8. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
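
    The simulation described above can be sketched in miniature: the snippet below (a simplified stand-in for the authors' GIS framework, with plot size, item density, and core dimensions chosen purely for illustration) scatters items uniformly in a plot, samples them with circular cores, and reports the bias and coefficient of variation of the resulting density estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_core_sampling(true_density, core_area_cm2, n_cores,
                           plot_side_m=5.0, n_reps=100):
    """Monte Carlo bias/CV of core-sampling density estimates, random item placement."""
    core_area_m2 = core_area_cm2 / 1e4
    core_radius = np.sqrt(core_area_m2 / np.pi)
    n_items = int(true_density * plot_side_m ** 2)
    estimates = np.empty(n_reps)
    for rep in range(n_reps):
        items = rng.uniform(0.0, plot_side_m, size=(n_items, 2))
        # Keep core centres fully inside the plot to avoid edge effects.
        centres = rng.uniform(core_radius, plot_side_m - core_radius,
                              size=(n_cores, 2))
        count = 0
        for c in centres:
            d2 = ((items - c) ** 2).sum(axis=1)
            count += int((d2 <= core_radius ** 2).sum())
        estimates[rep] = count / (n_cores * core_area_m2)
    bias = estimates.mean() - true_density
    cv = estimates.std() / estimates.mean()
    return bias, cv

# 20 cores of 50 cm^2 each in a 25 m^2 plot holding 1000 items/m^2.
bias, cv = simulate_core_sampling(true_density=1000, core_area_cm2=50, n_cores=20)
```

    Clumped item distributions and time-cost accounting, which the study also examined, would require extending this sketch with a clustered point process and per-core handling times.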

  9. Challenges in devising economic spray thresholds for a major pest of Australian canola, the redlegged earth mite (Halotydeus destructor).

    PubMed

    Arthur, Aston L; Hoffmann, Ary A; Umina, Paul A

    2015-10-01

    A key component for spray decision-making in IPM programmes is the establishment of economic injury levels (EILs) and economic thresholds (ETs). We aimed to establish an EIL for the redlegged earth mite (Halotydeus destructor Tucker) on canola. Complex interactions between mite numbers, feeding damage and plant recovery were found, highlighting the challenges in linking H. destructor numbers to yield. A guide of 10 mites plant(-1) was established at the first-true-leaf stage; however, simple relationships were not evident at other crop development stages, making it difficult to establish reliable EILs based on mite number. Yield was, however, strongly associated with plant damage and plant densities, reflecting the impact of mite feeding damage and indicating a plant-based alternative for establishing thresholds for H. destructor. Drawing on data from multiple field trials, we show that plant densities below 30-40 plants m(-2) could be used as a proxy for mite damage when reliable estimates of mite densities are not possible. This plant-based threshold provides a practical tool that avoids the difficulties of accurately estimating mite densities. The approach may be applicable to other situations where production conditions are unpredictable and interactions between pests and plant hosts are complex. © 2015 Society of Chemical Industry.

  10. Leaf-on canopy closure in broadleaf deciduous forests predicted during winter

    USGS Publications Warehouse

    Twedt, Daniel J.; Ayala, Andrea J.; Shickel, Madeline R.

    2015-01-01

    Forest canopy influences light transmittance, which in turn affects tree regeneration and survival, thereby having an impact on forest composition and habitat conditions for wildlife. Because leaf area is the primary impediment to light penetration, quantitative estimates of canopy closure are normally made during summer. Studies of forest structure and wildlife habitat that occur during winter, when deciduous trees have shed their leaves, may inaccurately estimate canopy closure. We estimated percent canopy closure during both summer (leaf-on) and winter (leaf-off) in broadleaf deciduous forests in Mississippi and Louisiana using gap light analysis of hemispherical photographs that were obtained during repeat visits to the same locations within bottomland and mesic upland hardwood forests and hardwood plantation forests. We used mixed-model linear regression to predict leaf-on canopy closure from measurements of leaf-off canopy closure, basal area, stem density, and tree height. Competing predictive models all included leaf-off canopy closure (relative importance = 0.93), whereas basal area and stem density, more traditional predictors of canopy closure, had relative model importance of ≤ 0.51.
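
    The prediction step can be illustrated with ordinary least squares on synthetic plot data. The variable ranges and coefficients below are invented for illustration only and are not the study's fitted model; they merely echo its finding that leaf-off closure dominates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plots: closure as a fraction, basal area in m^2/ha, stems/ha.
n = 120
leaf_off = rng.uniform(0.2, 0.7, n)
basal_area = rng.uniform(5.0, 40.0, n)
stem_density = rng.uniform(200.0, 1500.0, n)
# Simulated "truth" in which leaf-off closure carries most of the signal.
leaf_on = np.clip(0.3 + 0.8 * leaf_off + 0.001 * basal_area
                  + rng.normal(0.0, 0.03, n), 0.0, 1.0)

# OLS: leaf-on closure ~ leaf-off closure + basal area + stem density.
X = np.column_stack([np.ones(n), leaf_off, basal_area, stem_density])
coef, *_ = np.linalg.lstsq(X, leaf_on, rcond=None)
pred = X @ coef
r2 = 1.0 - ((leaf_on - pred) ** 2).sum() / ((leaf_on - leaf_on.mean()) ** 2).sum()
```

    In this toy setting, as in the study, the leaf-off coefficient dominates while basal area and stem density contribute little.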

  11. Dimensional Analysis on Forest Fuel Bed Fire Spread.

    PubMed

    Yang, Jiann C

    2018-01-01

    A dimensional analysis was performed to correlate the fuel bed fire rate of spread data previously reported in the literature. Under wind condition, six pertinent dimensionless groups were identified, namely dimensionless fire spread rate, dimensionless fuel particle size, fuel moisture content, dimensionless fuel bed depth or dimensionless fuel loading density, dimensionless wind speed, and angle of inclination of fuel bed. Under no-wind condition, five similar dimensionless groups resulted. Given the uncertainties associated with some of the parameters used to estimate the dimensionless groups, the dimensionless correlations using the resulting dimensionless groups correlate the fire rates of spread reasonably well under wind and no-wind conditions.

  12. Influence of the volume and density functions within geometric models for estimating trunk inertial parameters.

    PubMed

    Wicke, Jason; Dumas, Genevieve A

    2010-02-01

    The geometric method combines a volume and a density function to estimate body segment parameters and has the best opportunity for developing the most accurate models. In the trunk, there are many different tissues that greatly differ in density (e.g., bone versus lung). Thus, the density function for the trunk must be particularly sensitive to capture this diversity, such that accurate inertial estimates are possible. Three different models were used to test this hypothesis by estimating trunk inertial parameters of 25 female and 24 male college-aged participants. The outcome of this study indicates that the inertial estimates for the upper and lower trunk are most sensitive to the volume function and not very sensitive to the density function. Although it appears that the uniform density function has a greater influence on inertial estimates in the lower trunk region than in the upper trunk region, this is likely due to the (overestimated) density value used. When geometric models are used to estimate body segment parameters, care must be taken in choosing a model that can accurately estimate segment volumes. Researchers wanting to develop accurate geometric models should focus on the volume function, especially in unique populations (e.g., pregnant or obese individuals).

  13. Using Passive Sensing to Estimate Relative Energy Expenditure for Eldercare Monitoring

    PubMed Central

    2012-01-01

    This paper describes ongoing work in analyzing sensor data logged in the homes of seniors. An estimation of relative energy expenditure is computed using motion density from passive infrared motion sensors mounted in the environment. We introduce a new algorithm for detecting visitors in the home using motion sensor data and a set of fuzzy rules. The visitor algorithm, as well as a previous algorithm for identifying time-away-from-home (TAFH), are used to filter the logged motion sensor data. Thus, the energy expenditure estimate uses data collected only when the resident is home alone. Case studies are included from TigerPlace, an Aging in Place community, to illustrate how the relative energy expenditure estimate can be used to track health conditions over time. PMID:25266777
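
    The motion-density idea can be roughly sketched as counting PIR firings per hour in sliding windows. The event timestamps and window sizes below are synthetic placeholders, not TigerPlace data, and the real system additionally filters out visitor and time-away-from-home periods.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic PIR firing times over one day, in hours.
event_times_h = np.sort(rng.uniform(0.0, 24.0, 500))

def motion_density(events, window_h=1.0, step_h=0.25):
    """Firings per hour in sliding windows: returns (window starts, densities)."""
    starts = np.arange(0.0, 24.0 - window_h + 1e-9, step_h)
    dens = np.array([((events >= s) & (events < s + window_h)).sum() / window_h
                     for s in starts])
    return starts, dens

starts, dens = motion_density(event_times_h)
```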

  14. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e., the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
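
    A transition density p(x_t | x_{t-1}) can be approximated nonparametrically as the ratio of a joint to a marginal kernel density estimate. The sketch below illustrates this general idea on a synthetic AR(1) trajectory; it is not the authors' specific methodology, and the model and parameters are placeholders.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Synthetic hidden-state trajectory from a stand-in AR(1) model.
T = 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.7 * x[t - 1] + rng.normal(0.0, 0.5)

pairs = np.vstack([x[:-1], x[1:]])   # rows: x_{t-1} and x_t
joint = gaussian_kde(pairs)          # nonparametric joint density p(x_{t-1}, x_t)
marginal = gaussian_kde(x[:-1])      # density of the conditioning state

def transition_density(x_next, x_prev):
    """Estimate p(x_t | x_{t-1}) = p(x_{t-1}, x_t) / p(x_{t-1})."""
    return joint([[x_prev], [x_next]])[0] / marginal([x_prev])[0]

# Sanity check: the conditional density should integrate to ~1 over x_t.
grid = np.linspace(-4.0, 4.0, 400)
vals = np.array([transition_density(g, 0.5) for g in grid])
mass = vals.sum() * (grid[1] - grid[0])
```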

  15. Choosing a DIVA: a comparison of emerging digital imagery vegetation analysis techniques

    USGS Publications Warehouse

    Jorgensen, Christopher F.; Stutzman, Ryan J.; Anderson, Lars C.; Decker, Suzanne E.; Powell, Larkin A.; Schacht, Walter H.; Fontaine, Joseph J.

    2013-01-01

    Question: What is the precision of five methods of measuring vegetation structure using ground-based digital imagery and processing techniques? Location: Lincoln, Nebraska, USA. Methods: Vertical herbaceous cover was recorded using digital imagery techniques at two distinct locations in a mixed-grass prairie. The precision of five ground-based digital imagery vegetation analysis (DIVA) methods for measuring vegetation structure was tested using a split-split plot analysis of covariance. Variability within each DIVA technique was estimated using the coefficient of variation of mean percentage cover. Results: Vertical herbaceous cover estimates differed among DIVA techniques. Additionally, environmental conditions affected the vertical vegetation obstruction estimates for certain digital imagery methods, while other techniques were more adept at handling various conditions. Overall, percentage vegetation cover values differed among techniques, but the precision of four of the five techniques was consistently high. Conclusions: DIVA procedures are sufficient for measuring various heights and densities of standing herbaceous cover. Moreover, digital imagery techniques can reduce measurement error associated with multiple observers' standing herbaceous cover estimates, allowing greater opportunity to detect patterns associated with vegetation structure.

  16. On the use of the noncentral chi-square density function for the distribution of helicopter spectral estimates

    NASA Technical Reports Server (NTRS)

    Garber, Donald P.

    1993-01-01

    A probability density function for the variability of ensemble averaged spectral estimates from helicopter acoustic signals in Gaussian background noise was evaluated. Numerical methods for calculating the density function and for determining confidence limits were explored. Density functions were predicted for both synthesized and experimental data and compared with observed spectral estimate variability.
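
    With modern libraries the noncentral chi-square density and confidence limits are straightforward to evaluate. The sketch below uses scipy's noncentral chi-square distribution; the number of averages and the signal-to-noise ratio are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.stats import ncx2

# Averaging K periodogram estimates of a sinusoid in Gaussian noise yields a
# scaled noncentral chi-square variable with 2K degrees of freedom.
K = 16                 # number of ensemble averages (illustrative)
df = 2 * K
snr = 3.0              # per-estimate signal-to-noise ratio (illustrative)
nc = 2 * K * snr       # noncentrality parameter

x = np.linspace(0.0, 300.0, 2000)
pdf = ncx2.pdf(x, df, nc)                      # density of the spectral estimate
lo, hi = ncx2.ppf([0.025, 0.975], df, nc)      # 95% confidence limits
mean = ncx2.mean(df, nc)                       # equals df + nc
```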

  17. Curve fitting of the corporate recovery rates: the comparison of Beta distribution estimation and kernel density estimation.

    PubMed

    Chen, Rongda; Wang, Ze

    2013-01-01

    Recovery rate is essential to the estimation of a portfolio's loss and economic capital. Neglecting the randomness of the distribution of the recovery rate may underestimate risk. This study introduces two distribution models, Beta distribution estimation and kernel density estimation, to simulate the distribution of recovery rates of corporate loans and bonds. Models based on the Beta distribution are in common use, including CreditMetrics by J.P. Morgan, Portfolio Manager by KMV, and LossCalc by Moody's. However, the Beta distribution has a critical limitation: it cannot fit bimodal or multimodal distributions, such as the recovery rates of corporate loans and bonds in Moody's new data. To overcome this flaw, kernel density estimation is introduced, and the simulation results from the histogram, Beta distribution estimation, and kernel density estimation are compared. The comparison shows that the Gaussian kernel density estimate better captures the bimodal or multimodal distribution of recovery rates of corporate loans and bonds. Finally, a Chi-square test of the Gaussian kernel density estimate confirms that it fits the curve of recovery rates of loans and bonds. Using the kernel density estimate to delineate the bimodal recovery rates of bonds is therefore preferable in credit risk management.
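
    The core comparison can be reproduced in miniature: fit a single Beta by the method of moments and a Gaussian kernel density estimate to a synthetic bimodal sample, then count interior modes. The mixture below only mimics the general shape of bimodal recovery-rate data; it is not Moody's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic bimodal "recovery rates" on (0, 1): a low- and a high-recovery mode.
recovery = np.concatenate([rng.beta(2, 8, 600), rng.beta(8, 2, 400)])

# Single Beta fitted by the method of moments.
m, v = recovery.mean(), recovery.var()
common = m * (1.0 - m) / v - 1.0
a_hat, b_hat = m * common, (1.0 - m) * common

grid = np.linspace(0.01, 0.99, 500)
beta_pdf = stats.beta.pdf(grid, a_hat, b_hat)
kde_pdf = stats.gaussian_kde(recovery)(grid)   # Gaussian kernel density estimate

def n_interior_modes(y):
    """Count strict local maxima away from the grid endpoints."""
    return int(((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])).sum())
```

    The KDE recovers both modes, while the single Beta, forced to match only the first two moments, cannot show more than one interior mode.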

  19. Application of terahertz pulse imaging as PAT tool for non-destructive evaluation of film-coated tablets under different manufacturing conditions.

    PubMed

    Dohi, Masafumi; Momose, Wataru; Yoshino, Hiroyuki; Hara, Yuko; Yamashita, Kazunari; Hakomori, Tadashi; Sato, Shusaku; Terada, Katsuhide

    2016-02-05

    Film-coated tablets (FCTs) are a popular solid dosage form in the pharmaceutical industry. Manufacturing conditions during the film-coating process affect the properties of the film layer, which might result in critical quality problems. Here, we analyzed the properties of the film layer using a non-destructive approach with terahertz pulsed imaging (TPI). Hydrophilic tablets that become distended upon water absorption were used as core tablets and coated with film under different manufacturing conditions. TPI-derived parameters such as film thickness (FT), film surface reflectance (FSR), and interface density difference (IDD) between the film layer and core tablet were affected by manufacturing conditions and influenced critical quality attributes of FCTs. The relative standard deviation of FSR within tablets correlated well with surface roughness. Tensile strength could be predicted in a non-destructive manner using a multivariate regression equation that estimates the core tablet density from the film layer density and IDD. The absolute value of IDD (Lateral) correlated with the risk of cracking on the lateral film layer when stored in a high-humidity environment. Further, in-process control of this value during the film-coating process was proposed, which would enable a feedback control system to be applied to process parameters and reduce the risk of cracking without a stability test. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Worldwide organic soil carbon and nitrogen data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zinke, P.J.; Stangenberger, A.G.; Post, W.M.

    The objective of the research presented in this package was to identify data that could be used to estimate the size of the soil organic carbon pool under relatively undisturbed soil conditions. A subset of the data can be used to estimate amounts of soil carbon storage at equilibrium with natural soil-forming factors. The magnitudes of the soil properties so defined represent the resulting equilibrium values for carbon storage. Variation in these values is due to differences in local and geographic soil-forming factors. Therefore, information is included on location, soil nitrogen content, climate, and vegetation, along with carbon density and its variation.

  1. Evaluation of trapping-web designs

    USGS Publications Warehouse

    Lukacs, P.M.; Anderson, D.R.; Burnham, K.P.

    2005-01-01

    The trapping web is a method for estimating the density and abundance of animal populations. A Monte Carlo simulation study is performed to explore performance of the trapping web for estimating animal density under a variety of web designs and animal behaviours. The trapping web performs well when animals have home ranges, even if the home ranges are large relative to trap spacing. Webs should contain at least 90 traps. Trapping should continue for 5-7 occasions. Movement rates have little impact on density estimates when animals are confined to home ranges. Estimation is poor when animals do not have home ranges and movement rates are rapid. The trapping web is useful for estimating the density of animals that are hard to detect and occur at potentially low densities. © CSIRO 2005.

  2. Estimates of crystalline LiF thermal conductivity at high temperature and pressure by a Green-Kubo method

    DOE PAGES

    Jones, R. E.; Ward, D. K.

    2016-07-18

    Here, given the unique optical properties of LiF, it is often used as an observation window in high-temperature and -pressure experiments; hence, estimates of its transmission properties are necessary to interpret observations. Since direct measurements of the thermal conductivity of LiF at the appropriate conditions are difficult, we resort to molecular simulation methods. Using an empirical potential validated against ab initio phonon density of states, we estimate the thermal conductivity of LiF at high temperatures (1000–4000 K) and pressures (100–400 GPa) with the Green-Kubo method. We also compare these estimates to those derived directly from ab initio data. To ascertain the correct phase of LiF at these extreme conditions, we calculate the (relative) phase stability of the B1 and B2 structures using a quasiharmonic ab initio model of the free energy. We also estimate the thermal conductivity of LiF in a uniaxial loading state that emulates initial stages of compression in high-stress ramp loading experiments and show the degree of anisotropy induced in the conductivity due to deformation.
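
    The Green-Kubo relation itself, kappa = 1/(3 V k_B T^2) times the time integral of the heat-flux autocorrelation function, can be sketched with a model autocorrelation. Every numerical value below is a placeholder chosen for illustration, not an LiF result from the study.

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 2000.0             # temperature, K (placeholder)
V = 1.0e-26            # simulation cell volume, m^3 (placeholder)

# Model heat-flux autocorrelation <J(0).J(t)>: exponential decay (placeholder).
t = np.linspace(0.0, 5e-12, 5000)   # time, s
tau = 0.5e-12                       # correlation time, s
C0 = 1.0e-16                        # zero-time autocorrelation (placeholder units)
hcacf = C0 * np.exp(-t / tau)

# Green-Kubo: kappa = integral(HCACF) / (3 V kB T^2), trapezoidal integration.
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz
kappa = trapz(hcacf, t) / (3.0 * V * kB * T**2)

# For the exponential model the integral is C0 * tau, giving a closed form to check.
kappa_exact = C0 * tau / (3.0 * V * kB * T**2)
```

    In an actual molecular dynamics workflow the HCACF would come from the simulated heat-flux time series rather than a closed-form model.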

  3. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    PubMed

    Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data.
We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.
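
    One building block of such methods can be sketched directly: a half-normal detection function and the effective sampled area it implies for a single detector. The parameter values below are illustrative assumptions, not the gibbon-survey estimates, and a full SECR analysis would fit these parameters jointly with density by maximum likelihood.

```python
import numpy as np

g0, sigma = 0.8, 500.0   # detection probability at distance 0; spatial scale, m

def p_detect(d):
    """Half-normal probability of detecting a group calling at distance d (m)."""
    return g0 * np.exp(-d**2 / (2.0 * sigma**2))

# Effective sampled area of one detector: integrate p over the plane (polar form).
r = np.linspace(0.0, 5.0 * sigma, 20000)
esa = (p_detect(r) * 2.0 * np.pi * r).sum() * (r[1] - r[0])   # m^2

# Closed form for the half-normal: 2 * pi * sigma^2 * g0.
esa_exact = 2.0 * np.pi * sigma**2 * g0

# Naive density estimate: detected calling groups per effective sampled area.
n_detected = 12
density_per_km2 = n_detected / (esa / 1e6)
```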

  4. The changing contribution of top-down and bottom-up limitation of mesopredators during 220 years of land use and climate change.

    PubMed

    Pasanen-Mortensen, Marianne; Elmhagen, Bodil; Lindén, Harto; Bergström, Roger; Wallgren, Märtha; van der Velde, Ype; Cousins, Sara A O

    2017-05-01

    Apex predators may buffer bottom-up driven ecosystem change, as top-down suppression may dampen herbivore and mesopredator responses to increased resource availability. However, theory suggests that for this buffering capacity to be realized, the equilibrium abundance of apex predators must increase. This raises the question: will apex predators maintain herbivore/mesopredator limitation, if bottom-up change relaxes resource constraints? Here, we explore changes in mesopredator (red fox Vulpes vulpes) abundance over 220 years in response to eradication and recovery of an apex predator (Eurasian lynx Lynx lynx), and changes in land use and climate which are linked to resource availability. A three-step approach was used. First, recent data from Finland and Sweden were modelled to estimate linear effects of lynx density, land use and winter temperature on fox density. Second, lynx density, land use and winter temperature was estimated in a 22,650 km² focal area in boreal and boreo-nemoral Sweden in the years 1830, 1920, 2010 and 2050. Third, the models and estimates were used to project historic and future fox densities in the focal area. Projected fox density was lowest in 1830 when lynx density was high, winters cold and the proportion of cropland low. Fox density peaked in 1920 due to lynx eradication, a mesopredator release boosted by favourable bottom-up changes - milder winters and cropland expansion. By 2010, lynx recolonization had reduced fox density, but it remained higher than in 1830, partly due to the bottom-up changes. Comparing 1830 to 2010, the contribution of top-down limitation decreased, while environment enrichment relaxed bottom-up limitation. Future scenarios indicated that by 2050, lynx density would have to increase by 79% to compensate for a projected climate-driven increase in fox density. We highlight that although top-down limitation in theory can buffer bottom-up change, this requires compensatory changes in apex predator abundance.
Hence apex predator recolonization/recovery to historical levels would not be sufficient to compensate for widespread changes in climate and land use, which have relaxed the resource constraints for many herbivores and mesopredators. Variation in bottom-up conditions may also contribute to context dependence in apex predator effects. © 2017 The Authors. Journal of Animal Ecology © 2017 British Ecological Society.

  5. Modeling the effects of vegetation heterogeneity on wildland fire behavior

    NASA Astrophysics Data System (ADS)

    Atchley, A. L.; Linn, R.; Sieg, C.; Middleton, R. S.

    2017-12-01

    Vegetation structure and densities are known to drive fire-spread rate and burn severity. Many fire-spread models incorporate an average, homogeneous fuel density in the model domain to drive fire behavior. However, vegetation communities are rarely homogeneous and instead present significant heterogeneity in structure and fuel density in the fire's path. This results in observed patches of varied burn severity and mosaics of disturbed conditions that affect ecological recovery and hydrologic response. Consequently, to understand the interactions of fire and ecosystem functions, representations of spatially heterogeneous conditions need to be incorporated into fire models. Mechanistic models of fire disturbance offer insight into how fuel load characterization and distribution result in varied fire behavior. Here we use a physically based 3D combustion model, FIRETEC, that solves conservation of mass, momentum, energy, and chemical species to compare fire behavior on homogeneous representations against a heterogeneous vegetation distribution. Results demonstrate the impact vegetation heterogeneity has on the spread rate, intensity, and extent of simulated wildfires, thus providing valuable insight into predicted wildland fire evolution and an enhanced ability to estimate wildland fire inputs to regional and global climate models.

  6. Snowmobile impacts on snowpack physical and mechanical properties

    NASA Astrophysics Data System (ADS)

    Fassnacht, Steven R.; Heath, Jared T.; Venable, Niah B. H.; Elder, Kelly J.

    2018-03-01

    Snowmobile use is a popular form of winter recreation in Colorado, particularly on public lands. To examine the effects of differing levels of use on snowpack properties, experiments were performed at two different areas, Rabbit Ears Pass near Steamboat Springs and at Fraser Experimental Forest near Fraser, Colorado USA. Differences between no use and varying degrees of snowmobile use (low, medium and high) on shallow (the operational standard of 30 cm) and deeper snowpacks (120 cm) were quantified and statistically assessed using measurements of snow density, temperature, stratigraphy, hardness, and ram resistance from snow pit profiles. A simple model was explored that estimated snow density changes from snowmobile use based on experimental results. Snowpack property changes were more pronounced for thinner snow accumulations. When snowmobile use started in deeper snow conditions, there was less difference in density, hardness, and ram resistance compared to the control case of no snowmobile use. These results have implications for the management of snowmobile use in times and places of shallower snow conditions where underlying natural resources could be affected by denser and harder snowpacks.

  7. Electric field measurement in the dielectric tube of helium atmospheric pressure plasma jet

    NASA Astrophysics Data System (ADS)

    Sretenović, Goran B.; Guaitella, Olivier; Sobota, Ana; Krstić, Ivan B.; Kovačević, Vesna V.; Obradović, Bratislav M.; Kuraica, Milorad M.

    2017-03-01

    The results of electric field measurements in the capillary of a helium plasma jet are presented in this article. Distributions of the electric field for the streamers are determined for different gas flow rates. The electric field strength in front of the ionization wave is found to decrease as the wave approaches the exit of the tube. The values obtained under the presented experimental conditions are in the range of 5-11 kV/cm. An increase in gas flow above 1500 SCCM could induce substantial changes in the discharge operation, reflected in the formation of a brighter discharge region and the appearance of electric field maxima. Furthermore, using the measured values of the electric field strength in the streamer head, it was possible to estimate electron densities in the streamer channel. A maximal density of 4 × 10¹¹ cm⁻³ is obtained in the vicinity of the grounded ring electrode. The electron density distributions behave similarly to the electric field strength distributions under the studied experimental conditions.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel S. Clark; Nathaniel J. Fisch

    A critical issue in the generation of ultra-intense, ultra-short laser pulses by backward Raman scattering in plasma is the stability of the pumping pulse against premature backscatter from thermal fluctuations in the preformed plasma. Malkin et al. [V.M. Malkin, et al., Phys. Rev. Lett. 84 (6):1208-1211, 2000] demonstrated that density gradients may be used to detune the Raman resonance in such a way that backscatter of the pump from thermal noise can be stabilized while useful Raman amplification persists. Here, plasma conditions for which the pump is stable to thermal Raman backscatter in a homogeneous plasma, and the density gradients necessary to stabilize the pump for other plasma conditions, are quantified. Other ancillary constraints on a Raman amplifier are also considered to determine a specific region in the Te-ne plane where Raman amplification is feasible. By determining an operability region, the degree of uncertainty in density or temperature tolerable for an experimental Raman amplifier is thus also identified. The fluid code F3D, which includes the effects of thermal fluctuations, is used to verify these analytic estimates.

  9. Tropical insular fish assemblages are resilient to flood disturbance

    USGS Publications Warehouse

    Smith, William E.; Kwak, Thomas J.

    2015-01-01

    Periods of stable environmental conditions, favoring development of ecological communities regulated by density-dependent processes, are interrupted by random periods of disturbance that may restructure communities. Disturbance may affect populations via habitat alteration, mortality, or displacement. We quantified fish habitat conditions, density, and movement before and after a major flood disturbance in a Caribbean island tropical river using habitat surveys, fish sampling and population estimates, radio telemetry, and passively monitored PIT tags. Native stream fish populations showed evidence of acute mortality and downstream displacement of surviving fish. All fish species were reduced in number at most life stages after the disturbance, but populations responded with recruitment and migration into vacated upstream habitats. Changes in density were uneven among size classes for most species, indicating altered size structures. Rapid recovery processes at the population level appeared to dampen effects at the assemblage level, as fish assemblage parameters (species richness and diversity) were unchanged by the flooding. The native fish assemblage appeared resilient to flood disturbance, rapidly compensating for mortality and displacement with increased recruitment and recolonization of upstream habitats.

  10. Field evaluation of effect of temperature on release of Disparlure from a pheromone-baited trapping system used to monitor gypsy moth (Lepidoptera: Lymantriidae)

    Treesearch

    Patrick C. Tobin; Aijun Zhang; Ksenia Onufrieva; Donna Leonard

    2011-01-01

    Traps baited with disparlure, the synthetic form of the gypsy moth, Lymantria dispar (L.) (Lepidoptera: Lymantriidae), sex pheromone are used to detect newly founded populations and estimate population density across the United States. The lures used in trapping devices are exposed to field conditions with varying climates, which can affect the rate...

  11. Comparison of Drive Counts and Mark-Resight As Methods of Population Size Estimation of Highly Dense Sika Deer (Cervus nippon) Populations

    PubMed Central

    Takeshita, Kazutaka; Yoshida, Tsuyoshi; Igota, Hiromasa; Matsuura, Yukiko

    2016-01-01

    Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk for observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at a high deer density remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km2) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were taken from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts for all years in comparison to the MR method. Increased overestimation in drive count estimates after the winter mass mortality event may be due to a double count derived from increased deer movement and recovery of body condition secondary to the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are cost-prohibitive to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer the deer abundance needs to be reconsidered. PMID:27711181
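The abstract does not detail the MR estimator used on Nakanoshima Island; a standard mark-resight sketch is Chapman's bias-corrected Lincoln-Petersen estimator. All survey counts and the island area below are hypothetical illustrations, not figures from the study:

```python
def chapman_estimate(marked, sighted, resighted):
    """Chapman's bias-corrected Lincoln-Petersen estimator:
    N_hat = (M + 1)(C + 1) / (R + 1) - 1, where M animals carry marks,
    C animals are sighted on a resight survey, and R of those are marked."""
    return (marked + 1) * (sighted + 1) / (resighted + 1) - 1

# Hypothetical survey: 50 marked deer, 120 sightings, 30 of them marked.
n_hat = chapman_estimate(50, 120, 30)   # roughly 198 animals
density = n_hat / 5.0                   # individuals/km^2 for an assumed 5 km^2 area
```

Unlike a single drive count, repeating such surveys yields a sampling distribution for N_hat, which is what permits the confidence intervals the abstract says drive counts lack.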

  12. Trapped Inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Daniel; Horn, Bart; /SLAC /Stanford U., Phys. Dept.

    2009-06-19

    We analyze a distinctive mechanism for inflation in which particle production slows down a scalar field on a steep potential, and show how it descends from angular moduli in string compactifications. The analysis of density perturbations - taking into account the integrated effect of the produced particles and their quantum fluctuations - requires somewhat new techniques that we develop. We then determine the conditions for this effect to produce sixty e-foldings of inflation with the correct amplitude of density perturbations at the Gaussian level, and show that these requirements can be straightforwardly satisfied. Finally, we estimate the amplitude of the non-Gaussianity in the power spectrum and find a significant equilateral contribution.

  13. Density estimates of monarch butterflies overwintering in central Mexico

    PubMed Central

    Diffendorfer, Jay E.; López-Hoffman, Laura; Oberhauser, Karen; Pleasants, John; Semmens, Brice X.; Semmens, Darius; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations. PMID:28462031
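The mixture construction described above can be sketched as follows. The six component values and the per-study coefficient of variation here are hypothetical stand-ins spanning the published 6.9-60.9 million ha−1 range, not the actual estimates from the paper:

```python
import math
import random
import statistics

rng = random.Random(0)

# Hypothetical component estimates (millions of butterflies per hectare)
# spanning the published 6.9-60.9 range; the paper's actual six estimates
# and their uncertainties are not reproduced here.
component_means = [6.9, 12.0, 21.0, 28.0, 45.0, 60.9]
cv = 0.3                                   # assumed per-study coefficient of variation
sigma = math.sqrt(math.log(1.0 + cv**2))   # lognormal shape giving that CV

# Equal-weight lognormal mixture: pick a study, then draw a density from it.
draws = []
for _ in range(100_000):
    m = rng.choice(component_means)
    mu = math.log(m) - sigma**2 / 2.0      # so each component has mean m
    draws.append(rng.lognormvariate(mu, sigma))

mix_mean = statistics.fmean(draws)         # near the average of the six means
mix_median = statistics.median(draws)      # smaller: the mixture is right-skewed
```

The median falling below the mean reproduces the qualitative point in the abstract: an approximately log-normal mixture is better summarized by its median than its mean.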

  14. Density estimates of monarch butterflies overwintering in central Mexico

    USGS Publications Warehouse

    Thogmartin, Wayne E.; Diffendorfer, James E.; Lopez-Hoffman, Laura; Oberhauser, Karen; Pleasants, John M.; Semmens, Brice X.; Semmens, Darius J.; Taylor, Orley R.; Wiederholt, Ruscena

    2017-01-01

    Given the rapid population decline and recent petition for listing of the monarch butterfly (Danaus plexippus L.) under the Endangered Species Act, an accurate estimate of the Eastern, migratory population size is needed. Because of difficulty in counting individual monarchs, the number of hectares occupied by monarchs in the overwintering area is commonly used as a proxy for population size, which is then multiplied by the density of individuals per hectare to estimate population size. There is, however, considerable variation in published estimates of overwintering density, ranging from 6.9–60.9 million ha−1. We develop a probability distribution for overwinter density of monarch butterflies from six published density estimates. The mean density among the mixture of the six published estimates was ∼27.9 million butterflies ha−1 (95% CI [2.4–80.7] million ha−1); the mixture distribution is approximately log-normal, and as such is better represented by the median (21.1 million butterflies ha−1). Based upon assumptions regarding the number of milkweed needed to support monarchs, the amount of milkweed (Asclepias spp.) lost (0.86 billion stems) in the northern US plus the amount of milkweed remaining (1.34 billion stems), we estimate >1.8 billion stems is needed to return monarchs to an average population size of 6 ha. Considerable uncertainty exists in this required amount of milkweed because of the considerable uncertainty occurring in overwinter density estimates. Nevertheless, the estimate is on the same order as other published estimates. The studies included in our synthesis differ substantially by year, location, method, and measures of precision. A better understanding of the factors influencing overwintering density across space and time would be valuable for increasing the precision of conservation recommendations.

  15. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
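The model form implied above, a logistic function whose covariates include the target regeneration density, can be sketched as follows; all coefficient values are invented for illustration and are not the fitted values from the paper:

```python
import math

def regen_probability(density, stand_vars, coefs):
    """Logistic model: P(regeneration at >= `density` trees/ha | covariates).
    coefs = (intercept, b_density, b_x1, ...); values below are hypothetical."""
    z = coefs[0] + coefs[1] * density + sum(b * x for b, x in zip(coefs[2:], stand_vars))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative (made-up) fit: probability falls as the target density rises.
p = regen_probability(density=1500, stand_vars=[0.6], coefs=(2.0, -0.001, 0.5))
```

When fitting, the binary response is 1 if the observed regeneration density meets the specified level and 0 otherwise, which is the coding the abstract describes.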

  16. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect

    PubMed Central

    Ku, Bon Ki; Evans, Douglas E.

    2015-01-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as “Maynard’s estimation method”) is used. Therefore, it is necessary to quantitatively investigate how much the Maynard’s estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard’s estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard’s estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. 
The results indicate that the use of particle density of agglomerates improves the accuracy of the Maynard’s estimation method and that an effective density should be taken into account, when known, when estimating aerosol surface area of nonspherical aerosol such as open agglomerates and fibrous particles. PMID:26526560

  17. Investigation of Aerosol Surface Area Estimation from Number and Mass Concentration Measurements: Particle Density Effect.

    PubMed

    Ku, Bon Ki; Evans, Douglas E

    2012-04-01

    For nanoparticles with nonspherical morphologies, e.g., open agglomerates or fibrous particles, it is expected that the actual density of agglomerates may be significantly different from the bulk material density. It is further expected that using the material density may upset the relationship between surface area and mass when a method for estimating aerosol surface area from number and mass concentrations (referred to as "Maynard's estimation method") is used. Therefore, it is necessary to quantitatively investigate how much the Maynard's estimation method depends on particle morphology and density. In this study, aerosol surface area estimated from number and mass concentration measurements was evaluated and compared with values from two reference methods: a method proposed by Lall and Friedlander for agglomerates and a mobility based method for compact nonspherical particles using well-defined polydisperse aerosols with known particle densities. Polydisperse silver aerosol particles were generated by an aerosol generation facility. Generated aerosols had a range of morphologies, count median diameters (CMD) between 25 and 50 nm, and geometric standard deviations (GSD) between 1.5 and 1.8. The surface area estimates from number and mass concentration measurements correlated well with the two reference values when gravimetric mass was used. The aerosol surface area estimates from the Maynard's estimation method were comparable to the reference method for all particle morphologies within the surface area ratios of 3.31 and 0.19 for assumed GSDs 1.5 and 1.8, respectively, when the bulk material density of silver was used. The difference between the Maynard's estimation method and surface area measured by the reference method for fractal-like agglomerates decreased from 79% to 23% when the measured effective particle density was used, while the difference for nearly spherical particles decreased from 30% to 24%. 
The results indicate that the use of particle density of agglomerates improves the accuracy of the Maynard's estimation method and that an effective density should be taken into account, when known, when estimating aerosol surface area of nonspherical aerosol such as open agglomerates and fibrous particles.
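Maynard's estimation method rests on moment (Hatch-Choate) relations for an assumed lognormal, spherical-particle size distribution; a minimal sketch is below. The exact formulation and units are assumptions for illustration, not taken from the paper:

```python
import math

def maynard_surface_area(N, M, rho, gsd):
    """Estimate aerosol surface area concentration (cm^2/cm^3) from number
    concentration N (#/cm^3) and mass concentration M (g/cm^3), assuming
    spherical particles of density rho (g/cm^3) with a lognormal size
    distribution of geometric standard deviation `gsd`."""
    ln2 = math.log(gsd) ** 2
    # Count median diameter recovered from the third (mass) moment:
    cmd = (6.0 * M / (math.pi * rho * N * math.exp(4.5 * ln2))) ** (1.0 / 3.0)
    # Surface area from the second moment:
    return N * math.pi * cmd**2 * math.exp(2.0 * ln2)
```

Substituting a measured effective density for the bulk value changes the recovered diameter and hence the surface area estimate, which is the density effect the study quantifies for agglomerates.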

  18. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Received time-series at a short distance from the source allow the identification of distinct paths; four of these are the direct arrival, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.

  19. Investigation of estimators of probability density functions

    NASA Technical Reports Server (NTRS)

    Speed, F. M.

    1972-01-01

    Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.

  20. Bayesian inference based on stationary Fokker-Planck sampling.

    PubMed

    Berrones, Arturo

    2010-06-01

    A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm for arbitrary and unknown conditional densities. By the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. By the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need of a careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
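SFP generalizes the Gibbs sampler to cases where the conditionals are unknown; for reference, the ordinary Gibbs scheme it extends can be sketched on a bivariate normal target, where both full conditionals are known in closed form:

```python
import math
import random

def gibbs_bivariate_normal(rho, n, burn=500, seed=1):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Each full conditional is N(rho * other_coordinate, 1 - rho^2)."""
    rng = random.Random(seed)
    x = y = 0.0
    s = math.sqrt(1.0 - rho**2)
    out = []
    for i in range(n + burn):
        x = rng.gauss(rho * y, s)   # draw x | y
        y = rng.gauss(rho * x, s)   # draw y | x
        if i >= burn:
            out.append((x, y))
    return out
```

With rho = 0.8 the chain mixes more slowly than independent sampling but still recovers the target's moments; SFP replaces these exact conditionals with approximate analytical ones built from the stationary Fokker-Planck equation.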

  1. Insights into Spray Development from Metered-Dose Inhalers Through Quantitative X-ray Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason-Smith, Nicholas; Duke, Daniel J.; Kastengren, Alan L.

    Typical methods to study pMDI sprays employ particle sizing or visible light diagnostics, which suffer in regions of high spray density. X-ray techniques can be applied to pharmaceutical sprays to obtain information unattainable by conventional particle sizing and light-based techniques. We present a technique for obtaining quantitative measurements of spray density in pMDI sprays. A monochromatic focused X-ray beam was used to perform quantitative radiography measurements in the near-nozzle region and plume of HFA-propelled sprays. Measurements were obtained with a temporal resolution of 0.184 ms and spatial resolution of 5 μm. Steady flow conditions were reached after around 30 ms for the formulations examined with the spray device used. Spray evolution was affected by the inclusion of ethanol in the formulation and unaffected by the inclusion of 0.1% drug by weight. Estimation of the nozzle exit density showed that vapour is likely to dominate the flow leaving the inhaler nozzle during steady flow. Quantitative measurements in pMDI sprays allow the determination of nozzle exit conditions that are difficult to obtain experimentally by other means. Measurements of these nozzle exit conditions can improve understanding of the atomization mechanisms responsible for pMDI spray droplet and particle formation.

  2. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites: SURROGATE-BASED MCMC FOR CLM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    2016-07-04

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically-average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  3. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE PAGES

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; ...

    2016-06-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  4. On the applicability of surrogate-based MCMC-Bayesian inversion to the Community Land Model: Case studies at Flux tower sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. As a result, analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.

  5. On the applicability of surrogate-based Markov chain Monte Carlo-Bayesian inversion to the Community Land Model: Case studies at flux tower sites

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Liu, Ying; Swiler, Laura

    2016-07-01

    The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
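The calibration machinery above (a surrogate model plus MCMC over a posterior PDF) is not reproduced here; a minimal random-walk Metropolis sketch against a toy Gaussian flux posterior, with all numbers invented, illustrates the sampling step:

```python
import math
import random

def metropolis(logpost, x0, step, n, seed=0):
    """Random-walk Metropolis sampler: a minimal stand-in for the
    surrogate-based MCMC used to build site-specific posterior PDFs."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lpc = logpost(cand)
        if rng.random() < math.exp(min(0.0, lpc - lp)):  # accept w.p. min(1, ratio)
            x, lp = cand, lpc
        chain.append(x)
    return chain

# Toy calibration: model(theta) = theta, one observed latent heat flux of
# 100 W/m^2 with sigma = 5 and a flat prior, so the posterior is N(100, 5^2).
chain = metropolis(lambda t: -0.5 * ((t - 100.0) / 5.0) ** 2, 90.0, 4.0, 20000)
posterior = chain[5000:]   # discard burn-in
```

The mode and credibility interval reported in the papers above would then be read off the retained samples.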

  6. Extrathermodynamic interpretation of retention equilibria in reversed-phase liquid chromatography using octadecylsilyl-silica gels bonded to C1 and C18 ligands of different densities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyabe, Kanji; Guiochon, Georges A

    2005-09-01

    The retention behavior on silica gels bonded to C18 and C1 alkyl ligands of different densities was studied in reversed-phase liquid chromatography (RPLC) from the viewpoints of two extrathermodynamic relationships, enthalpy-entropy compensation (EEC) and linear free energy relationship (LFER). First, the four tests proposed by Krug et al. were applied to the values of the retention equilibrium constants (K) normalized by the alkyl ligand density. These tests showed that a real EEC of the retention equilibrium originates from substantial physico-chemical effects. Second, we derived a new model based on the EEC to explain the LFER between the retention equilibria under different RPLC conditions. The new model indicates how the slope and intercept of the LFER are correlated to the compensation temperatures derived from the EEC analyses and to several parameters characterizing the molecular contributions to the changes in enthalpy and entropy. Finally, we calculated K under various RPLC conditions from only one original experimental K datum by assuming that the contributions of the C18 and C1 ligands to K are additive and that their contributions are proportional to the density of each ligand. The estimated K values are in agreement with the corresponding experimental data, demonstrating that our model is useful to explain the variations of K due to changes in the RPLC conditions.

  7. Monitoring landscape metrics by point sampling: accuracy in estimating Shannon's diversity and edge density.

    PubMed

    Ramezani, Habib; Holm, Sören; Allard, Anna; Ståhl, Göran

    2010-05-01

    Environmental monitoring of landscapes is of increasing interest. To quantify landscape patterns, a number of metrics are used, of which Shannon's diversity, edge length, and density are studied here. As an alternative to complete mapping, point sampling was applied to estimate the metrics for already mapped landscapes selected from the National Inventory of Landscapes in Sweden (NILS). Monte-Carlo simulation was applied to study the performance of different designs. Random and systematic samplings were applied for four sample sizes and five buffer widths. The latter feature was relevant for edge length, since length was estimated through the number of points falling in buffer areas around edges. In addition, two landscape complexities were tested by applying two classification schemes with seven or 20 land cover classes to the NILS data. As expected, the root mean square error (RMSE) of the estimators decreased with increasing sample size. The estimators of both metrics were slightly biased, but the bias of Shannon's diversity estimator was shown to decrease when sample size increased. In the edge length case, an increasing buffer width resulted in larger bias due to the increased impact of boundary conditions; this effect was shown to be independent of sample size. However, we also developed adjusted estimators that eliminate the bias of the edge length estimator. The rates of decrease of RMSE with increasing sample size and buffer width were quantified by a regression model. Finally, indicative cost-accuracy relationships were derived showing that point sampling could be a competitive alternative to complete wall-to-wall mapping.
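The point-sampling idea, estimating a landscape metric from cover classes observed at random points rather than from a complete map, can be sketched for Shannon's diversity. The toy landscape below is synthetic; NILS data and the buffer-based edge length estimator are not reproduced:

```python
import math
import random
from collections import Counter

def shannon(proportions):
    """Shannon's diversity H = -sum p_i ln p_i over observed class proportions."""
    return -sum(p * math.log(p) for p in proportions if p > 0)

# Toy "mapped landscape": a 200 x 200 grid of land-cover class labels with
# class probabilities roughly 1/2, 1/3, 1/6.
rng = random.Random(42)
grid = [[rng.choice("AAABBC") for _ in range(200)] for _ in range(200)]

# Reference value from complete (wall-to-wall) mapping:
counts = Counter(c for row in grid for c in row)
total = sum(counts.values())
h_true = shannon([v / total for v in counts.values()])

# Point-sampling estimate: classify the cover at n random points.
n = 2000
sample = Counter(grid[rng.randrange(200)][rng.randrange(200)] for _ in range(n))
h_hat = shannon([v / n for v in sample.values()])
```

As the abstract notes, the plug-in estimator is slightly biased (downward, by roughly (k-1)/2n nats for k classes), and the bias shrinks as the number of sample points grows.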

  8. Characterization of a maximum-likelihood nonparametric density estimator of kernel type

    NASA Technical Reports Server (NTRS)

    Geman, S.; Mcclure, D. E.

    1982-01-01

    Kernel-type density estimators are calculated by the method of sieves. Proofs are presented for the characterization theorem: Let x(1), x(2), ..., x(n) be a random sample from a population with density f(0), let sigma > 0, and consider estimators f of f(0) defined by (1).
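For context, a plain fixed-bandwidth Gaussian kernel density estimator (without the sieve constraint) can be sketched as:

```python
import math

def kde(xs, data, h):
    """Gaussian kernel density estimate evaluated at each point in xs:
    f_hat(x) = (1 / (n h)) * sum_i K((x - x_i) / h), K standard normal."""
    n = len(data)
    c = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    return [c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) for x in xs]
```

The sieve approach of the report constrains the maximum-likelihood estimator to a class indexed by a smoothing parameter (sigma) that shrinks with sample size; the fixed-bandwidth form above is the familiar special case.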

  9. Local breast density assessment using reacquired mammographic images.

    PubMed

    García, Eloy; Diaz, Oliver; Martí, Robert; Diez, Yago; Gubern-Mérida, Albert; Sentís, Melcior; Martí, Joan; Oliver, Arnau

    2017-08-01

    The aim of this paper is to evaluate the spatial glandular volumetric tissue distribution as well as the density measures provided by Volpara™ using a dataset composed of repeated pairs of mammograms, where each pair was acquired in a short time frame and in a slightly changed position of the breast. We conducted a retrospective analysis of 99 pairs of repeatedly acquired full-field digital mammograms from 99 different patients. The commercial software Volpara™ Density Maps (Volpara Solutions, Wellington, New Zealand) is used to estimate both the global and the local glandular tissue distribution in each image. The global measures provided by Volpara™, such as breast volume, volume of glandular tissue, and volumetric breast density, are compared between the two acquisitions. The evaluation of the local glandular information is performed using histogram similarity metrics, such as intersection and correlation, and local measures, such as statistics from the difference image and local gradient correlation measures. Global measures showed a high correlation (breast volume R=0.99, volume of glandular tissue R=0.94, and volumetric breast density R=0.96) regardless of the anode/filter material. Similarly, histogram intersection and correlation metrics showed that, for each pair, the images share a high degree of information. Regarding the local distribution of glandular tissue, small changes in the angle of view do not yield significant differences in the glandular pattern, whilst changes in the breast thickness between both acquisitions affect the spatial parenchymal distribution. This study indicates that Volpara™ Density Maps is reliable in estimating the local glandular tissue distribution and can be used for its assessment and follow-up. Volpara™ Density Maps is robust to small variations of the acquisition angle and to the beam energy, although divergences arise due to different breast compression conditions.

  10. Traffic evacuation time under nonhomogeneous conditions.

    PubMed

    Fazio, Joseph; Shetkar, Rohan; Mathew, Tom V

    2017-06-01

    During man-made and natural crises, such as terrorist threats, floods, and hazardous chemical and gas leaks, emergency personnel need to estimate the time in which people can evacuate the affected urban area. Knowing an estimated evacuation time for a given crisis, emergency personnel can plan and prepare accordingly, with the understanding that the actual evacuation will take longer. Given the urban area to be evacuated, the street widths exiting the area's perimeter, the area's population density, average vehicle occupancy, transport mode share, and crawl speed, an estimate of the traffic evacuation time can be derived. Peak-hour traffic data collected at three midblock Mumbai sites of varying geometric features and traffic composition were used to calibrate a model that estimates peak-hour traffic flow rates. Model validation revealed a correlation coefficient of +0.98 between observed and predicted peak-hour flow rates. A methodology is developed that estimates traffic evacuation time using the model.
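    The inputs listed above combine naturally in a capacity calculation: vehicles to evacuate divided by the total outflow capacity of the perimeter streets. The following is only a back-of-envelope sketch with hypothetical parameter names, not the calibrated Mumbai model from the paper:

```python
def evacuation_time_hours(area_km2, pop_density_per_km2, avg_vehicle_occupancy,
                          car_mode_share, flow_rate_veh_per_hr_per_m,
                          total_exit_width_m):
    """Back-of-envelope traffic evacuation time in hours.

    Illustrative only: vehicles needing to leave, divided by the total
    exit capacity of the area's perimeter streets. The published model
    is calibrated from field data and is more elaborate than this.
    """
    population = area_km2 * pop_density_per_km2
    vehicles = population * car_mode_share / avg_vehicle_occupancy
    exit_capacity = flow_rate_veh_per_hr_per_m * total_exit_width_m  # veh/hr
    return vehicles / exit_capacity
```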

  11. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    DTIC Science & Technology

    2014-09-30

    McDonald, MA, Hildebrand, JA, and Mesnick, S (2009). Worldwide decline in tonal frequencies of blue whale songs. Endangered Species Research 9. ... DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Large Scale Density Estimation of Blue and Fin Whales: ... estimating blue and fin whale density that is effective over large spatial scales and is designed to cope with spatial variation in animal density utilizing ...

  12. Density estimation of Yangtze finless porpoises using passive acoustic sensors and automated click train detection.

    PubMed

    Kimura, Satoko; Akamatsu, Tomonari; Li, Songhai; Dong, Shouyue; Dong, Lijun; Wang, Kexiong; Wang, Ding; Arai, Nobuaki

    2010-09-01

    A method is presented to estimate the density of finless porpoises using stationary passive acoustic monitoring. The number of click trains detected by stereo acoustic data loggers (A-tags) was converted to an estimate of the density of porpoises. First, an automated off-line filter was developed to detect click trains among noise, and the detection and false-alarm rates were calculated. Second, a density estimation model was proposed. The cue-production rate was measured in biologging experiments. The probability of detecting a cue and the size of the detection area were calculated from the source level, beam patterns, and a sound-propagation model. The effect of group size on the cue-detection rate was examined. Third, the proposed model was applied to estimate the density of finless porpoises at four locations from the Yangtze River to the inside of Poyang Lake. The estimated mean daily density of porpoises decreased from the main stream to the lake. Long-term monitoring over 466 days from June 2007 to May 2009 showed variation in the density from 0 to 4.79 porpoises/km²; however, the density was less than 1 porpoise/km² during 94% of the period. These results suggest a potential population gap and seasonal migration at the bottleneck of Poyang Lake.
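    The conversion in the second step follows the general logic of cue-counting density estimation: detected cues, corrected for false alarms, divided by the expected cues per animal and the effective detection area. A hedged sketch; parameter names and structure are illustrative, not the paper's exact model:

```python
def porpoise_density(n_click_trains, false_alarm_rate, hours_monitored,
                     cue_rate_per_hr, detection_area_km2, detection_prob):
    """Convert a click-train count into animals per km^2.

    Generic cue-counting estimator in the spirit of the abstract.
    All parameter names here are assumptions for illustration.
    """
    # Discount detections attributable to false alarms.
    true_cues = n_click_trains * (1.0 - false_alarm_rate)
    # Expected number of cues one animal produces over the monitoring period.
    cues_per_animal = cue_rate_per_hr * hours_monitored
    # Animals per unit area, accounting for imperfect detection.
    return true_cues / (cues_per_animal * detection_area_km2 * detection_prob)
```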

  13. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    PubMed

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimation of these parameters is essential for corn yield forecasting, health monitoring, and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height, and biomass (R² = 0.80, 0.874, and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data can accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size, and height threshold on the estimation accuracy of LAI, height, and biomass. Our findings indicate that LiDAR point density has an important effect on the estimation accuracy of vegetation biophysical parameters. However, high point density did not always produce highly accurate estimates, and reduced point density could still deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold are additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also imply that a higher LiDAR point density, a larger sampling size, and a higher height threshold are required to obtain accurate corn LAI estimates compared with height and biomass estimates. In general, our results provide valuable guidance for LiDAR data acquisition and for the estimation of vegetation biophysical parameters using LiDAR data.
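    Point-density experiments like the one described are typically run by randomly thinning the original point cloud to a series of target densities and re-fitting the estimation models at each level. A minimal sketch of global random thinning; real workflows often thin per unit area rather than globally:

```python
import numpy as np

def thin_point_cloud(points, keep_fraction, seed=0):
    """Randomly thin a LiDAR point cloud to a fraction of its points.

    `points` is an (N, k) array (e.g. x, y, z per row). A reproducible
    random subset is returned in original order. This is an illustrative
    sketch, not the study's thinning procedure.
    """
    rng = np.random.default_rng(seed)
    n_keep = int(round(len(points) * keep_fraction))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[np.sort(idx)]
```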

  14. Inferring Lower Boundary Driving Conditions Using Vector Magnetic Field Observations

    NASA Technical Reports Server (NTRS)

    Schuck, Peter W.; Linton, Mark; Leake, James; MacNeice, Peter; Allred, Joel

    2012-01-01

    Low-beta coronal MHD simulations of realistic CME events require the detailed specification of the magnetic fields, velocities, densities, temperatures, etc., in the low corona. Presently, the most accurate estimates of solar vector magnetic fields are made in the high-beta photosphere. Several techniques have been developed that provide accurate estimates of the associated photospheric plasma velocities such as the Differential Affine Velocity Estimator for Vector Magnetograms and the Poloidal/Toroidal Decomposition. Nominally, these velocities are consistent with the evolution of the radial magnetic field. To evolve the tangential magnetic field radial gradients must be specified. In addition to estimating the photospheric vector magnetic and velocity fields, a further challenge involves incorporating these fields into an MHD simulation. The simulation boundary must be driven, consistent with the numerical boundary equations, with the goal of accurately reproducing the observed magnetic fields and estimated velocities at some height within the simulation. Even if this goal is achieved, many unanswered questions remain. How can the photospheric magnetic fields and velocities be propagated to the low corona through the transition region? At what cadence must we observe the photosphere to realistically simulate the corona? How do we model the magnetic fields and plasma velocities in the quiet Sun? How sensitive are the solutions to other unknowns that must be specified, such as the global solar magnetic field, and the photospheric temperature and density?

  15. Cosmological perturbation theory and the spherical collapse model - II. Non-Gaussian initial conditions

    NASA Astrophysics Data System (ADS)

    Gaztanaga, Enrique; Fosalba, Pablo

    1998-12-01

    In Paper I of this series, we introduced the spherical collapse (SC) approximation in Lagrangian space as a way of estimating the cumulants xi_J of density fluctuations in cosmological perturbation theory (PT). Within this approximation, the dynamics is decoupled from the statistics of the initial conditions, so we are able to present here the cumulants for generic non-Gaussian initial conditions, which can be estimated to arbitrary order including the smoothing effects. The SC model turns out to recover the exact leading-order non-linear contributions up to terms involving non-local integrals of the J-point functions. We argue that for the hierarchical ratios S_J, these non-local terms are subdominant and tend to compensate each other. The resulting predictions show a non-trivial time evolution that can be used to discriminate between models of structure formation. We compare these analytic results with non-Gaussian N-body simulations, which turn out to be in very good agreement up to scales where sigma<~1.
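    For reference, the hierarchical ratios S_J discussed above are conventionally defined from the smoothed (volume-averaged) J-point cumulants as

```latex
S_J \equiv \frac{\bar{\xi}_J}{\bar{\xi}_2^{\,J-1}}, \qquad J \geq 3,
```

    so that, e.g., S_3 is the normalized skewness of the density field. This standard definition is supplied here for context and is not quoted from the abstract.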

  16. Large-Scale Hybrid Density Functional Theory Calculations in the Condensed-Phase: Ab Initio Molecular Dynamics in the Isobaric-Isothermal Ensemble

    NASA Astrophysics Data System (ADS)

    Ko, Hsin-Yu; Santra, Biswajit; Distasio, Robert A., Jr.; Wu, Xifan; Car, Roberto

    Hybrid functionals are known to alleviate the self-interaction error in density functional theory (DFT) and to provide a more accurate description of the electronic structure of molecules and materials. However, hybrid DFT in the condensed phase has a prohibitively high computational cost, which limits its applicability to large systems of interest. In this work, we present a general-purpose order-N implementation of hybrid DFT in the condensed phase using maximally localized Wannier functions; this implementation is optimized for massively parallel computing architectures. This algorithm is used to perform large-scale ab initio molecular dynamics simulations of liquid water, ice, and aqueous ionic solutions. We have performed simulations in the isothermal-isobaric ensemble to quantify the effects of exact exchange on the equilibrium density properties of water at different thermodynamic conditions. We find that the anomalous density difference between ice Ih and liquid water at ambient conditions, as well as the enthalpy differences between the ice Ih, II, and III phases at the experimental triple point (238 K and 20 kbar), are significantly improved using hybrid DFT over previous estimates from the lower rungs of DFT. This work has been supported by the Department of Energy under Grants No. DE-FG02-05ER46201 and DE-SC0008626.

  17. Urban climate modifies tree growth in Berlin

    NASA Astrophysics Data System (ADS)

    Dahlhausen, Jens; Rötzer, Thomas; Biber, Peter; Uhl, Enno; Pretzsch, Hans

    2017-12-01

    Climate, e.g., air temperature and precipitation, differs strongly between urban and peripheral areas, which creates diverse living conditions for trees. In order to compare tree growth, we sampled a total of 252 small-leaved lime trees (Tilia cordata Mill.) in the city of Berlin along a gradient from the city center to the surroundings. By means of increment cores, we were able to trace back their growth over the last 50 to 100 years. A general growth trend can be shown by comparing recent basal area growth with estimates from extrapolating a growth function fitted to growth data from earlier years. Estimating a linear model, we show that air temperature and precipitation significantly influenced tree growth within the last 20 years. Taking housing density into consideration, the results reveal that higher air temperature and less precipitation led to higher growth rates in high-density areas, but not in low-density areas. In addition, our data reveal a significantly higher variance of the ring-width index in areas with medium housing density compared to low housing density, but no temporal trend. Transferring the results to forest stands, climate change is expected to lead to higher tree growth rates.

  18. Urban climate modifies tree growth in Berlin

    NASA Astrophysics Data System (ADS)

    Dahlhausen, Jens; Rötzer, Thomas; Biber, Peter; Uhl, Enno; Pretzsch, Hans

    2018-05-01

    Climate, e.g., air temperature and precipitation, differs strongly between urban and peripheral areas, which creates diverse living conditions for trees. In order to compare tree growth, we sampled a total of 252 small-leaved lime trees (Tilia cordata Mill.) in the city of Berlin along a gradient from the city center to the surroundings. By means of increment cores, we were able to trace back their growth over the last 50 to 100 years. A general growth trend can be shown by comparing recent basal area growth with estimates from extrapolating a growth function fitted to growth data from earlier years. Estimating a linear model, we show that air temperature and precipitation significantly influenced tree growth within the last 20 years. Taking housing density into consideration, the results reveal that higher air temperature and less precipitation led to higher growth rates in high-density areas, but not in low-density areas. In addition, our data reveal a significantly higher variance of the ring-width index in areas with medium housing density compared to low housing density, but no temporal trend. Transferring the results to forest stands, climate change is expected to lead to higher tree growth rates.

  19. Urban climate modifies tree growth in Berlin.

    PubMed

    Dahlhausen, Jens; Rötzer, Thomas; Biber, Peter; Uhl, Enno; Pretzsch, Hans

    2018-05-01

    Climate, e.g., air temperature and precipitation, differs strongly between urban and peripheral areas, which creates diverse living conditions for trees. In order to compare tree growth, we sampled a total of 252 small-leaved lime trees (Tilia cordata Mill.) in the city of Berlin along a gradient from the city center to the surroundings. By means of increment cores, we were able to trace back their growth over the last 50 to 100 years. A general growth trend can be shown by comparing recent basal area growth with estimates from extrapolating a growth function fitted to growth data from earlier years. Estimating a linear model, we show that air temperature and precipitation significantly influenced tree growth within the last 20 years. Taking housing density into consideration, the results reveal that higher air temperature and less precipitation led to higher growth rates in high-density areas, but not in low-density areas. In addition, our data reveal a significantly higher variance of the ring-width index in areas with medium housing density compared to low housing density, but no temporal trend. Transferring the results to forest stands, climate change is expected to lead to higher tree growth rates.

  20. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose: To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods: With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with the Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results: Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion: Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article.
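    The two agreement statistics named in the Methods are standard and easy to state concretely. Below is a generic sketch of a quadratically weighted Cohen kappa (one common reading of "quartile-weighted" when the BI-RADS quartile categories are scored 0-3) and of Bland-Altman 95% limits of agreement; this is not the study's code:

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_categories):
    """Cohen kappa with quadratic disagreement weights for ordinal ratings."""
    a, b = np.asarray(a), np.asarray(b)
    observed = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        observed[i, j] += 1
    expected = np.outer(observed.sum(1), observed.sum(0)) / observed.sum()
    idx = np.arange(n_categories)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_categories - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

def bland_altman_limits(x, y):
    """Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    mean_diff, sd = diff.mean(), diff.std(ddof=1)
    return mean_diff - 1.96 * sd, mean_diff + 1.96 * sd
```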

  1. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006

    PubMed Central

    Chen, Lin; Ray, Shonket; Keller, Brad M.; Pertuz, Said; McDonald, Elizabeth S.; Conant, Emily F.

    2016-01-01

    Purpose: To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods: With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with the Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results: Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88–0.95; weighted κ = 0.83–0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76–0.92, P < .05). Conclusion: Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article. PMID:27002418

  2. Ring profiler: a new method for estimating tree-ring density for improved estimates of carbon storage

    Treesearch

    David W. Vahey; C. Tim Scott; J.Y. Zhu; Kenneth E. Skog

    2012-01-01

    Methods for estimating present and future carbon storage in trees and forests rely on measurements or estimates of tree volume or volume growth multiplied by specific gravity. Wood density can vary by tree ring and height in a tree. If data on density by tree ring could be obtained and linked to tree size and stand characteristics, it would be possible to more...

  3. Bayesian nonparametric regression with varying residual density

    PubMed Central

    Pati, Debdeep; Dunson, David B.

    2013-01-01

    We consider the problem of robust Bayesian inference on the mean regression function allowing the residual density to change flexibly with predictors. The proposed class of models is based on a Gaussian process prior for the mean regression function and mixtures of Gaussians for the collection of residual densities indexed by predictors. Initially considering the homoscedastic case, we propose priors for the residual density based on probit stick-breaking (PSB) scale mixtures and symmetrized PSB (sPSB) location-scale mixtures. Both priors restrict the residual density to be symmetric about zero, with the sPSB prior more flexible in allowing multimodal densities. We provide sufficient conditions to ensure strong posterior consistency in estimating the regression function under the sPSB prior, generalizing existing theory focused on parametric residual distributions. The PSB and sPSB priors are generalized to allow residual densities to change nonparametrically with predictors through incorporating Gaussian processes in the stick-breaking components. This leads to a robust Bayesian regression procedure that automatically down-weights outliers and influential observations in a locally-adaptive manner. Posterior computation relies on an efficient data augmentation exact block Gibbs sampler. The methods are illustrated using simulated and real data applications. PMID:24465053
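    The probit stick-breaking (PSB) construction mentioned above builds mixture weights by passing Gaussian variables through the standard normal CDF and breaking a unit stick. A bare-bones sketch; in the paper the probit arguments depend on predictors through Gaussian processes, which is omitted here:

```python
import numpy as np
from math import erf, sqrt

def probit(z):
    """Standard normal CDF Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def psb_weights(alphas):
    """Probit stick-breaking weights: w_h = Phi(a_h) * prod_{l<h} (1 - Phi(a_l)).

    `alphas` are the (here fixed) Gaussian stick-breaking variables; in
    the paper they vary with predictors. Illustrative sketch only.
    """
    v = np.array([probit(a) for a in alphas])       # stick-break fractions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))  # stick left
    return v * remaining
```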

  4. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Treesearch

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  5. Trunk density profile estimates from dual X-ray absorptiometry.

    PubMed

    Wicke, Jason; Dumas, Geneviève A; Costigan, Patrick A

    2008-01-01

    Accurate body segment parameters are necessary to estimate joint loads when using biomechanical models. Geometric methods can provide individualized data for these models, but their accuracy depends on accurate segment density estimates. The trunk, which is important in many biomechanical models, has the largest variability in density along its length. Therefore, the objectives of this study were to: (1) develop a new method for modeling trunk density profiles based on dual X-ray absorptiometry (DXA) and (2) develop a trunk density function for college-aged females and males that can be used in geometric methods. To this end, the density profiles of 25 females and 24 males were determined by combining the measurements from a photogrammetric method with DXA readings. A discrete Fourier transformation was then used to develop the density functions for each sex. The individual density and average density profiles compare well with the literature. There were distinct differences between the profiles of two of the participants (one female and one male) and the averages for their sex. It is believed that the variations in these two participants' density profiles were a result of the amount and distribution of fat they possessed. Further studies are needed to confirm this possibility. The new density functions eliminate the uniform density assumption associated with some geometric models, thus providing more accurate trunk segment parameter estimates. In turn, more accurate moments and forces can be estimated for the kinetic analyses of certain human movements.
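    A truncated Fourier series, as produced by the discrete Fourier transformation step, represents the density profile ρ(z) along the normalized trunk length as a constant term plus a few sine/cosine harmonics. A sketch of evaluating such a profile; the actual fitted coefficients from the study are not reproduced here:

```python
import numpy as np

def fourier_profile(coeffs_a, coeffs_b, a0, z):
    """Evaluate a truncated Fourier-series density profile at z in [0, 1].

    rho(z) = a0 + sum_k [ a_k cos(2 pi k z) + b_k sin(2 pi k z) ].
    Coefficient values would come from fitting DXA-derived profiles;
    everything here is an illustrative assumption.
    """
    z = np.asarray(z, float)
    rho = np.full_like(z, a0)
    for k, (a, b) in enumerate(zip(coeffs_a, coeffs_b), start=1):
        rho += a * np.cos(2 * np.pi * k * z) + b * np.sin(2 * np.pi * k * z)
    return rho
```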

  6. Effects of LiDAR point density and landscape context on the retrieval of urban forest biomass

    NASA Astrophysics Data System (ADS)

    Singh, K. K.; Chen, G.; McCarter, J. B.; Meentemeyer, R. K.

    2014-12-01

    Light Detection and Ranging (LiDAR), as an alternative to conventional optical remote sensing, is being increasingly used to accurately estimate aboveground forest biomass from the individual tree to the stand level. Recent advancements in LiDAR technology have resulted in higher point densities and better data accuracies, which, however, pose challenges for the procurement and processing of LiDAR data for large-area assessments. Reducing point density cuts data acquisition costs and overcomes computational challenges for broad-scale forest management. However, how does that impact the accuracy of biomass estimation in an urban environment with a high level of anthropogenic disturbance? The main goal of this study is to evaluate the effects of LiDAR point density on the biomass estimation of remnant forests in the rapidly urbanizing regions of Charlotte, North Carolina, USA. We used multiple linear regression to establish the statistical relationship between field-measured biomass and predictor variables (PVs) derived from LiDAR point clouds with varying densities. We compared the estimation accuracies between general Urban Forest models (no discrimination of forest type) and Forest Type models (evergreen, deciduous, and mixed), and then quantified the degree to which landscape context influenced biomass estimation. The explained biomass variance of the Urban Forest models (adjusted R²) was fairly consistent across the reduced point densities, with the largest difference of 11.5% between the 100% and 1% point densities. The combined estimates of the Forest Type biomass models outperformed the Urban Forest models at two representative point densities (100% and 40%). The Urban Forest biomass model with a development density of 125 m radius produced the highest adjusted R² (0.83 and 0.82 at 100% and 40% LiDAR point densities, respectively) and the lowest RMSE values, signifying the distance impact of development on biomass estimation.
Our evaluation suggests that reducing LiDAR point density is a viable solution to regional-scale forest biomass assessment without compromising the accuracy of estimation, which may further be improved using development density.
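    The adjusted R² reported above penalizes the ordinary R² for the number of predictor variables, which matters when comparing models fitted with different PV sets. The standard formula, shown only to make the comparison concrete:

```python
def adjusted_r2(r2, n_samples, n_predictors):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1).

    Standard statistic; shown here for context, not taken from the study.
    """
    return 1.0 - (1.0 - r2) * (n_samples - 1) / (n_samples - n_predictors - 1)
```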

  7. Computerized image analysis: estimation of breast density on mammograms

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    2000-06-01

    An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve the accuracy and reproducibility in comparison with the subjective visual assessment by radiologists.
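    The final stage described above reduces to a masked threshold operation: dense-tissue pixels within the segmented breast region, expressed as a percentage of the breast area. A minimal sketch, with the caller supplying the threshold that the paper derives automatically from the gray-level histogram:

```python
import numpy as np

def percent_density(image, breast_mask, threshold):
    """Percent mammographic density: dense-pixel area over breast area.

    Mirrors the last step of the pipeline described above; the gray-level
    threshold is an input here, whereas the paper's rule-based classifier
    determines it automatically per image.
    """
    breast = breast_mask.astype(bool)
    dense = breast & (image >= threshold)   # dense tissue inside the breast
    return 100.0 * dense.sum() / breast.sum()
```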

  8. The stochastic spectator

    NASA Astrophysics Data System (ADS)

    Hardwick, Robert J.; Vennin, Vincent; Byrnes, Christian T.; Torrado, Jesús; Wands, David

    2017-10-01

    We study the stochastic distribution of spectator fields predicted in different slow-roll inflation backgrounds. Spectator fields have a negligible energy density during inflation but may play an important dynamical role later, even giving rise to primordial density perturbations within our observational horizon today. During de Sitter expansion there is an equilibrium solution for the spectator field which is often used to estimate the stochastic distribution during slow-roll inflation. However, slow roll only requires that the Hubble rate varies slowly compared to the Hubble time, while the time taken for the stochastic distribution to evolve to the de Sitter equilibrium solution can be much longer than a Hubble time. We study both chaotic (monomial) and plateau inflaton potentials, with quadratic, quartic, and axionic spectator fields. We give an adiabaticity condition for the spectator field distribution to relax to the de Sitter equilibrium, and find that the de Sitter approximation is never a reliable estimate for the typical distribution at the end of inflation for a quadratic spectator during monomial inflation. The existence of an adiabatic regime at early times can erase the dependence on initial conditions of the final distribution of field values. In these cases, spectator fields acquire sub-Planckian expectation values. Otherwise, spectator fields may acquire much larger field displacements than suggested by the de Sitter equilibrium solution. We quantify the information about initial conditions that can be obtained from the final field distribution. Our results may have important consequences for the viability of spectator models for the origin of structure, such as the simplest curvaton models.
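    The de Sitter equilibrium solution referred to above is the standard Starobinsky-Yokoyama stationary distribution for a light spectator field φ with potential V(φ) in a de Sitter background with Hubble rate H:

```latex
P_{\mathrm{eq}}(\phi)\;\propto\;\exp\!\left(-\frac{8\pi^{2}\,V(\phi)}{3H^{4}}\right).
```

    This textbook result is supplied for context; the abstract's point is precisely that the relaxation time to this distribution can greatly exceed a Hubble time.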

  9. Topics in global convergence of density estimates

    NASA Technical Reports Server (NTRS)

    Devroye, L.

    1982-01-01

    The problem of estimating a density f on R^d from a sample X(1),...,X(n) of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) For any sequence of density estimates f_n, an arbitrarily slow rate of convergence to 0 is possible for E(∫|f_n − f|); (2) In theoretical comparisons of density estimates, ∫|f_n − f| should be used and not ∫|f_n − f|^p, p > 1; and (3) For most reasonable nonparametric density estimates, either ∫|f_n − f| converges (and then the convergence is in the strongest possible sense for all f), or there is no convergence (even in the weakest possible sense for a single f). There is no intermediate situation.
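    Statement (2) advocates the L1 criterion ∫|f_n − f|, which is straightforward to approximate once an estimate and the true density are tabulated on a grid. A sketch using trapezoidal quadrature; the grid-based approximation is an assumption of this example, not part of the reviewed theory:

```python
import numpy as np

def l1_error(f_est, f_true, grid):
    """Approximate the L1 distance between two densities on a 1-D grid.

    Trapezoidal quadrature of |f_est - f_true| over `grid`; a numerical
    stand-in for the integral criterion discussed above.
    """
    diff = np.abs(np.asarray(f_est, float) - np.asarray(f_true, float))
    dx = np.diff(np.asarray(grid, float))
    return float(np.sum(0.5 * (diff[:-1] + diff[1:]) * dx))
```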

  10. Relationships between watershed emergy flow and coastal New England salt marsh structure, function, and condition.

    PubMed

    Brandt-Williams, Sherry; Wigand, Cathleen; Campbell, Daniel E

    2013-02-01

    This study evaluated the link between watershed activities and salt marsh structure, function, and condition using spatial emergy flow density (areal empower density) in the watershed and field data from 10 tidal salt marshes in Narragansett Bay, RI, USA. The field-collected data were obtained during several years of vegetation, invertebrate, soil, and water quality sampling. The use of emergy as an accounting mechanism allowed disparate factors (e.g., the amount of building construction and the consumption of electricity) to be combined into a single landscape index while retaining a uniform quantitative definition of the intensity of landscape development. It expanded upon typical land use percentage studies by weighting each category for the intensity of development. At the RI salt marsh sites, an impact index (watershed emergy flow normalized for marsh area) showed significant correlations with mudflat infauna species richness, mussel density, plant species richness, the extent and density of dominant plant species, and denitrification potential within the high salt marsh. Over the 4-year period examined, a loading index (watershed emergy flow normalized for watershed area) showed significant correlations with nitrite and nitrate concentrations, as well as with the nitrogen to phosphorus ratios in stream discharge into the marshes. Both the emergy impact and loading indices were significantly correlated with a salt marsh condition index derived from intensive field-based assessments. Comparison of the emergy indices to calculated nitrogen loading estimates for each watershed also produced significant positive correlations. These results suggest that watershed emergy flow is a robust index of human disturbance and a potential tool for rapid assessment of coastal wetland condition.

  11. Comparison and continuous estimates of fecal coliform and Escherichia coli bacteria in selected Kansas streams, May 1999 through April 2002

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Ziegler, Andrew C.

    2003-01-01

    The sanitary quality of water and its use as a public-water supply and for recreational activities, such as swimming, wading, boating, and fishing, can be evaluated on the basis of fecal coliform and Escherichia coli (E. coli) bacteria densities. This report describes the overall sanitary quality of surface water in selected Kansas streams, the relation between fecal coliform and E. coli, the relation between turbidity and bacteria densities, and how continuous bacteria estimates can be used to evaluate the water-quality conditions in selected Kansas streams. Samples for fecal coliform and E. coli were collected at 28 surface-water sites in Kansas. Of the 318 samples collected, 18 percent exceeded the current Kansas Department of Health and Environment (KDHE) secondary contact recreational, single-sample criterion for fecal coliform (2,000 colonies per 100 milliliters of water). Of the 219 samples collected during the recreation months (April 1 through October 31), 21 percent exceeded the current (2003) KDHE single-sample fecal coliform criterion for secondary contact recreation (2,000 colonies per 100 milliliters of water) and 36 percent exceeded the U.S. Environmental Protection Agency (USEPA) recommended single-sample primary contact recreational criterion for E. coli (576 colonies per 100 milliliters of water). Comparisons of fecal coliform and E. coli criteria indicated that more than one-half of the streams sampled could exceed USEPA recommended E. coli criteria more frequently than the current KDHE fecal coliform criteria. In addition, the ratios of E. coli to fecal coliform (EC/FC) were smallest for sites with slightly saline water (specific conductance greater than 1,000 microsiemens per centimeter at 25 degrees Celsius), indicating that E. coli may not be a good indicator of sanitary quality for those streams. Enterococci bacteria may provide a more accurate assessment of the potential for swimming-related illnesses in these streams.
Ratios of EC/FC and linear regression models were developed for estimating E. coli densities on the basis of measured fecal coliform densities for six individual and six groups of surface-water sites. Regression models developed for the six individual surface-water sites and six groups of sites explain at least 89 percent of the variability in E. coli densities. The EC/FC ratios and regression models are site specific and make it possible to convert historic fecal coliform bacteria data to estimated E. coli densities for the selected sites. The EC/FC ratios can be used to estimate E. coli for any range of historical fecal coliform densities, and in some cases with less error than the regression models. The basin- and statewide regression models explained at least 93 percent of the variance and best represent the sites where a majority of the data used to develop the models were collected (Kansas and Little Arkansas Basins). Comparison of the current (2003) KDHE geometric-mean primary contact criterion for fecal coliform bacteria of 200 col/100 mL to the 2002 USEPA recommended geometric-mean criterion of 126 col/100 mL for E. coli results in an EC/FC ratio of 0.63. The geometric-mean EC/FC ratio for all sites except Rattlesnake Creek (site 21) is 0.77, indicating that considerably more than 63 percent of the fecal coliform is E. coli. This potentially could lead to more exceedances of the recommended E. coli criterion, where the water now meets the current (2003) 200-col/100 mL fecal coliform criterion. In this report, turbidity was found to be a reliable estimator of bacteria densities. Regression models are provided for estimating fecal coliform and E. coli bacteria densities using continuous turbidity measurements. Prediction intervals also are provided to show the uncertainty associated with using the regression models. Eighty percent of all measured sample densities and individual turbidity-based estimates from the regression models were in agreement as exceedi
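    The ratio-based conversion described above is simple enough to sketch. A minimal illustration, not the report's code: the function name is invented, and the 0.77 all-site geometric-mean EC/FC ratio quoted in the abstract is used as a default.

```python
def estimate_ecoli(fc_col_per_100ml, ec_fc_ratio=0.77):
    """Convert a historical fecal coliform density to an estimated E. coli density."""
    return fc_col_per_100ml * ec_fc_ratio

# A sample right at the 2,000 col/100 mL fecal coliform criterion:
print(round(estimate_ecoli(2000), 1))  # 1540.0 col/100 mL
```

    In practice the report develops site-specific ratios and regression models; a single statewide ratio like this is only a rough default.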

  12. Compaction of forest soil by logging machinery favours occurrence of prokaryotes.

    PubMed

    Schnurr-Pütz, Silvia; Bååth, Erland; Guggenberger, Georg; Drake, Harold L; Küsel, Kirsten

    2006-12-01

    Soil compaction caused by passage of logging machinery reduces the soil air capacity. Changed abiotic factors might induce a change in the soil microbial community and favour organisms capable of tolerating anoxic conditions. The goals of this study were to resolve differences between soil microbial communities obtained from wheel-tracks (i.e. compacted) and their adjacent undisturbed sites, and to evaluate differences in potential anaerobic microbial activities of these contrasting soils. Soil samples obtained from compacted soil had a greater bulk density and a higher pH than uncompacted soil. Analyses of phospholipid fatty acids demonstrated that the eukaryotic/prokaryotic ratio in compacted soils was lower than that of uncompacted soils, suggesting that fungi were not favoured by the in situ conditions produced by compaction. Indeed, most-probable-number (MPN) estimates of nitrous oxide-producing denitrifiers, acetate- and lactate-utilizing iron and sulfate reducers, and methanogens were higher in compacted than in uncompacted soils obtained from one site that had large differences in bulk density. Compacted soils from this site yielded higher iron-reducing, sulfate-reducing and methanogenic potentials than did uncompacted soils. MPN estimates of H2-utilizing acetogens in compacted and uncompacted soils were similar. These results indicate that compaction of forest soil alters the structure and function of the soil microbial community and favours occurrence of prokaryotes.

  13. Novel Proximal Sensing for Monitoring Soil Organic C Stocks and Condition.

    PubMed

    Viscarra Rossel, Raphael A; Lobsey, Craig R; Sharman, Chris; Flick, Paul; McLachlan, Gordon

    2017-05-16

    Soil information is needed for environmental monitoring to address current concerns over food, water and energy securities, land degradation, and climate change. We developed the Soil Condition ANalysis System (SCANS) to help address these needs. It integrates an automated soil core sensing system (CSS) with statistical analytics and modeling to characterize soil at fine depth resolutions and across landscapes. The CSS's sensors include a γ-ray attenuation densitometer to measure bulk density, digital cameras to image the measured soil, and a visible-near-infrared (vis-NIR) spectrometer to measure iron oxides and clay mineralogy. The spectra are also modeled to estimate total soil organic carbon (C), particulate, humus, and resistant organic C (POC, HOC, and ROC, respectively), clay content, cation exchange capacity (CEC), pH, volumetric water content, available water capacity (AWC), and their uncertainties. Measurements of bulk density and organic C are combined to estimate C stocks. Kalman smoothing is used to derive complete soil property profiles with propagated uncertainties. The SCANS provides rapid, precise, quantitative, and spatially explicit information about the properties of soil profiles with a level of detail that is difficult to obtain with other approaches. The information gained effectively deepens our understanding of soil and calls attention to the central role soil plays in our environment.

  14. Compensating effect of sap velocity for stand density leads to uniform hillslope-scale forest transpiration across a steep valley cross-section

    NASA Astrophysics Data System (ADS)

    Renner, Maik; Hassler, Sibylle; Blume, Theresa; Weiler, Markus; Hildebrandt, Anke; Guderle, Marcus; Schymanski, Stan; Kleidon, Axel

    2016-04-01

    Roberts (1983) found that forest transpiration is relatively uniform across different climatic conditions and suggested that forest transpiration is a conservative process compensating for environmental heterogeneity. Here we test this hypothesis at a steep valley cross-section composed of European Beech in the Attert basin in Luxemburg. We use sapflow, soil moisture, biometric and meteorological data from 6 sites along a transect to estimate site scale transpiration rates. Despite opposing hillslope orientation, different slope angles and forest stand structures, we estimated relatively similar transpiration responses to atmospheric demand and seasonal transpiration totals. This similarity is related to a negative correlation between sap velocity and site-average sapwood area. At the south facing sites with an old, even-aged stand structure and closed canopy layer, we observe significantly lower sap velocities but similar stand-average transpiration rates compared to the north-facing sites with open canopy structure, tall dominant trees and dense understorey. This suggests that plant hydraulic co-ordination allows for flexible responses to environmental conditions leading to similar transpiration rates close to the water and energy limits despite the apparent heterogeneity in exposition, stand density and soil moisture. References Roberts, J. (1983). Forest transpiration: A conservative hydrological process? Journal of Hydrology 66, 133-141.

  15. Subglacial sedimentary basin characterization of Wilkes Land, East Antarctica via applied aerogeophysical inverse methods

    NASA Astrophysics Data System (ADS)

    Frederick, B. C.; Gooch, B. T.; Richter, T.; Young, D. A.; Blankenship, D. D.; Aitken, A.; Siegert, M. J.

    2013-12-01

    Topography, sediment distribution and heat flux are all key boundary conditions governing the stability of the East Antarctic ice sheet (EAIS). Recent scientific scrutiny has been focused on several large, deep, interior EAIS basins including the submarine basal topography characterizing the Aurora Subglacial Basin (ASB). Numerical ice sheet models require accurate deformable sediment distribution and lithologic character constraints to estimate overall flow velocities and potential instability. To date, such estimates across the ASB have been derived from low-resolution satellite data or historic aerogeophysical surveys conducted prior to the advent of GPS. These rough basal condition estimates have led to poorly-constrained ice sheet stability models for this remote 200,000 sq km expanse of the ASB. Here we present a significantly improved quantitative model characterizing the subglacial lithology and sediment in the ASB region. The product of comprehensive ICECAP (2008-2013) aerogeophysical data processing, this sedimentary basin model details the expanse and thickness of probable Wilkes Land subglacial sedimentary deposits and density contrast boundaries indicative of distinct subglacial lithologic units. As part of the process, BEDMAP2 subglacial topographic results were improved through the additional incorporation of ice-penetrating radar data collected during ICECAP field seasons 2010-2013. Detailed potential field data pre-processing was completed, as was a comprehensive evaluation of crustal density contrasts based on the gravity power spectrum; a subsequent high-pass filter was then applied to remove longer crustal wavelengths from the gravity dataset prior to inversion. Gridded BEDMAP2+ ice and bed radar surfaces were then utilized to establish bounding density models for the 3D gravity inversion process to yield probable sedimentary basin anomalies.
Gravity inversion results were iteratively evaluated against radar along-track RMS deviation and gravity and magnetic depth to basement results. This geophysical data processing methodology provides a substantial improvement over prior Wilkes Land sedimentary basin estimates yielding a higher resolution model based upon iteration of several aerogeophysical datasets concurrently. This more detailed subglacial sedimentary basin model for Wilkes Land, East Antarctica will not only contribute to vast improvements on EAIS ice sheet model constraints, but will also provide significant quantifiable controls for subglacial hydrologic and geothermal flux estimates that are also sizable contributors to the cold-based, deep interior basal ice dynamics dominant in the Wilkes Land region.

  16. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    PubMed

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
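    The simulation-based check of a stratified design can be sketched as follows. The strata sizes, the 25% prevalence of dense breasts, and the proportional allocation rule below are invented for illustration; they are not the NCSP figures.

```python
import random
import statistics

random.seed(42)

# Hypothetical three-stratum population (not the NCSP data)
strata = {"metropolitan": 700_000, "urban": 450_000, "rural": 190_000}
p_dense, total_n = 0.25, 4000
total_pop = sum(strata.values())

def one_survey():
    # Proportional allocation of the fixed total sample across strata,
    # then a population-weighted combination of the stratum proportions.
    est = 0.0
    for pop in strata.values():
        n = round(total_n * pop / total_pop)
        dense = sum(random.random() < p_dense for _ in range(n))
        est += pop * (dense / n)
    return est / total_pop

# Repeat the survey many times to see how tightly n = 4,000 pins down
# the population proportion (the paper ran 1,000 repetitions).
sims = [one_survey() for _ in range(200)]
print(round(statistics.mean(sims), 3))  # recovers p_dense = 0.25 closely
```

    The spread of `sims` across repetitions is what a tolerance statement like the paper's is based on.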

  17. Prediction of risk of fracture in the tibia due to altered bone mineral density distribution resulting from disuse: a finite element study.

    PubMed

    Gislason, Magnus K; Coupaud, Sylvie; Sasagawa, Keisuke; Tanabe, Yuji; Purcell, Mariel; Allan, David B; Tanner, K Elizabeth

    2014-02-01

    The disuse-related bone loss that results from immobilisation following injury shares characteristics with osteoporosis in post-menopausal women and the aged, with decreases in bone mineral density leading to weakening of the bone and increased risk of fracture. The aim of this study was to use the finite element method to: (i) calculate the mechanical response of the tibia under mechanical load and (ii) estimate the risk of fracture, comparing an able-bodied group with a group of spinal cord injury patients suffering from varying degrees of bone loss. The tibiae of eight male subjects with chronic spinal cord injury and those of four able-bodied age-matched controls were scanned using multi-slice peripheral quantitative computed tomography. Images were used to develop full three-dimensional models of the tibiae in Mimics (Materialise) and exported into Abaqus (Simulia) for calculation of stress distribution and fracture risk in response to specified loading conditions - compression, bending and torsion. The percentage of elements that exceeded a calculated value of the ultimate stress provided an estimate of the risk of fracture for each subject, which differed between spinal cord injury subjects and their controls. The differences in bone mineral density distribution along the tibia in different subjects resulted in different regions of the bone being at high risk of fracture under set loading conditions, illustrating the benefit of creating individual material distribution models. A predictive tool can be developed based on these models, to enable clinicians to estimate the amount of loading that can be safely allowed onto the skeletal frame of individual patients who suffer from extensive musculoskeletal degeneration (including spinal cord injury, multiple sclerosis and the ageing population). The ultimate aim is to reduce fracture occurrence in these vulnerable groups.
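    The paper's risk measure, the percentage of elements whose stress exceeds the calculated ultimate stress, reduces to a simple count. A toy sketch with made-up element stresses, not actual finite element output:

```python
def fracture_risk(element_stresses_mpa, ultimate_stress_mpa):
    """Fraction of finite elements whose stress exceeds the ultimate stress."""
    over = sum(s > ultimate_stress_mpa for s in element_stresses_mpa)
    return over / len(element_stresses_mpa)

# Invented element stresses (MPa) and an invented ultimate stress
stresses = [55.0, 80.0, 120.0, 95.0, 150.0, 40.0, 110.0, 130.0]
print(fracture_risk(stresses, ultimate_stress_mpa=100.0))  # 0.5
```

    A real model would take the per-element stresses from the Abaqus solution and an ultimate stress derived from the subject's bone mineral density.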

  18. Simple Form of MMSE Estimator for Super-Gaussian Prior Densities

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-04-01

    The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE) estimation. For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator with a Taylor series and show that it leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. Experimental results show that the proposed approximation to the original MMSE nonlinearity is reasonably good.
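    For context, the MMSE estimator under AWGN is the posterior mean E[x|y]. A generic numerical sketch for a Laplacian (super-Gaussian) prior, using brute-force quadrature rather than the paper's Taylor-series formula; all parameter values are assumptions:

```python
import math

def mmse_estimate(y, noise_sigma=1.0, prior_scale=1.0, grid=2001, span=20.0):
    """Posterior mean E[x|y] for a Laplacian prior and Gaussian noise,
    computed by quadrature on a symmetric grid."""
    num = den = 0.0
    for i in range(grid):
        x = -span + 2.0 * span * i / (grid - 1)
        # Unnormalized posterior: Gaussian likelihood times Laplacian prior
        post = math.exp(-((y - x) ** 2) / (2.0 * noise_sigma ** 2)
                        - abs(x) / prior_scale)
        num += x * post
        den += post
    return num / den

print(round(mmse_estimate(3.0), 2))  # shrunk toward zero relative to y = 3.0
```

    The complicated integral this evaluates numerically is exactly what a closed-form approximation, such as the paper's Taylor-series derivation, aims to avoid.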

  19. Simplified, rapid, and inexpensive estimation of water primary productivity based on chlorophyll fluorescence parameter Fo.

    PubMed

    Chen, Hui; Zhou, Wei; Chen, Weixian; Xie, Wei; Jiang, Liping; Liang, Qinlang; Huang, Mingjun; Wu, Zongwen; Wang, Qiang

    2017-04-01

    Primary productivity in water environments relies on the photosynthetic production of microalgae. Chlorophyll fluorescence is widely used to detect the growth status and photosynthetic efficiency of microalgae. In this study, a method was established to determine the Chl a content, cell density of microalgae, and water primary productivity by measuring chlorophyll fluorescence parameter Fo. A significant linear relationship between chlorophyll fluorescence parameter Fo and Chl a content of microalgae, as well as between Fo and cell density, was observed under pure-culture conditions. Furthermore, water samples collected from natural aquaculture ponds were used to validate the correlation between Fo and water primary productivity, which is closely related to Chl a content in water. Thus, for a given pure culture of microalgae or phytoplankton (mainly microalgae) in aquaculture ponds or other natural ponds for which the relationship between the Fo value and Chl a content or cell density could be established, Chl a content or cell density could be determined by measuring the Fo value, thereby making it possible to calculate the water primary productivity. It is believed that this method can provide a convenient and efficient way of estimating primary productivity in natural aquaculture ponds, with economic value in limnetic ecology assessment as well as in algal bloom monitoring. Copyright © 2017 Elsevier GmbH. All rights reserved.
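    The calibration-then-inversion workflow can be sketched in a few lines. The Fo and Chl a values below are synthetic, not the study's measurements:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

fo = [100, 200, 300, 400, 500]       # fluorescence parameter Fo (arbitrary units)
chla = [2.1, 4.0, 6.2, 7.9, 10.1]    # Chl a, ug/L (synthetic calibration data)

slope, intercept = fit_line(fo, chla)
chla_at_350 = slope * 350 + intercept  # invert the calibration for a field reading
print(round(slope, 4), round(chla_at_350, 2))
```

    Once Chl a is recovered this way, primary productivity follows from its established relation to Chl a content in the water.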

  20. A regularized auxiliary particle filtering approach for system state estimation and battery life prediction

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wang, Wilson; Ma, Fai

    2011-07-01

    System current state estimation (or condition monitoring) and future state prediction (or failure prognostics) constitute the core elements of condition-based maintenance programs. For complex systems whose internal state variables are either inaccessible to sensors or hard to measure under normal operational conditions, inference has to be made from indirect measurements using approaches such as Bayesian learning. In recent years, the auxiliary particle filter (APF) has gained popularity in Bayesian state estimation; the APF technique, however, has some potential limitations in real-world applications. For example, the diversity of the particles may deteriorate when the process noise is small, and the variance of the importance weights could become extremely large when the likelihood varies dramatically over the prior. To tackle these problems, a regularized auxiliary particle filter (RAPF) is developed in this paper for system state estimation and forecasting. This RAPF aims to improve the performance of the APF through two innovative steps: (1) regularize the approximating empirical density and redraw samples from a continuous distribution so as to diversify the particles; and (2) smooth out the rather diffused proposals by a rejection/resampling approach so as to improve the robustness of particle filtering. The effectiveness of the proposed RAPF technique is evaluated through simulations of a nonlinear/non-Gaussian benchmark model for state estimation. It is also implemented for a real application in the remaining useful life (RUL) prediction of lithium-ion batteries.
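    The regularization step, redrawing resampled particles from a continuous kernel so they stay diverse, can be illustrated with a minimal bootstrap filter on a toy constant-state model. This is a sketch of the idea only, not the authors' RAPF; the state value, noise levels, and kernel bandwidth are assumptions.

```python
import math
import random
import statistics

random.seed(0)

true_x, obs_noise = 5.0, 0.5                      # toy model: constant hidden state
particles = [random.uniform(0.0, 10.0) for _ in range(500)]

for _ in range(30):
    y = true_x + random.gauss(0.0, obs_noise)     # new noisy measurement
    # Gaussian likelihood weights for each particle
    weights = [math.exp(-((y - p) ** 2) / (2 * obs_noise ** 2)) for p in particles]
    total = sum(weights)
    weights = [wt / total for wt in weights]
    # Resample in proportion to the weights...
    particles = random.choices(particles, weights=weights, k=len(particles))
    # ...then regularize: jitter with a small Gaussian kernel so the
    # particle cloud stays diverse even with negligible process noise.
    particles = [p + random.gauss(0.0, 0.05) for p in particles]

print(round(statistics.mean(particles), 1))  # close to the true state 5.0
```

    Without the jitter line, repeated resampling of a static state collapses the cloud onto a few identical particles, the degeneracy the RAPF's kernel-regularization step is designed to prevent.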

  1. Estimate of radiation damage to low-level electronics of the RF system in the LHC cavities arising from beam gas collisions.

    PubMed

    Butterworth, A; Ferrari, A; Tsoulou, E; Vlachoudis, V; Wijnands, T

    2005-01-01

    Monte Carlo simulations have been performed to estimate the radiation damage induced by high-energy hadrons in the digital electronics of the RF low-level systems in the LHC cavities. High-energy hadrons are generated when the proton beams interact with the residual gas. The contributions from various elements (vacuum chambers, cryogenic cavities, wideband pickups and cryomodule beam tubes) have been considered individually, with each contribution depending on the gas composition and density. The probability of displacement damage and single event effects (mainly single event upsets) is derived for the LHC start-up conditions.

  2. Determination of variability in leaf biomass densities of conifers and mixed conifers under different environmental conditions in the San Joaquin Valley air basin. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temple, P.J.; Mutters, R.J.; Adams, C.

    1995-06-01

    Biomass sampling plots were established at 29 locations within the dominant vegetation zones of the study area. Estimates of foliar biomass were made for each plot by three independent methods: regression analysis on the basis of tree diameter, calculation of the amount of light intercepted by the leaf canopy, and extrapolation from branch leaf area. Multivariate regression analysis was used to relate these foliar biomass estimates for oak plots and conifer plots to several independent predictor variables, including elevation, slope, aspect, temperature, precipitation, and soil chemical characteristics.

  3. A spatially explicit capture-recapture estimator for single-catch traps.

    PubMed

    Distiller, Greg; Borchers, David L

    2015-11-01

    Single-catch traps are frequently used in live-trapping studies of small mammals. Thus far, a likelihood for single-catch traps has proven elusive and usually the likelihood for multicatch traps is used for spatially explicit capture-recapture (SECR) analyses of such data. Previous work found the multicatch likelihood to provide a robust estimator of average density. We build on a recently developed continuous-time model for SECR to derive a likelihood for single-catch traps. We use this to develop an estimator based on observed capture times and compare its performance by simulation to that of the multicatch estimator for various scenarios with nonconstant density surfaces. While the multicatch estimator is found to be a surprisingly robust estimator of average density, its performance deteriorates with high trap saturation and increasing density gradients. Moreover, it is found to be a poor estimator of the height of the detection function. By contrast, the single-catch estimators of density, distribution, and detection function parameters are found to be unbiased or nearly unbiased in all scenarios considered. This gain comes at the cost of higher variance. If there is no interest in interpreting the detection function parameters themselves, and if density is expected to be fairly constant over the survey region, then the multicatch estimator performs well with single-catch traps. However if accurate estimation of the detection function is of interest, or if density is expected to vary substantially in space, then there is merit in using the single-catch estimator when trap saturation is above about 60%. The estimator's performance is improved if care is taken to place traps so as to span the range of variables that affect animal distribution. As a single-catch likelihood with unknown capture times remains intractable for now, researchers using single-catch traps should aim to incorporate timing devices with their traps.

  4. Health conditions in rural areas with high livestock density: Analysis of seven consecutive years.

    PubMed

    van Dijk, Christel E; Zock, Jan-Paul; Baliatsas, Christos; Smit, Lidwien A M; Borlée, Floor; Spreeuwenberg, Peter; Heederik, Dick; Yzermans, C Joris

    2017-03-01

    Previous studies investigating health conditions of individuals living near livestock farms generally assessed short time windows. We aimed to take time-specific differences into account and to compare the prevalence of various health conditions over seven consecutive years. The sample consisted of 156,690 individuals registered in 33 general practices in a (rural) area with a high livestock density and 101,015 patients from 23 practices in other (control) areas in the Netherlands. Prevalence of health conditions was assessed using 2007-2013 electronic health record (EHR) data. Two methods were employed to assess exposure: 1) Comparisons between the study and control areas in relation to health problems, 2) Use of individual estimates of livestock exposure (in the study area) based on Geographic Information System (GIS) data. A higher prevalence of chronic bronchitis/bronchiectasis, lower respiratory tract infections and vertiginous syndrome and a lower prevalence of respiratory symptoms and emphysema/COPD was found in the study area compared with the control area. A shorter distance to the nearest farm was associated with a lower prevalence of upper respiratory tract infections, respiratory symptoms, asthma, COPD/emphysema, allergic rhinitis, depression, eczema, vertiginous syndrome, dizziness and gastrointestinal infections. Exposure to cattle, in particular, was associated with fewer health conditions. Living within 500m of mink farms was associated with increased chronic enteritis/ulcerative colitis. Livestock-related exposures did not seem to be an environmental risk factor for the occurrence of health conditions. Nevertheless, lower respiratory tract infections, chronic bronchitis and vertiginous syndrome were more common in the area with a high livestock density. The association between exposure to minks and chronic enteritis/ulcerative colitis remains to be elucidated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Density of jadeite melt under upper mantle conditions from in-situ X-ray micro-tomography measurements

    NASA Astrophysics Data System (ADS)

    Jing, Z.; Xu, M.; Jiang, P.; Yu, T.; Wang, Y.

    2017-12-01

    Knowledge of the density of silicate melts under high pressure conditions is important to our understanding of the stability and migration of melt layers in the Earth's deep mantle. A wide range of silicate melts have been studied at high pressures using the sink/float technique (e.g., Agee and Walker, 1988) and the X-ray absorption technique (e.g., Sakamaki et al, 2009). However, the effect of the Na2O component on high-pressure melt density has not been fully quantified, despite its likely presence in mantle melts. This is partly due to the experimental challenge that Na-bearing melts often have relatively low density but high viscosity, both of which make them difficult to study using the above-mentioned techniques. In this study, we have developed a new technique based on X-ray micro-tomography to determine the density of melts at high pressures. In this technique, the volume of a melt is directly measured from the reconstructed 3-D images of the sample using computed X-ray micro-tomography. If the mass of the sample is measured using a balance or estimated from a reference density, then the density of the melt at high pressures can be calculated. Using this technique, we determined the density of jadeite melt (NaAlSi2O6) at high pressures up to 4 GPa in a Paris-Edinburgh cell that can be rotated through 180 degrees under pressure. Results show that the Na2O component significantly decreases both the density and bulk modulus of silicate melts at high pressures. These data can be incorporated into a hard-sphere equation of state (Jing and Karato, 2011) to model the effect of the Na2O component on the potential density crossovers between melts produced in the mantle and the residual solid.
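    The core density calculation in the tomography technique is direct: volume from counting sample voxels in the reconstructed image, density from an independently measured mass. An illustrative computation with assumed voxel size, voxel count, and sample mass:

```python
# All three inputs are invented for illustration, not measured values.
voxel_edge_cm = 5e-4            # assumed 5-micrometre voxel edge
n_melt_voxels = 2_400_000_000   # voxels classified as melt in the 3-D image
mass_g = 0.93                   # sample mass, weighed beforehand

volume_cm3 = n_melt_voxels * voxel_edge_cm ** 3   # volume from the tomogram
density_g_cm3 = mass_g / volume_cm3
print(round(density_g_cm3, 2))  # 3.1 g/cm^3
```

    The experimental effort goes into segmenting the tomogram reliably, so that the voxel count, and hence the volume, is accurate at pressure.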

  6. Depth dependency of neutron density produced by cosmic rays in the lunar subsurface

    NASA Astrophysics Data System (ADS)

    Ota, S.; Sihver, L.; Kobayashi, S.; Hasebe, N.

    2014-11-01

    Depth dependency of neutrons produced by cosmic rays (CRs) in the lunar subsurface was estimated using the three-dimensional Monte Carlo particle and heavy ion transport simulation code, PHITS, incorporating the latest high energy nuclear data, JENDL/HE-2007. The PHITS simulations of equilibrium neutron density profiles in the lunar subsurface were compared with the measurement by the Apollo 17 Lunar Neutron Probe Experiment (LNPE). Our calculations, performed under improved conditions using a CR spectrum model based on the latest observations, well-tested nuclear interaction models with systematic cross-section data, and JENDL/HE-2007, reproduced the LNPE data except in the 350-400 mg/cm2 region.

  7. Can a Penning ionization discharge simulate the tokamak scrape-off plasma conditions?

    NASA Technical Reports Server (NTRS)

    Finkenthal, M.; Littman, A.; Stutman, D.; Kovnovich, S.; Mandelbaum, P.; Schwob, J. L.; Bhatia, A. K.

    1990-01-01

    The tokamak scrape-off (the region between the vacuum vessel wall and the magnetically confined fusion plasma edge) represents a source/sink for the hot fusion plasma. The electron densities and temperatures are in the ranges 10^11-10^13 per cubic centimeter and 1-40 eV, respectively (depending on the size, magnetic field intensity and configuration, plasma current, etc.). In the work reported, the electron temperature and density have been estimated in a Penning ionization discharge by comparing its spectroscopic emission in the VUV with that predicted by a collisional radiative model. An attempt to directly compare this emission with that of the tokamak edge is briefly described.

  8. Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.

    PubMed

    Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni

    2018-06-15

    Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
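    The chaining of one-week-ahead delta distributions into full-trajectory forecasts can be sketched with an empirical resampling stand-in for the nonparametric delta density. The historical deltas and starting level below are invented:

```python
import random
import statistics

random.seed(1)

# Invented week-to-week changes from past seasons; a real delta-density
# model would estimate this distribution nonparametrically and condition
# it on available data.
historical_deltas = [0.3, -0.1, 0.5, 0.2, -0.4, 0.1, 0.6, -0.2, 0.0, 0.4]

def simulate_trajectory(start, weeks):
    """Chain sampled one-week deltas into a whole-season trajectory."""
    level, path = start, [start]
    for _ in range(weeks):
        level += random.choice(historical_deltas)
        path.append(level)
    return path

# Many simulated trajectories give a distributional forecast of any target,
# e.g. the level 10 weeks out.
finals = [simulate_trajectory(2.0, 10)[-1] for _ in range(1000)]
print(round(statistics.mean(finals), 1))  # near 2.0 + 10 * mean weekly delta
```

    Quantiles of `finals` would play the role of the distributional forecasts the paper evaluates, and several such simulators could then be combined with adaptive ensemble weights.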

  9. Simulation study of geometric shape factor approach to estimating earth emitted flux densities from wide field-of-view radiation measurements

    NASA Technical Reports Server (NTRS)

    Weaver, W. L.; Green, R. N.

    1980-01-01

    A study was performed on the use of geometric shape factors to estimate earth-emitted flux densities from radiation measurements with wide field-of-view flat-plate radiometers on satellites. Sets of simulated irradiance measurements were computed for unrestricted and restricted field-of-view detectors. In these simulations, the earth radiation field was modeled using data from Nimbus 2 and 3. Geometric shape factors were derived and applied to these data to estimate flux densities on global and zonal scales. For measurements at a satellite altitude of 600 km, estimates of zonal flux density were in error by 1.0 to 1.2%, and global flux density errors were less than 0.2%. Estimates with unrestricted field-of-view detectors were about the same for Lambertian and non-Lambertian radiation models, but were affected by satellite altitude. The opposite was found for the restricted field-of-view detectors.
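For a nadir-viewing flat-plate detector above a uniformly emitting Lambertian sphere, the geometric shape factor reduces to (R/(R+h))², and the flux-density estimate is the measured irradiance divided by that factor. A minimal sketch under those idealized assumptions (numbers illustrative, not from the study):

```python
import math

R_EARTH_KM = 6371.0

def shape_factor_flat_plate(alt_km):
    """View factor from a nadir-pointing flat plate to a Lambertian sphere."""
    r = R_EARTH_KM / (R_EARTH_KM + alt_km)
    return r * r

def estimate_flux_density(irradiance_w_m2, alt_km):
    """Invert the measurement: emitted flux density = irradiance / shape factor."""
    return irradiance_w_m2 / shape_factor_flat_plate(alt_km)

# Forward-simulate a measurement at 600 km altitude, then recover the flux.
true_flux = 240.0                                   # W/m^2, illustrative
measured = true_flux * shape_factor_flat_plate(600.0)
print(round(estimate_flux_density(measured, 600.0), 6))  # 240.0
```

Errors such as the 1.0-1.2% zonal figures arise when the real radiation field departs from these Lambertian, uniform-sphere assumptions.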

  10. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability across many different types of data. Preliminary results are presented for a number of simple problems.
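For context, the standard EM algorithm that the paper modifies looks like this for a plain two-component 1-D Gaussian mixture; this is a generic textbook sketch, not the Mercer-kernel feature-space variant:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Plain EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: reweighted means, variances, and mixing weights
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
mu, var, pi = em_gmm_1d(x)
print(np.round(np.sort(mu), 1))  # means near [-3, 3]
```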

  11. Cetacean population density estimation from single fixed sensors using passive acoustics.

    PubMed

    Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica

    2011-06-01

    Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data. © 2011 Acoustical Society of America
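The Monte Carlo step can be sketched as follows; the spreading-loss model, source-level distribution, noise level, off-axis term, and logistic detector curve are all placeholder assumptions, not the distributions used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def transmission_loss(r_m):
    """Spherical spreading plus a nominal absorption term (placeholder model)."""
    return 20 * np.log10(r_m) + 0.03 * r_m / 1000.0

def p_detect_given_snr(snr_db):
    """Detector characterization: detection probability vs SNR (assumed logistic)."""
    return 1.0 / (1.0 + np.exp(-(snr_db - 10.0)))

n = 100_000
source_level = rng.normal(200.0, 5.0, n)   # dB, illustrative on-axis click level
off_axis = rng.uniform(0.0, 60.0, n)       # dB, crude stand-in for a beam pattern
ranges = rng.uniform(100.0, 5000.0, n)     # m, animal-to-hydrophone distances
noise_level = 70.0                         # dB, illustrative band noise

# Passive sonar equation: SNR = SL - beam-pattern loss - TL - NL
snr = source_level - off_axis - transmission_loss(ranges) - noise_level
p_hat = p_detect_given_snr(snr).mean()     # single-sensor detection probability
print(0.0 < p_hat < 1.0)  # True
```

The resulting average detection probability feeds a standard point-transect density estimator together with call rate and false-positive rate.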

  12. Task-oriented comparison of power spectral density estimation methods for quantifying acoustic attenuation in diagnostic ultrasound using a reference phantom method.

    PubMed

    Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A

    2013-07-01

    Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
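The variance-reduction trade-off among these PSD estimators can be illustrated with NumPy alone; `welch_psd` below is a minimal Hann-windowed, 50%-overlap sketch of Welch's method, not the study's processing chain:

```python
import numpy as np

def periodogram(x, fs):
    """Single-segment PSD estimate (unbiased for white noise, high variance)."""
    n = len(x)
    return np.abs(np.fft.rfft(x)) ** 2 / (fs * n)

def welch_psd(x, fs, nperseg=256):
    """Averaged, Hann-windowed, 50%-overlap segments (lower variance)."""
    w = np.hanning(nperseg)
    scale = fs * np.sum(w ** 2)
    step = nperseg // 2
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    return np.mean([np.abs(np.fft.rfft(s * w)) ** 2 / scale for s in segs], axis=0)

rng = np.random.default_rng(3)
x = rng.normal(size=8192)            # white noise: flat true PSD
fs = 1000.0
p_raw = periodogram(x, fs)
p_welch = welch_psd(x, fs)
# Averaging across segments shrinks the estimator's variance,
# at the cost of coarser frequency resolution.
print(p_welch.var() < p_raw.var())   # True
```

Multitaper estimation pushes the same bias-variance trade-off further by averaging over orthogonal tapers instead of overlapping segments.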

  13. Density dependence governs when population responses to multiple stressors are magnified or mitigated.

    PubMed

    Hodgson, Emma E; Essington, Timothy E; Halpern, Benjamin S

    2017-10-01

    Population endangerment typically arises from multiple, potentially interacting anthropogenic stressors. Extensive research has investigated the consequences of multiple stressors on organisms, frequently focusing on individual life stages. Less is known about population-level consequences of exposure to multiple stressors, especially when exposure varies through life. We provide the first theoretical basis for identifying species at risk of magnified effects from multiple stressors across life history. By applying a population modeling framework, we reveal conditions under which population responses from stressors applied to distinct life stages are either magnified (synergistic) or mitigated. We find that magnification or mitigation critically depends on the shape of density dependence, but not the life stage in which it occurs. Stressors are always magnified when density dependence is linear or concave, and magnified or mitigated when it is convex. Using Bayesian numerical methods, we estimated the shape of density dependence for eight species across diverse taxa, finding support for all three shapes. © 2017 by the Ecological Society of America.

  14. Roosevelt elk density in old-growth forests of Olympic National Park

    USGS Publications Warehouse

    Houston, D.B.; Moorhead, Bruce B.; Olson, R.W.

    1987-01-01

    We explored the feasibility of censusing Roosevelt elk from a helicopter in the dense old-growth forests of Olympic National Park, WA. Mean observed densities ranged from 8.0-11.6 elk/km2, with coefficients of variation averaging 19.9 percent. A provisional sightability factor of 74 percent suggested that actual mean densities ranged from 10.8-16.0 elk/km2. We conclude that estimates of elk density probably could be refined, but not without a cost and level of disturbance in the park that seem unwarranted at present. The effort required to conduct the 18 counts made during 1985-86 was substantial. For almost every successful count, an unsuccessful attempt was made. These included aborting five flights when counting conditions deteriorated. Actual counting time for the successful flights was 15.7 and 16.2 hours in 1985 and 1986, respectively. Additional flight time for traveling to and from the census zones, refueling, and aborted attempts added 12.2 and 10.8 hours for the respective years.

  15. Suitability of Coastal Marshes as Whooping Crane Foraging Habitat in Southwest Louisiana, USA

    USGS Publications Warehouse

    King, Sammy L.; Kang, Sung-Ryong

    2014-01-01

    Foraging habitat conditions (i.e., water depth, prey biomass, digestible energy density) can be a significant predictor of foraging habitat selection by wading birds. Potential foraging habitats of Whooping Cranes (Grus americana) using marshes include ponds and emergent marsh, but the potential prey and energy availability in these habitat types have rarely been studied. In this study, we estimated daily digestible energy density for Whooping Cranes in different marsh and microhabitat types (i.e., pond, flooded emergent marsh). Also, indicator metrics of foraging habitat suitability for Whooping Cranes were developed based on seasonal water depth, prey biomass, and digestible energy density. Seasonal water depth (cm), prey biomass (g wet weight m⁻²), and digestible energy density (kcal g⁻¹ m⁻²) ranged from 0.0 to 50.2 ± 2.8, 0.0 to 44.8 ± 22.3, and 0.0 to 31.0 ± 15.3, respectively. With the exception of freshwater emergent marsh in summer, all available habitats were capable of supporting one Whooping Crane per 0.1 ha per day. All habitat types in the marshes had relatively higher suitability in spring and summer than in fall and winter. Our study indicates that based on general energy availability, freshwater marshes in the region can support Whooping Cranes in a relatively small area, particularly in spring and summer. In actuality, the spatial density of ponds, the flood depth of the emergent marsh, and the habitat conditions (e.g., vegetation density) between adjacent suitable habitats will constrain suitable habitat and Whooping Crane numbers.

  16. Temporal variations of electron density and temperature in Kr/Ne/H2 photoionized plasma induced by nanosecond pulses from extreme ultraviolet source

    NASA Astrophysics Data System (ADS)

    Saber, I.; Bartnik, A.; Wachulak, P.; Skrzeczanowski, W.; Jarocki, R.; Fiedorowicz, H.

    2017-06-01

    Spectral investigations of low-temperature photoionized plasmas created in a Kr/Ne/H2 gas mixture were performed. The low-temperature plasmas were generated by gas mixture irradiation using extreme ultraviolet pulses from a laser-plasma source. Emission spectra in the ultraviolet/visible range from the photoionized plasmas contained lines that mainly corresponded to neutral atoms and singly charged ions. Temporal variations in the plasma electron temperature and electron density were studied using different characteristic emission lines at various delay times. Results, based on Kr II lines, showed that the electron temperature decreased from 1.7 to 0.9 eV. The electron densities were estimated using different spectral lines at each delay time. In general, except for the Hβ line, in which the electron density decreased from 3.78 × 10¹⁶ cm⁻³ at 200 ns to 5.77 × 10¹⁵ cm⁻³ at 2000 ns, most of the electron density values measured from the different lines were of the order of 10¹⁵ cm⁻³ and decreased slightly while maintaining the same order when the delay time increased. The time dependences of the measured and simulated intensities of a spectral line of interest were also investigated. The validity of the partial or full local thermodynamic equilibrium (LTE) conditions in plasma was explained based on time-resolved electron density measurements. The partial LTE condition was satisfied for delay times in the 200 ns to 1500 ns range. The results are summarized, and the dominant basic atomic processes in the gas mixture photoionized plasma are discussed.

  17. Estimation of the neural drive to the muscle from surface electromyograms

    NASA Astrophysics Data System (ADS)

    Hofmann, David

    Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for the non-stationary sEMG, and understanding its relation to neural drive and muscle force, is of paramount importance. The single constituents of the sEMG are called motor unit action potentials, whose biphasic waveforms can interfere (termed amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met, the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive, we conclude that complex methods based on high-density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.
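The Campbell-Hardy prediction, that for Poisson-timed pulses the variance of the superposition equals the rate times the integral of the squared pulse regardless of overlap, can be checked numerically; the pulse shape and firing rate below are illustrative assumptions, not a physiological MUAP model:

```python
import numpy as np

rng = np.random.default_rng(4)
dt = 1e-4                                   # s, sampling step
t = np.arange(-0.005, 0.005, dt)
pulse = -t * np.exp(-(t / 0.001) ** 2)      # zero-mean biphasic pulse (illustrative)
pulse /= np.abs(pulse).max()

rate = 2000.0                               # total event rate (pulses/s), assumed
T = 20.0                                    # s of simulated signal
n_events = rng.poisson(rate * T)
signal = np.zeros(int(T / dt))
idx = rng.integers(0, len(signal) - len(pulse), n_events)
for i in idx:                               # superpose pulses regardless of overlap
    signal[i:i + len(pulse)] += pulse

# Compare the empirical SD (trimming edges) with the Campbell-Hardy value
# sqrt(rate * integral of pulse^2): interference leaves it unchanged.
empirical_sd = signal[len(pulse):-len(pulse)].std()
campbell_sd = np.sqrt(rate * np.sum(pulse ** 2) * dt)
print(abs(empirical_sd - campbell_sd) / campbell_sd < 0.1)  # True
```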

  18. Temporal Variation of Wood Density and Carbon in Two Elevational Sites of Pinus cooperi in Relation to Climate Response in Northern Mexico

    PubMed Central

    Pompa-García, Marín; Venegas-González, Alejandro

    2016-01-01

    Forest ecosystems play an important role in the global carbon cycle. Therefore, understanding the dynamics of carbon uptake in forest ecosystems is much needed. Pinus cooperi is a widely distributed species in the Sierra Madre Occidental in northern Mexico and future climatic variations could impact these ecosystems. Here, we analyze the variations of trunk carbon in two populations of P. cooperi situated at different elevational gradients, combining dendrochronological techniques and allometry. Carbon sequestration (50% biomass) was estimated from a specific allometric equation for this species based on: (i) variation of intra-annual wood density and (ii) diameter reconstruction. The results show that the population at a higher elevation had greater wood density, basal area, and hence, carbon accumulation. This finding can be explained by an ecological response of trees to adverse weather conditions, which would cause a change in the cellular structure affecting the within-ring wood density profile. The influence of variations in climate on the maximum density of chronologies showed a positive correlation with precipitation and the Multivariate El Niño Southern Oscillation Index during the winter season, and a negative correlation with maximum temperature during the spring season. Monitoring previous conditions to growth is crucial due to the increased vulnerability to extreme climatic variations on higher elevational sites. We concluded that temporal variability of wood density contributes to a better understanding of environmental historical changes and forest carbon dynamics in Northern Mexico, representing a significant improvement over previous studies on carbon sequestration. Assuming a uniform density according to tree age is incorrect, so this method can be used for environmental mitigation strategies, such as for managing P. cooperi, a dominant species of great ecological amplitude and widely used in forest industries. PMID:27272519

  19. Temporal Variation of Wood Density and Carbon in Two Elevational Sites of Pinus cooperi in Relation to Climate Response in Northern Mexico.

    PubMed

    Pompa-García, Marín; Venegas-González, Alejandro

    2016-01-01

    Forest ecosystems play an important role in the global carbon cycle. Therefore, understanding the dynamics of carbon uptake in forest ecosystems is much needed. Pinus cooperi is a widely distributed species in the Sierra Madre Occidental in northern Mexico and future climatic variations could impact these ecosystems. Here, we analyze the variations of trunk carbon in two populations of P. cooperi situated at different elevational gradients, combining dendrochronological techniques and allometry. Carbon sequestration (50% biomass) was estimated from a specific allometric equation for this species based on: (i) variation of intra-annual wood density and (ii) diameter reconstruction. The results show that the population at a higher elevation had greater wood density, basal area, and hence, carbon accumulation. This finding can be explained by an ecological response of trees to adverse weather conditions, which would cause a change in the cellular structure affecting the within-ring wood density profile. The influence of variations in climate on the maximum density of chronologies showed a positive correlation with precipitation and the Multivariate El Niño Southern Oscillation Index during the winter season, and a negative correlation with maximum temperature during the spring season. Monitoring previous conditions to growth is crucial due to the increased vulnerability to extreme climatic variations on higher elevational sites. We concluded that temporal variability of wood density contributes to a better understanding of environmental historical changes and forest carbon dynamics in Northern Mexico, representing a significant improvement over previous studies on carbon sequestration. Assuming a uniform density according to tree age is incorrect, so this method can be used for environmental mitigation strategies, such as for managing P. cooperi, a dominant species of great ecological amplitude and widely used in forest industries.

  20. A data driven method for estimation of B(avail) and appK(D) using a single injection protocol with [¹¹C]raclopride in the mouse.

    PubMed

    Wimberley, Catriona J; Fischer, Kristina; Reilhac, Anthonin; Pichler, Bernd J; Gregoire, Marie Claude

    2014-10-01

    The partial saturation approach (PSA) is a simple, single-injection experimental protocol that estimates both B(avail) and appK(D) without the use of blood sampling. This makes it ideal for use in longitudinal studies of neurodegenerative diseases in the rodent. The aim of this study was to increase the range and applicability of the PSA by developing a data driven strategy for determining reliable regional estimates of receptor density (B(avail)) and in vivo affinity (1/appK(D)), and validate the strategy using a simulation model. The data driven method uses a time window guided by the dynamic equilibrium state of the system as opposed to using a static time window. To test the method, simulations of partial saturation experiments were generated and validated against experimental data. The experimental conditions simulated included a range of receptor occupancy levels and three different B(avail) and appK(D) values to mimic disease states. The effect of using a reference region, and of typical PET noise, on the stability and accuracy of the estimates was also investigated. The investigations showed that the parameter estimates in a simulated healthy mouse, using the data driven method, were within 10-30% of the simulated input for the range of occupancy levels simulated. Throughout all experimental conditions simulated, the accuracy and robustness of the estimates using the data driven method were much improved upon the typical method of using a static time window, especially at low receptor occupancy levels. Introducing a reference region caused a bias of approximately 10% over the range of occupancy levels. Based on extensive simulated experimental conditions, it was shown the data driven method provides accurate and precise estimates of B(avail) and appK(D) for a broader range of conditions compared to the original method. Copyright © 2014 Elsevier Inc. All rights reserved.
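The quantities being estimated obey the standard saturation-binding relation B = B(avail)·F/(appK(D) + F); a toy grid-search recovery from synthetic data (not the authors' data-driven, time-windowed procedure; all values illustrative):

```python
import numpy as np

def bound(free, b_avail, app_kd):
    """Specifically bound ligand as a function of free ligand concentration."""
    return b_avail * free / (app_kd + free)

rng = np.random.default_rng(5)
true_bavail, true_kd = 20.0, 2.0            # illustrative units
free = np.linspace(0.2, 10.0, 25)
data = bound(free, true_bavail, true_kd) + rng.normal(0, 0.1, free.size)

# Brute-force least squares over a parameter grid.
bs = np.linspace(10, 30, 201)
ks = np.linspace(0.5, 5, 181)
sse = [[np.sum((data - bound(free, b, k)) ** 2) for k in ks] for b in bs]
i, j = np.unravel_index(np.argmin(sse), (len(bs), len(ks)))
print(round(bs[i], 1), round(ks[j], 2))     # close to 20.0 and 2.0
```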

  1. Counting whales in a challenging, changing environment

    PubMed Central

    Williams, R.; Kelly, N.; Boebel, O.; Friedlaender, A. S.; Herr, H.; Kock, K.-H.; Lehnert, L. S.; Maksym, T.; Roberts, J.; Scheidat, M.; Siebert, U.; Brierley, A. S.

    2014-01-01

    Estimating abundance of Antarctic minke whales is central to the International Whaling Commission's conservation and management work and understanding impacts of climate change on polar marine ecosystems. Detecting abundance trends is problematic, in part because minke whales are frequently sighted within Antarctic sea ice where navigational safety concerns prevent ships from surveying. Using icebreaker-supported helicopters, we conducted aerial surveys across a gradient of ice conditions to estimate minke whale density in the Weddell Sea. The surveys revealed substantial numbers of whales inside the sea ice. The Antarctic summer sea ice is undergoing rapid regional change in annual extent, distribution, and length of ice-covered season. These trends, along with substantial interannual variability in ice conditions, affect the proportion of whales available to be counted by traditional shipboard surveys. The strong association between whales and the dynamic, changing sea ice requires reexamination of the power to detect trends in whale abundance or predict ecosystem responses to climate change. PMID:24622821

  2. Woodpecker densities in the big woods of Arkansas

    USGS Publications Warehouse

    Luscier, J.D.; Krementz, David G.

    2010-01-01

    Sightings of the now-feared-extinct ivory-billed woodpecker Campephilus principalis in 2004 in the Big Woods of Arkansas initiated a series of studies on how to best manage habitat for this endangered species as well as all woodpeckers in the area. Previous work suggested that densities of other woodpeckers, particularly pileated Dryocopus pileatus and red-bellied Melanerpes carolinus woodpeckers, might be useful in characterizing habitat use by the ivory-billed woodpecker. We estimated densities of six woodpecker species in the Big Woods during the breeding seasons of 2006 and 2007 and also during the winter season of 2007. Our estimated densities were as high as or higher than previously published woodpecker density estimates for the Southeastern United States. Density estimates ranged from 9.1 to 161.3 individuals/km2 across six woodpecker species. Our data suggest that the Big Woods of Arkansas is attractive to all woodpeckers using the region, including ivory-billed woodpeckers.

  3. Adjusting forest density estimates for surveyor bias in historical tree surveys

    Treesearch

    Brice B. Hanberry; Jian Yang; John M. Kabrick; Hong S. He

    2012-01-01

    The U.S. General Land Office surveys, conducted between the late 1700s to early 1900s, provide records of trees prior to widespread European and American colonial settlement. However, potential and documented surveyor bias raises questions about the reliability of historical tree density estimates and other metrics based on density estimated from these records. In this...

  4. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.
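The baseline encounter model that these SCR applications rely on is a half-normal function of Euclidean distance; the paper's contribution is to replace `d` below with a least-cost ("ecological") distance through a resistant landscape. Parameter values are illustrative:

```python
import numpy as np

def encounter_prob(activity_center, trap, p0=0.8, sigma=1.5):
    """Half-normal encounter model: detection probability decays with the
    distance between an animal's activity center and a trap.  Swapping this
    Euclidean distance for a least-cost-path distance yields the paper's
    ecological-distance model."""
    d = np.linalg.norm(np.asarray(activity_center) - np.asarray(trap))
    return p0 * np.exp(-d ** 2 / (2 * sigma ** 2))

near = encounter_prob((0.0, 0.0), (1.0, 0.0))
far = encounter_prob((0.0, 0.0), (4.0, 0.0))
print(near > far)  # True
```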

  5. Spatial capture–recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  6. Heart Rate Variability Can Be Used to Estimate Sleepiness-related Decrements in Psychomotor Vigilance during Total Sleep Deprivation

    PubMed Central

    Chua, Eric Chern-Pin; Tan, Wen-Qi; Yeo, Sing-Chen; Lau, Pauline; Lee, Ivan; Mien, Ivan Ho; Puvanendran, Kathiravelu; Gooley, Joshua J.

    2012-01-01

    Study Objectives: To assess whether changes in psychomotor vigilance during sleep deprivation can be estimated using heart rate variability (HRV). Design: HRV, ocular, and electroencephalogram (EEG) measures were compared for their ability to predict lapses on the Psychomotor Vigilance Task (PVT). Setting: Chronobiology and Sleep Laboratory, Duke-NUS Graduate Medical School, Singapore. Participants: Twenty-four healthy Chinese men (mean age ± SD = 25.9 ± 2.8 years). Interventions: Subjects were kept awake continuously for 40 hours under constant environmental conditions. Every 2 hours, subjects completed a 10-minute PVT to assess their ability to sustain visual attention. Measurements and Results: During each PVT, we examined the electrocardiogram (ECG), EEG, and percentage of time that the eyes were closed (PERCLOS). Similar to EEG power density and PERCLOS measures, the time course of ECG RR-interval power density in the 0.02-0.08 Hz range correlated with the 40-hour profile of PVT lapses. Based on receiver operating characteristic curves, RR-interval power density performed as well as EEG power density at identifying a sleepiness-related increase in PVT lapses above threshold. RR-interval power density (0.02-0.08 Hz) also classified subject performance with sensitivity and specificity similar to that of PERCLOS. Conclusions: The ECG carries information about a person's vigilance state. Hence, HRV measures could potentially be used to predict when an individual is at increased risk of attentional failure. Our results suggest that HRV monitoring, either alone or in combination with other physiologic measures, could be incorporated into safety devices to warn drowsy operators when their performance is impaired. Citation: Chua ECP; Tan WQ; Yeo SC; Lau P; Lee I; Mien IH; Puvanendran K; Gooley JJ. Heart rate variability can be used to estimate sleepiness-related decrements in psychomotor vigilance during total sleep deprivation. SLEEP 2012;35(3):325-334.
PMID:22379238
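Band-limited RR-interval power of the kind used here can be computed from an evenly resampled RR series with a simple FFT; the 4 Hz resampling rate and the synthetic series below are assumptions for illustration, not the study's preprocessing:

```python
import numpy as np

def band_power(x, fs, f_lo=0.02, f_hi=0.08):
    """Power of an (evenly resampled) RR-interval series in a frequency band."""
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

fs = 4.0                       # Hz, typical RR resampling rate (assumed)
t = np.arange(0, 600, 1 / fs)  # ten minutes of data
rng = np.random.default_rng(6)
baseline = 0.8 + 0.01 * rng.normal(size=t.size)          # RR in seconds
drowsy = baseline + 0.05 * np.sin(2 * np.pi * 0.05 * t)  # added 0.05 Hz oscillation
# A slow oscillation inside the 0.02-0.08 Hz band raises the band power.
print(band_power(drowsy, fs) > band_power(baseline, fs))  # True
```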

  7. Estimation of Confined Peak Strength of Crack-Damaged Rocks

    NASA Astrophysics Data System (ADS)

    Bahrani, Navid; Kaiser, Peter K.

    2017-02-01

    It is known that the unconfined compressive strength of rock decreases with increasing density of geological features such as micro-cracks, fractures, and veins, both at the laboratory specimen and rock block scales. This article deals with the confined peak strength of laboratory-scale rock specimens containing grain-scale strength-dominating features such as micro-cracks. A grain-based distinct element model, whereby the rock is simulated with grains that are allowed to deform and break, is used to investigate the influence of the density of cracks on the rock strength under unconfined and confined conditions. A grain-based specimen calibrated to the unconfined and confined strengths of intact and heat-treated Wombeyan marble is used to simulate rock specimens with varying crack densities. It is demonstrated how such cracks affect the peak strength, stress-strain curve and failure mode with increasing confinement. The results of numerical simulations in terms of unconfined and confined peak strengths are used to develop semi-empirical relations that relate the difference in strength between the intact and crack-damaged rocks to the confining pressure. It is shown how these relations can be used to estimate the confined peak strength of a rock with micro-cracks when the unconfined and confined strengths of the intact rock and the unconfined strength of the crack-damaged rock are known. This approach for estimating the confined strength of crack-damaged rock specimens, called the strength degradation approach, is then verified by application to published laboratory triaxial test data.

  8. Does bioelectrical impedance analysis accurately estimate the condition of threatened and endangered desert fish species?

    USGS Publications Warehouse

    Dibble, Kimberly L.; Yard, Micheal D.; Ward, David L.; Yackulic, Charles B.

    2017-01-01

    Bioelectrical impedance analysis (BIA) is a nonlethal tool with which to estimate the physiological condition of animals that has potential value in research on endangered species. However, the effectiveness of BIA varies by species, the methodology continues to be refined, and incidental mortality rates are unknown. Under laboratory conditions we tested the value of using BIA in addition to morphological measurements such as total length and wet mass to estimate proximate composition (lipid, protein, ash, water, dry mass, energy density) in the endangered Humpback Chub Gila cypha and Bonytail G. elegans and the species of concern Roundtail Chub G. robusta and conducted separate trials to estimate the mortality rates of these sensitive species. Although Humpback and Roundtail Chub exhibited no or low mortality in response to taking BIA measurements versus handling for length and wet-mass measurements, Bonytails exhibited 14% and 47% mortality in the BIA and handling experiments, respectively, indicating that survival following stress is species specific. Derived BIA measurements were included in the best models for most proximate components; however, the added value of BIA as a predictor was marginal except in the absence of accurate wet-mass data. Bioelectrical impedance analysis improved the R2 of the best percentage-based models by no more than 4% relative to models based on morphology. Simulated field conditions indicated that BIA models became increasingly better than morphometric models at estimating proximate composition as the observation error around wet-mass measurements increased. However, since the overall proportion of variance explained by percentage-based models was low and BIA was mostly a redundant predictor, we caution against the use of BIA in field applications for these sensitive fish species.

  9. Estimating turbidity current conditions from channel morphology: A Froude number approach

    NASA Astrophysics Data System (ADS)

    Sequeiros, Octavio E.

    2012-04-01

There is a growing need across different disciplines to develop better predictive tools for flow conditions of density and turbidity currents. Apart from resorting to complex numerical modeling or expensive field measurements, little is known about how to estimate gravity flow parameters from scarce available data and how they relate to each other. This study presents a new method to estimate normal flow conditions of gravity flows from channel morphology based on an extensive data set of laboratory and field measurements. The compilation consists of 78 published works containing 1092 combined measurements of velocity and concentration of gravity flows dating as far back as the early 1950s. Because the available data do not span all ranges of the critical parameters, such as bottom slope, a validated Reynolds-averaged Navier-Stokes (RANS) κ-ε numerical model is used to cover the gaps. It is shown that gravity flows fall within a range of Froude numbers spanning 1 order of magnitude centered on unity, as opposed to rivers and open-channel flows, which extend over a much wider range. It is also observed that the transition from subcritical to supercritical flow regime occurs around a slope of 1%, with a spread caused by parameters other than the bed slope, such as friction and suspended sediment settling velocity. The method is based on a set of equations relating Froude number to bed slope, combined friction, suspended material, and other flow parameters. The applications range from quick estimation of gravity flow conditions to improved numerical modeling and back calculation of missing parameters. A real case scenario of turbidity current estimation from a submarine canyon off the Nigerian coast is provided as an example.
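The flow regime above is characterized by the densimetric Froude number. A rough sketch of how it can be computed for a dilute turbidity current (the submerged specific gravity R = 1.65 assumes quartz sediment; all example values are hypothetical):

```python
import math

def densimetric_froude(U, h, C, R=1.65, g=9.81):
    """Densimetric Froude number of a dilute turbidity current.

    U : layer-averaged velocity (m/s)
    h : current thickness (m)
    C : layer-averaged volumetric sediment concentration (-)
    R : submerged specific gravity of sediment (rho_s/rho_w - 1), ~1.65 for quartz
    """
    g_prime = R * C * g            # reduced gravity (m/s^2)
    return U / math.sqrt(g_prime * h)

# Example: a 10 m thick current at 1.5 m/s carrying 1% sediment by volume.
Fr = densimetric_froude(U=1.5, h=10.0, C=0.01)
print(f"Fr = {Fr:.2f}")  # of order unity, as for the compiled gravity flows
```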

  10. Sensitivity of fish density estimates to standard analytical procedures applied to Great Lakes hydroacoustic data

    USGS Publications Warehouse

    Kocovsky, Patrick M.; Rudstam, Lars G.; Yule, Daniel L.; Warner, David M.; Schaner, Ted; Pientka, Bernie; Deller, John W.; Waterfield, Holly A.; Witzel, Larry D.; Sullivan, Patrick J.

    2013-01-01

    Standardized methods of data collection and analysis ensure quality and facilitate comparisons among systems. We evaluated the importance of three recommendations from the Standard Operating Procedure for hydroacoustics in the Laurentian Great Lakes (GLSOP) on density estimates of target species: noise subtraction; setting volume backscattering strength (Sv) thresholds from user-defined minimum target strength (TS) of interest (TS-based Sv threshold); and calculations of an index for multiple targets (Nv index) to identify and remove biased TS values. Eliminating noise had the predictable effect of decreasing density estimates in most lakes. Using the TS-based Sv threshold decreased fish densities in the middle and lower layers in the deepest lakes with abundant invertebrates (e.g., Mysis diluviana). Correcting for biased in situ TS increased measured density up to 86% in the shallower lakes, which had the highest fish densities. The current recommendations by the GLSOP significantly influence acoustic density estimates, but the degree of importance is lake dependent. Applying GLSOP recommendations, whether in the Laurentian Great Lakes or elsewhere, will improve our ability to compare results among lakes. We recommend further development of standards, including minimum TS and analytical cell size, for reducing the effect of biased in situ TS on density estimates.
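Of the three recommendations, noise subtraction is the most mechanical: it must be performed in the linear domain, not in dB. A generic sketch of that step (the 3 dB minimum signal-to-noise cutoff is an illustrative assumption, not a GLSOP value):

```python
import math

def subtract_noise_db(sv_db, noise_db, min_snr_db=3.0):
    """Subtract background noise from a volume backscattering sample in the
    linear domain, returning the corrected Sv in dB, or None when the
    signal-to-noise ratio falls below min_snr_db."""
    snr_db = sv_db - noise_db
    if snr_db < min_snr_db:
        return None  # sample dominated by noise; exclude from density estimate
    linear = 10.0 ** (sv_db / 10.0) - 10.0 ** (noise_db / 10.0)
    return 10.0 * math.log10(linear)

print(subtract_noise_db(-60.0, -75.0))   # small upward noise correction
print(subtract_noise_db(-80.0, -75.0))   # below the SNR cutoff -> None
```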

  11. Plasma distributions in meteor head echoes and implications for radar cross section interpretation

    NASA Astrophysics Data System (ADS)

    Marshall, Robert A.; Brown, Peter; Close, Sigrid

    2017-09-01

The derivation of meteoroid masses from radar measurements requires conversion of the measured radar cross section (RCS) to meteoroid mass. Typically, this conversion passes first through an estimate of the meteor plasma density derived from the RCS. However, the conversion from RCS to meteor plasma density requires assumptions on the radial electron density distribution. We use simultaneous triple-frequency measurements of the RCS for 63 large meteor head echoes to derive estimates of the meteor plasma size and density using five different possible radial electron density distributions. By fitting these distributions to the observed meteor RCS values and estimating the goodness-of-fit, we determine that the best fit to the data is a 1/r² plasma distribution, i.e., the electron density decays as 1/r² from the center of the meteor plasma. Next, we use the derived plasma distributions to estimate the electron line density q for each meteor using each of the five distributions. We show that depending on the choice of distribution, the line density can vary by a factor of three or more. We thus argue that a best estimate for the radial plasma distribution in a meteor head echo is necessary in order to have any confidence in derived meteoroid masses.
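The model-selection logic described above — fit each candidate radial profile and keep the best-fitting one — can be sketched with synthetic data and a simple amplitude-only least-squares fit (the authors' actual forward model maps profiles through the RCS, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
r = np.linspace(0.2, 3.0, 40)  # radial distance in units of a plasma scale size

# Candidate radial profiles (shape functions with unit amplitude).
profiles = {
    "gaussian":    np.exp(-r**2),
    "exponential": np.exp(-r),
    "1/r^2":       1.0 / r**2,
}

# Synthetic "observations": a 1/r^2 plasma plus measurement noise.
obs = 5.0 / r**2 + rng.normal(0.0, 0.3, r.size)

def sse_best_amplitude(shape, y):
    """Fit amplitude a minimizing ||y - a*shape||^2; return residual SSE."""
    a = (shape @ y) / (shape @ shape)
    resid = y - a * shape
    return resid @ resid

scores = {name: sse_best_amplitude(p, obs) for name, p in profiles.items()}
best = min(scores, key=scores.get)
print(best)  # the 1/r^2 profile fits the synthetic data best
```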

  12. NRLMSISE-00 Empirical Model of the Atmosphere: Statistical Comparisons and Scientific Issues

    NASA Technical Reports Server (NTRS)

    Aikin, A. C.; Picone, J. M.; Hedin, A. E.; Drob, D. P.

    2001-01-01

    The new NRLMSISE-00 model and the associated NRLMSIS database now include the following data: (1) total mass density from satellite accelerometers and from orbit determination, including the Jacchia and Barlier data; (2) temperature from incoherent scatter radar, and; (3) molecular oxygen number density, [O2], from solar ultraviolet occultation aboard the Solar Maximum Mission (SMM). A new component, 'anomalous oxygen,' allows for appreciable O(+) and hot atomic oxygen contributions to the total mass density at high altitudes and applies primarily to drag estimation above 500 km. Extensive tables compare our entire database to the NRLMSISE-00, MSISE-90, and Jacchia-70 models for different altitude bands and levels of geomagnetic activity. We also investigate scientific issues related to the new data sets in the NRLMSIS database. Especially noteworthy is the solar activity dependence of the Jacchia data, with which we investigate a large O(+) contribution to the total mass density under the combination of summer, low solar activity, high latitudes, and high altitudes. Under these conditions, except at very low solar activity, the Jacchia data and the Jacchia-70 model indeed show a significantly higher total mass density than does MSISE-90. However, under the corresponding winter conditions, the MSIS-class models represent a noticeable improvement relative to Jacchia-70 over a wide range of F(sub 10.7). Considering the two regimes together, NRLMSISE-00 achieves an improvement over both MSISE-90 and Jacchia-70 by incorporating advantages of each.

  13. Estimating the densities of benzene-derived explosives using atomic volumes.

    PubMed

    Ghule, Vikas D; Nirwan, Ayushi; Devi, Alka

    2018-02-09

The application of average atomic volumes to predict the crystal densities of benzene-derived energetic compounds of general formula C_aH_bN_cO_d is presented, along with the reliability of this method. The densities of 119 neutral nitrobenzenes, energetic salts, and cocrystals with diverse compositions were estimated and compared with experimental data. Of the 74 nitrobenzenes for which direct comparisons could be made, the % error in the estimated density was within 0-3% for 54 compounds, 3-5% for 12 compounds, and 5-8% for the remaining 8 compounds. Among 45 energetic salts and cocrystals, the % error in the estimated density was within 0-3% for 25 compounds, 3-5% for 13 compounds, and 5-7.4% for 7 compounds. The absolute error surpassed 0.05 g/cm³ for 27 of the 119 compounds (22%). The largest errors occurred for compounds containing fused rings and for compounds with three -NH2 or -OH groups. Overall, the present approach for estimating the densities of benzene-derived explosives with different functional groups was found to be reliable. Graphical abstract: application and reliability of average atomic volumes in crystal density prediction for energetic compounds containing a benzene ring.
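The method above amounts to rho = M / (N_A × ΣV_atom). A sketch with illustrative average atomic volumes (the values below are rough placeholders, not the fitted volumes from the paper):

```python
# Assumed average atomic volumes (Å^3 per atom) — illustrative values only.
ATOMIC_VOLUME = {"C": 14.0, "H": 6.0, "N": 12.0, "O": 11.0}
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol
AVOGADRO = 6.02214076e23

def estimated_density(formula):
    """Crystal density (g/cm^3) of a C_aH_bN_cO_d compound from summed
    atomic volumes: rho = M / (N_A * V_molecule)."""
    mass = sum(ATOMIC_MASS[el] * n for el, n in formula.items())            # g/mol
    volume_cm3 = sum(ATOMIC_VOLUME[el] * n for el, n in formula.items()) * 1e-24
    return mass / (AVOGADRO * volume_cm3)

# TNT, C7H5N3O6:
rho = estimated_density({"C": 7, "H": 5, "N": 3, "O": 6})
print(f"{rho:.2f} g/cm^3")
```

With these placeholder volumes the estimate for TNT lands near the experimental value (~1.65 g/cm³), which illustrates why the fitted volumes give low % errors for most nitrobenzenes.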

14. Effect of 9.6-GHz pulsed microwaves on the orb web spinning ability of the cross spider (Araneus diadematus)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liddle, C.G.; Putnam, J.P.; Lewter, O.L.

    1986-01-01

Eight cross spiders (Araneus diadematus) were exposed overnight (16 h) during web-building activity to pulsed 9.6-GHz microwaves at average power densities of 10, 1, and 0.1 mW/cm² (estimated SARs of 40, 4, and 0.4 mW/g). Under these conditions, 9.6-GHz pulsed microwaves did not affect the web-spinning ability of the cross spider.

  15. Determination of Foraging Thresholds and Effects of Application on Energetic Carrying Capacity for Waterfowl

    PubMed Central

    2015-01-01

    Energetic carrying capacity of habitats for wildlife is a fundamental concept used to better understand population ecology and prioritize conservation efforts. However, carrying capacity can be difficult to estimate accurately and simplified models often depend on many assumptions and few estimated parameters. We demonstrate the complex nature of parameterizing energetic carrying capacity models and use an experimental approach to describe a necessary parameter, a foraging threshold (i.e., density of food at which animals no longer can efficiently forage and acquire energy), for a guild of migratory birds. We created foraging patches with different fixed prey densities and monitored the numerical and behavioral responses of waterfowl (Anatidae) and depletion of foods during winter. Dabbling ducks (Anatini) fed extensively in plots and all initial densities of supplemented seed were rapidly reduced to 10 kg/ha and other natural seeds and tubers combined to 170 kg/ha, despite different starting densities. However, ducks did not abandon or stop foraging in wetlands when seed reduction ceased approximately two weeks into the winter-long experiment nor did they consistently distribute according to ideal-free predictions during this period. Dabbling duck use of experimental plots was not related to initial seed density, and residual seed and tuber densities varied among plant taxa and wetlands but not plots. Herein, we reached several conclusions: 1) foraging effort and numerical responses of dabbling ducks in winter were likely influenced by factors other than total food densities (e.g., predation risk, opportunity costs, forager condition), 2) foraging thresholds may vary among foraging locations, and 3) the numerical response of dabbling ducks may be an inconsistent predictor of habitat quality relative to seed and tuber density. 
We describe the implications for habitat conservation objectives of using different foraging thresholds in energetic carrying capacity models and suggest that scientists reevaluate the assumptions of these models used to guide habitat conservation. PMID:25790255

  16. Study of the cell activity in three-dimensional cell culture by using Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Arunngam, Pakajiraporn; Mahardika, Anggara; Hiroko, Matsuyoshi; Andriana, Bibin Bintang; Tabata, Yasuhiko; Sato, Hidetoshi

    2018-02-01

The purpose of this study is to develop a technique for estimating local cell activity in a cultured 3D cell aggregate containing gelatin hydrogel microspheres by using Raman spectroscopy, an invaluable technique allowing real-time, nondestructive, and noninvasive measurement. Cells in the body generally exist in 3D structures, in which physiological cell-cell interactions enhance cell survival and biological function. Although a 3D cell aggregate is a good model of the cells in living tissues, it has been difficult to estimate their physiological conditions because no effective technique exists for observing intact cells in the 3D structure. In this study, cell aggregates were formed from MC3T3-E1 (pre-osteoblast) cells and gelatin hydrogel microspheres. Under appropriate conditions, MC3T3-E1 cells can differentiate into osteoblasts. We assume that cell activity differs according to location in the aggregate, because cells near the surface of the aggregate have more access to oxygen and nutrients. Raman imaging was applied to measure a 3D image of the aggregate. The concentration of hydroxyapatite (HA) generated by osteoblasts was estimated from a strong band at 950-970 cm-1 assigned to PO4(3-) in HA, which reflects the activity at a specific site in the cell aggregate. The cell density at this site was analyzed by multivariate analysis of the 3D Raman image. Hence, the ratio between band intensity and cell density at the site represents the cell activity.
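The activity index described above is a ratio of an integrated band intensity to a local cell-density measure. A toy sketch on a synthetic spectrum (using a CH2 band as a cell-density proxy is an assumption for illustration; the study used multivariate analysis of the 3D Raman image):

```python
import numpy as np

# Synthetic spectrum: a hydroxyapatite PO4(3-) band near 960 cm^-1 plus a
# CH2 deformation band near 1450 cm^-1 used here as a cell-density proxy.
wavenumber = np.linspace(600.0, 1800.0, 1201)  # 1 cm^-1 steps
spectrum = (np.exp(-((wavenumber - 960.0) / 8.0) ** 2)
            + 0.4 * np.exp(-((wavenumber - 1450.0) / 15.0) ** 2))

ha_band = (wavenumber >= 950.0) & (wavenumber <= 970.0)
cell_band = (wavenumber >= 1420.0) & (wavenumber <= 1480.0)

ha_intensity = spectrum[ha_band].sum()   # integrated HA band area
cell_proxy = spectrum[cell_band].sum()   # stand-in for local cell density

activity = ha_intensity / cell_proxy     # per-cell mineralization activity
print(f"activity index = {activity:.2f}")
```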

  17. AC Loss Analysis of MgB2-Based Fully Superconducting Machines

    NASA Astrophysics Data System (ADS)

    Feddersen, M.; Haran, K. S.; Berg, F.

    2017-12-01

Superconducting electric machines have shown potential for significant increases in power density, making them attractive for size- and weight-sensitive applications such as offshore wind generation, marine propulsion, and hybrid-electric aircraft propulsion. Superconductors exhibit no loss under dc conditions, but ac current and field produce considerable losses due to hysteresis, eddy currents, and coupling mechanisms. For this reason, many present machines are designed to be partially superconducting, meaning that the dc field components are superconducting while the ac armature coils are conventional conductors. Fully superconducting designs can provide increases in power density with significantly higher armature current; however, a good estimate of ac losses is required to determine feasibility under the machine's intended operating conditions. This paper aims to characterize the expected losses in a fully superconducting machine targeted toward aircraft, based on an actively shielded, partially superconducting machine from prior work. Various factors are examined, such as magnet strength, operating frequency, and machine load, to produce a model for the loss in the superconducting components of the machine. This model is then used to optimize the design of the machine for minimal ac loss while maximizing power density. Important observations from the study are discussed.

  18. An adaptive technique for estimating the atmospheric density profile during the AE mission

    NASA Technical Reports Server (NTRS)

    Argentiero, P.

    1973-01-01

    A technique is presented for processing accelerometer data obtained during the AE missions in order to estimate the atmospheric density profile. A minimum variance, adaptive filter is utilized. The trajectory of the probe and probe parameters are in a consider mode where their estimates are unimproved but their associated uncertainties are permitted an impact on filter behavior. Simulations indicate that the technique is effective in estimating a density profile to within a few percentage points.
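Accelerometer-based density estimation ultimately inverts the drag equation along the trajectory; the filter above refines this with uncertainty bookkeeping for the probe parameters. The core relation, with hypothetical probe values:

```python
def density_from_drag(a_drag, v, m, Cd, A):
    """Atmospheric density from measured drag deceleration:
    a_drag = (1/2) * rho * v^2 * Cd * A / m  =>  rho = 2 m a_drag / (Cd A v^2).
    """
    return 2.0 * m * a_drag / (Cd * A * v**2)

# Illustrative (hypothetical) values: 700 kg spacecraft at 7.8 km/s with
# Cd = 2.2, 1.5 m^2 cross-section, and a measured deceleration of 1e-4 m/s^2.
rho = density_from_drag(a_drag=1e-4, v=7800.0, m=700.0, Cd=2.2, A=1.5)
print(f"rho = {rho:.3e} kg/m^3")
```

Because Cd, A, and m are uncertain "consider" parameters, their uncertainties propagate directly into rho, which is why the adaptive filter carries them without updating their estimates.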

  19. Quantitative in vivo receptor binding. I. Theory and application to the muscarinic cholinergic receptor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frey, K.A.; Ehrenkaufer, R.L.; Beaucage, S.

    1985-02-01

A novel approach to in vivo receptor binding experiments is presented which allows direct quantitation of binding site densities. The method is based on an equilibrium model of tracer uptake and is designed to produce a static distribution proportional to receptor density and to minimize possible confounding influences of regional blood flow, blood-brain barrier permeability, and nonspecific binding. This technique was applied to the measurement of regional muscarinic cholinergic receptor densities in rat brain using [3H]scopolamine. Specific in vivo binding of scopolamine demonstrated saturability, a pharmacologic profile, and regional densities which are consistent with interaction of the tracer with the muscarinic receptor. Estimates of receptor density obtained with the in vivo method and in vitro measurements in homogenates were highly correlated. Furthermore, reduction in striatal muscarinic receptors following ibotenic acid lesions resulted in a significant decrease in tracer uptake in vivo, indicating that the correlation between scopolamine distribution and receptor density may be used to demonstrate pathologic conditions. We propose that the general method presented here is directly applicable to investigation of high affinity binding sites for a variety of radioligands.
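The equilibrium binding model underlying such methods relates specifically bound tracer B to receptor density through B = Bmax·F/(Kd + F). A minimal sketch of recovering Bmax from synthetic saturation data via Scatchard linearization (all numbers hypothetical; the paper's in vivo protocol is considerably more involved):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical saturation-binding data: free tracer F (nM) and specific
# binding B (fmol/mg), generated from B = Bmax*F/(Kd + F) plus noise.
Bmax_true, Kd_true = 120.0, 4.0
F = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
B = Bmax_true * F / (Kd_true + F) + rng.normal(0.0, 1.0, F.size)

# Scatchard linearization: B/F = Bmax/Kd - B/Kd, i.e. a line in (B, B/F).
slope, intercept = np.polyfit(B, B / F, 1)
Kd_est = -1.0 / slope
Bmax_est = intercept * Kd_est
print(f"Kd ~ {Kd_est:.1f} nM, Bmax ~ {Bmax_est:.0f} fmol/mg")
```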

  20. Evaluation of Statistical Methodologies Used in U. S. Army Ordnance and Explosive Work

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ostrouchov, G

    2000-02-14

Oak Ridge National Laboratory was tasked by the U.S. Army Engineering and Support Center (Huntsville, AL) to evaluate the mathematical basis of existing software tools used to assist the Army with the characterization of sites potentially contaminated with unexploded ordnance (UXO). These software tools are collectively known as SiteStats/GridStats. The first purpose of the software is to guide sampling of underground anomalies to estimate a site's UXO density. The second purpose is to delineate areas of homogeneous UXO density that can be used in the formulation of response actions. It was found that SiteStats/GridStats does adequately guide the sampling so that the UXO density estimator for a sector is unbiased. However, the software's techniques for delineation of homogeneous areas perform less well than visual inspection, which is frequently used to override the software in the overall sectorization methodology. The main problems with the software lie in the criteria used to detect nonhomogeneity and those used to recommend the number of homogeneous subareas. SiteStats/GridStats is not a decision-making tool in the classical sense. Although it does provide information to decision makers, it does not require a decision based on that information. SiteStats/GridStats provides information that is supplemented by visual inspections, land-use plans, and risk estimates prior to making any decisions. Although the sector UXO density estimator is unbiased regardless of UXO density variation within a sector, its variability increases with increased sector density variation. For this reason, the current practice of visual inspection of individual sampled grid densities (as provided by SiteStats/GridStats) is necessary to ensure approximate homogeneity, particularly at sites with medium to high UXO density.
Together with SiteStats/GridStats override capabilities, this provides a sufficient mechanism for homogeneous sectorization and thus yields representative UXO density estimates. Objections raised by various parties to the use of a numerical "discriminator" in SiteStats/GridStats likely arose because the statistical technique in question is customarily applied for a different purpose and because of poor documentation. The "discriminator" in SiteStats/GridStats is a tuning parameter for the sampling process, and it affects the precision of the grid density estimates through changes in required sample size. It is recommended that sector characterization in terms of a map showing contour lines of constant UXO density, with an expressed uncertainty or confidence level, is a better basis for remediation decisions than a sector UXO density point estimate. A number of spatial density estimation techniques could be adapted to the UXO density estimation problem.
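The unbiased sector estimator discussed above is simply the mean of randomly sampled grid densities, with a standard error that grows with within-sector density variation. A minimal sketch (not the SiteStats/GridStats implementation; counts and grid size are hypothetical):

```python
import math

def sector_density(grid_counts, grid_area_ha):
    """Unbiased sector UXO density (items/ha) from randomly sampled grids,
    together with its standard error."""
    densities = [c / grid_area_ha for c in grid_counts]
    n = len(densities)
    mean = sum(densities) / n
    var = sum((d - mean) ** 2 for d in densities) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                                  # SE of the mean
    return mean, se

# Eight sampled 0.25 ha grids with these anomaly counts:
mean, se = sector_density([3, 0, 5, 2, 4, 1, 0, 6], grid_area_ha=0.25)
print(f"{mean:.1f} +/- {se:.1f} UXO/ha")
```

The standard error, not just the point estimate, is what a density contour map with an expressed confidence level would carry forward into remediation decisions.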

  1. Mass-loss rates, ionization fractions, shock velocities, and magnetic fields of stellar jets

    NASA Technical Reports Server (NTRS)

    Hartigan, Patrick; Morse, Jon A.; Raymond, John

    1994-01-01

In this paper we calculate emission-line ratios from a series of planar radiative shock models that cover a wide range of shock velocities, preshock densities, and magnetic fields. The models cover the initial conditions relevant to stellar jets, and we show how to estimate the ionization fractions and shock velocities in jets directly from observations of the strong emission lines in these flows. The ionization fractions in the HH 34, HH 47, and HH 111 jets are approximately 2%, considerably smaller than previous estimates, and the shock velocities are approximately 30 km/s. For each jet the ionization fractions were found from five different line ratios, and the estimates agree to within a factor of approximately 2. The scatter in the estimates of the shock velocities is also small (+/- 4 km/s). The low ionization fractions of stellar jets imply that the observed electron densities are much lower than the total densities, so the mass-loss rates in these flows are correspondingly higher (greater than approximately 2 × 10^-7 solar masses/yr). The mass-loss rates in jets are a significant fraction (1%-10%) of the disk accretion rates onto the young stellar objects that drive the outflows. The momentum and energy supplied by the visible portion of a typical stellar jet are sufficient to drive a weak molecular outflow. Magnetic fields in stellar jets are difficult to measure because the line ratios from a radiative shock with a magnetic field resemble those of a lower velocity shock without a field. The observed line fluxes can in principle indicate the strength of the field if the geometry of the shocks in the jet is well known.

  2. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

Wind-prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind-prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors, at increasing traffic densities, on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind-prediction errors up to 40 kt at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.

  3. A study of the physics and chemistry of TMC-1

    NASA Technical Reports Server (NTRS)

    Pratap, P.; Dickens, J. E.; Snell, R. L.; Miralles, M. P.; Bergin, E. A.; Irvine, W. M.; Schloerb, F. P.

    1997-01-01

We present a comprehensive study of the physical and chemical conditions along the TMC-1 ridge. Temperatures were estimated from observations of CH3CCH, NH3, and CO. Densities were obtained from a multitransition study of HC3N. The values of the density and temperature allow column densities for 13 molecular species to be estimated from statistical equilibrium calculations, using observations of rarer isotopomers where possible to minimize opacity effects. The most striking abundance variations relative to HCO+ along the ridge were seen for HC3N, CH3CCH, and SO, while smaller variations were seen in CS, C2H, and HCN. On the other hand, the NH3, HNC, and N2H+ abundances relative to HCO+ were determined to be constant, indicating that the so-called NH3 peak in TMC-1 is probably a peak in the ammonia column density rather than a relative abundance peak. In contrast, the well-studied cyanopolyyne peak is most likely due to an enhancement in the abundance of long-chain carbon species. Comparisons of the derived abundances to the results of time-dependent chemical models show good overall agreement for chemical timescales around 10^5 yr. We find that the observed abundance gradients can be explained either by a small variation in the chemical timescale from 1.2 × 10^5 to 1.8 × 10^5 yr or by a factor of 2 change in the density along the ridge. Alternatively, a variation in the C/O ratio from 0.4 to 0.5 along the ridge produces an abundance gradient similar to that observed.

  4. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
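The classification rule above — select the class with maximal estimated density — can be sketched independently of how the densities are estimated. Here Gaussian class-conditional densities stand in for the sigmoid-network estimator (parameters assumed known; the paper estimates them by maximizing mutual information):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density, standing in for an estimated p(x|class)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Two classes with (hypothetical) estimated density parameters.
classes = {"A": (0.0, 1.0), "B": (3.0, 1.5)}

def classify(x):
    """Select the class whose estimated density at x is maximal."""
    return max(classes, key=lambda c: gaussian_pdf(x, *classes[c]))

print(classify(0.5), classify(2.8))  # points near each mode go to that class
```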

  5. Dual Approach To Superquantile Estimation And Applications To Density Fitting

    DTIC Science & Technology

    2016-06-01

This work incorporates additional constraints to improve the fidelity of density estimates in tail regions. The investigation is limited to data with heavy tails, where risk quantification is typically the most difficult. Demonstrations are provided in the form of fits to samples of various heavy-tailed distributions. Subject terms: probability density estimation, epi-splines, optimization, risk quantification.

  6. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  7. Density estimation using the trapping web design: A geometric analysis

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    1994-01-01

Population densities for small mammal and arthropod populations can be estimated using capture frequencies for a web of traps. A conceptually simple geometric analysis that avoids the need to estimate a point on a density function is proposed. This analysis incorporates data from the outermost rings of traps, explaining large capture frequencies in these rings rather than truncating them from the analysis.

  8. Computer simulation of supersonic rarefied gas flow in the transition region, about a spherical probe; a Monte Carlo approach with application to rocket-borne ion probe experiments

    NASA Technical Reports Server (NTRS)

    Horton, B. E.; Bowhill, S. A.

    1971-01-01

    This report describes a Monte Carlo simulation of transition flow around a sphere. Conditions for the simulation correspond to neutral monatomic molecules at two altitudes (70 and 75 km) in the D region of the ionosphere. Results are presented in the form of density contours, velocity vector plots and density, velocity and temperature profiles for the two altitudes. Contours and density profiles are related to independent Monte Carlo and experimental studies, and drag coefficients are calculated and compared with available experimental data. The small computer used is a PDP-15 with 16 K of core, and a typical run for 75 km requires five iterations, each taking five hours. The results are recorded on DECTAPE to be printed when required, and the program provides error estimates for any flow field parameter.

  9. MODIS Based Estimation of Forest Aboveground Biomass in China.

    PubMed

    Yin, Guodong; Zhang, Yuan; Sun, Yan; Wang, Tao; Zeng, Zhenzhong; Piao, Shilong

    2015-01-01

    Accurate estimation of forest biomass C stock is essential to understand carbon cycles. However, current estimates of Chinese forest biomass are mostly based on inventory-based timber volumes and empirical conversion factors at the provincial scale, which could introduce large uncertainties in forest biomass estimation. Here we provide a data-driven estimate of Chinese forest aboveground biomass from 2001 to 2013 at a spatial resolution of 1 km by integrating a recently reviewed plot-level ground-measured forest aboveground biomass database with geospatial information from 1-km Moderate-Resolution Imaging Spectroradiometer (MODIS) dataset in a machine learning algorithm (the model tree ensemble, MTE). We show that Chinese forest aboveground biomass is 8.56 Pg C, which is mainly contributed by evergreen needle-leaf forests and deciduous broadleaf forests. The mean forest aboveground biomass density is 56.1 Mg C ha-1, with high values observed in temperate humid regions. The responses of forest aboveground biomass density to mean annual temperature are closely tied to water conditions; that is, negative responses dominate regions with mean annual precipitation less than 1300 mm y-1 and positive responses prevail in regions with mean annual precipitation higher than 2800 mm y-1. During the 2000s, the forests in China sequestered C by 61.9 Tg C y-1, and this C sink is mainly distributed in north China and may be attributed to warming climate, rising CO2 concentration, N deposition, and growth of young forests.

  10. MODIS Based Estimation of Forest Aboveground Biomass in China

    PubMed Central

    Sun, Yan; Wang, Tao; Zeng, Zhenzhong; Piao, Shilong

    2015-01-01

    Accurate estimation of forest biomass C stock is essential to understand carbon cycles. However, current estimates of Chinese forest biomass are mostly based on inventory-based timber volumes and empirical conversion factors at the provincial scale, which could introduce large uncertainties in forest biomass estimation. Here we provide a data-driven estimate of Chinese forest aboveground biomass from 2001 to 2013 at a spatial resolution of 1 km by integrating a recently reviewed plot-level ground-measured forest aboveground biomass database with geospatial information from 1-km Moderate-Resolution Imaging Spectroradiometer (MODIS) dataset in a machine learning algorithm (the model tree ensemble, MTE). We show that Chinese forest aboveground biomass is 8.56 Pg C, which is mainly contributed by evergreen needle-leaf forests and deciduous broadleaf forests. The mean forest aboveground biomass density is 56.1 Mg C ha−1, with high values observed in temperate humid regions. The responses of forest aboveground biomass density to mean annual temperature are closely tied to water conditions; that is, negative responses dominate regions with mean annual precipitation less than 1300 mm y−1 and positive responses prevail in regions with mean annual precipitation higher than 2800 mm y−1. During the 2000s, the forests in China sequestered C by 61.9 Tg C y−1, and this C sink is mainly distributed in north China and may be attributed to warming climate, rising CO2 concentration, N deposition, and growth of young forests. PMID:26115195

  11. Estimation of tiger densities in India using photographic captures and recaptures

    USGS Publications Warehouse

    Karanth, U.; Nichols, J.D.

    1998-01-01

    Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low-density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 to 1.00. The estimated mean tiger densities ranged from 4.1 (SE = 1.31) to 11.7 (SE = 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
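    The logic of converting photo-capture histories to a density can be sketched with a minimal two-occasion estimator. This is illustrative only: the study fit closed-population capture-recapture models to multi-occasion data, not this simple Lincoln-Petersen (Chapman) estimator, and all numbers below are hypothetical.

```python
# Illustrative two-occasion capture-recapture sketch (NOT the study's
# multi-occasion closed-population models); all numbers are hypothetical.

def chapman_estimate(n1, n2, m2):
    """Bias-corrected abundance estimate: n1 tigers photo-captured on
    occasion 1, n2 on occasion 2, m2 seen on both (matched by stripes)."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def density_per_100km2(n_hat, sampled_area_km2):
    """Convert abundance to the paper's units of tigers/100 km2."""
    return 100.0 * n_hat / sampled_area_km2

n_hat = chapman_estimate(n1=10, n2=12, m2=7)          # 16.875 tigers
d_hat = density_per_100km2(n_hat, sampled_area_km2=200.0)
```

    The effectively sampled area is itself an estimation problem (trap spacing vs. home-range size), which is part of why the study's design and models are more involved than this sketch.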

  12. The "Tracked Roaming Transect" and distance sampling methods increase the efficiency of underwater visual censuses.

    PubMed

    Irigoyen, Alejo J; Rojo, Irene; Calò, Antonio; Trobbiani, Gastón; Sánchez-Carnero, Noela; García-Charton, José A

    2018-01-01

    Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of a UVC method based on diver-towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which the perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time- and cost-efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in estimations of perpendicular distances can seriously affect DS density estimations. To assess the occurrence of distance estimation errors and their dependence on the observer's experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method.
The accuracy of the estimates was measured comparing the actual total abundance with those estimated by divers using FW3, FW10, and DS estimators. Density estimates differed by 13% (range 0.1-31%) from the actual values (average = 13.09%; median = 14.16%). Based on our results we encourage the use of the Tracked Roaming Transect with Distance Sampling (TRT+DS) method for improving density estimates of species occurring at low densities and/or highly aggregated, as well as for exploratory rapid-assessment surveys in which divers could gather spatial ecological and ecosystem information on large areas during UVC.
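    The difference between a fixed-width and a distance-sampling density can be made concrete with a minimal line-transect sketch. This assumes a half-normal detection function fitted by maximum likelihood; it is a textbook-style simplification, not the paper's actual analysis, and the numbers are hypothetical.

```python
import math

# Minimal line-transect distance-sampling sketch, assuming a half-normal
# detection function g(x) = exp(-x^2 / (2 sigma^2)); illustrative only.

def halfnormal_sigma_mle(perp_distances):
    """MLE of the half-normal scale from perpendicular distances (m)."""
    return math.sqrt(sum(x * x for x in perp_distances) / len(perp_distances))

def effective_strip_width(sigma):
    """ESW = integral_0^inf g(x) dx = sigma * sqrt(pi / 2)."""
    return sigma * math.sqrt(math.pi / 2.0)

def ds_density(n_fish, transect_length_m, sigma):
    """Fish per m^2: counts over an effective area 2 * ESW * L
    (one ESW on each side of the transect line)."""
    return n_fish / (2.0 * effective_strip_width(sigma) * transect_length_m)
```

    Because sigma is estimated from the divers' judged perpendicular distances, systematic errors in those judgments propagate directly into the ESW and hence the density, which is the error source the wooden-model experiment quantifies.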

  13. The "Tracked Roaming Transect" and distance sampling methods increase the efficiency of underwater visual censuses

    PubMed Central

    2018-01-01

    Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of a UVC method based on diver-towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which the perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time- and cost-efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in estimations of perpendicular distances can seriously affect DS density estimations. To assess the occurrence of distance estimation errors and their dependence on the observer’s experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method.
The accuracy of the estimates was measured comparing the actual total abundance with those estimated by divers using FW3, FW10, and DS estimators. Density estimates differed by 13% (range 0.1–31%) from the actual values (average = 13.09%; median = 14.16%). Based on our results we encourage the use of the Tracked Roaming Transect with Distance Sampling (TRT+DS) method for improving density estimates of species occurring at low densities and/or highly aggregated, as well as for exploratory rapid-assessment surveys in which divers could gather spatial ecological and ecosystem information on large areas during UVC. PMID:29324887

  14. Estimating large carnivore populations at global scale based on spatial predictions of density and distribution – Application to the jaguar (Panthera onca)

    PubMed Central

    Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard

    2018-01-01

    Broad scale population estimates of declining species are desired for conservation efforts. However, for many secretive species including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species’ entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human foot print index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world’s jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities (< 1 person/km2) coinciding with high primary productivity in the core area of jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129

  15. In vivo measurements of the influence of the skin on cerebral oxygenation changes measured with near-infrared spectrophotometry (NIRS)

    NASA Astrophysics Data System (ADS)

    Klaessens, John H. G. M.; van Os, Sandra H. G.; Hopman, Jeroen C. W.; Liem, K. D.; van de Bor, Margot; Thijssen, Johan M.

    2004-07-01

    Goal: To investigate the influence of skin on the accuracy and precision of regional cerebral oxygenation measurements using CW-NIRS and to reduce the inter individual variability of NIRS measurements by normalization with data from an extra wavelength. Method: Three piglets (7.8-9.3 kg) were anesthetized, paralyzed and mechanically ventilated. Receiving optodes were placed over the left and right hemisphere (C3, C4 EEG placement code) and one emitting optode on Cz position (optode distance = 1.8 cm). Optical densities (OD) were measured for 3 wavelengths (767, 850, 905 nm) (OXYMON) during stable normoxic, mild and deep hypoxemic conditions (SaO2 = 100%, 80% and 60%) of one minute in each region. This was repeated 3 times: all optodes with skin (condition 1); one receiving optode directly on the skull (2); emitting and also receiving optode on the skull (3). The absolute cO2Hb, cHHb, ctHb concentrations (μmol/L) were calculated from the ODs and changes with respect to the SaO2 = 100% condition were estimated. Because ODs varied over a large range, the light intensity was externally attenuated to adapt to the range of the spectrophotometer. The data were then corrected for these attenuation effects and for pathlength changes caused by skin removal using the OD at the independent wavelength (λ = 975 nm). Results: Removal of the skin resulted in an increase of the absorption values (average 0.25 OD in condition 2 and 0.42 OD in condition 3 with respect to condition 1). The change from normoxic to medium, and to deep hypoxic conditions produced a decrease of cO2Hb (-15, and -29 μmol/L, respectively), an increase in cHHb (+16, and +35 μmol/L) and in ctHb (+1, and +5 μmol/L). Total skin removal yielded an extra change in cO2Hb (-5, -1 μmol/L), cHHb (+8, +9 μmol/L), and ctHb (+3, +8 μmol/L). The coefficient of variability of the absolute concentration changes was considerably decreased by the normalization of densities by the density obtained at 795 nm. 
Conclusion: Skin and subcutaneous layers influence the regional oxygenation measurements but the estimated concentration changes are dominated by changes of the oxygenation levels in the brain. Inter individual variability can be considerably reduced by the normalization.
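    The step from optical densities to chromophore concentration changes is a modified Beer-Lambert inversion, which can be sketched as a small least-squares solve. The extinction coefficients and the differential pathlength factor below are hypothetical placeholders for illustration, not the OXYMON calibration values used in the study.

```python
import numpy as np

# Sketch of a modified Beer-Lambert inversion. The extinction matrix E and
# the assumed DPF are HYPOTHETICAL illustration values, not the instrument's.
E = np.array([[1.6, 0.6],    # 767 nm: [HHb, O2Hb] extinction, 1/(mM*cm)
              [0.8, 1.1],    # 850 nm
              [0.9, 1.3]])   # 905 nm
path_cm = 1.8 * 4.0          # optode distance (cm) x assumed pathlength factor

def concentration_changes(delta_od):
    """Least-squares solve of delta_OD = (E * path) @ delta_c for the
    concentration changes [dHHb, dO2Hb] (mM) from 3 wavelengths."""
    dc, *_ = np.linalg.lstsq(E * path_cm, np.asarray(delta_od), rcond=None)
    return dc
```

    With three wavelengths and two chromophores the system is overdetermined, which is what makes the third wavelength (and the extra normalization wavelength discussed above) useful for stabilizing the estimates.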

  16. Analytical monitoring of soil bioengineering structures in the Tuscan Emilian Apennines of Italy

    NASA Astrophysics Data System (ADS)

    Selli, Lavinia; Guastini, Enrico

    2014-05-01

    Soil bioengineering has been an appropriate solution to deal with erosion problems and shallow landslides in the North Apennines, Italy. The objective of our research was to examine critical aspects of soil bioengineering works. We monitored the works that have been carried out in the Tuscan Emilian Apennines by testing the suitability of different plant species, and we analyzed in detail the timber structures of wooden crib walls. The plant species were mainly Salix alba and Salix purpurea, which gave good sprouting and survival rates but showed some issues in growing on dry and sunny Apennine lands, where other shrubs such as Spanish broom, blackthorn, cornel-tree and eglantine would be more suitable. The localized analysis of the wooden elements was carried out by gathering parts from the poles and obtaining samples in order to determine their density. The hypothetical initial density of the wood used in the structure was estimated, and the residual density then calculated. This analysis allows us to determine the general condition of the wood, highlighting the structures in the worst condition (the one in Pianaccio shows a residual density close to 70%, instead of the 90% found on other structures) and those whose degraded wood has undergone the greatest damage (Pianaccio here too, with 50%, followed by Campoferrario at 60% and by Pian di Favale with 85%, a rather good value for the most degraded wood in the structure).

  17. Observer variability in estimating numbers: An experiment

    USGS Publications Warehouse

    Erwin, R.M.

    1982-01-01

    Census estimates of bird populations provide an essential framework for a host of research and management questions. However, with some exceptions, the reliability of numerical estimates and the factors influencing them have received insufficient attention. Independent of the problems associated with habitat type, weather conditions, cryptic coloration, etc., estimates may vary widely due only to intrinsic differences in observers' abilities to estimate numbers. Lessons learned in the field of perceptual psychology may be usefully applied to 'real world' problems in field ornithology. Based largely on dot discrimination tests in the laboratory, it was found that numerical abundance, density of objects, spatial configuration, color, background, and other variables influence individual accuracy in estimating numbers. The primary purpose of the present experiment was to assess the effects of observer, prior experience, and numerical range on accuracy in estimating numbers of waterfowl from black-and-white photographs. By using photographs of animals rather than black dots, I felt the results could be applied more meaningfully to field situations. Further, reinforcement was provided throughout some experiments to examine the influence of training on accuracy.

  18. Thermospheric density and satellite drag modeling

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush Mukesh

    The United States depends heavily on its space infrastructure for a vast number of commercial and military applications. Space Situational Awareness (SSA) and Threat Assessment require maintaining accurate knowledge of the orbits of resident space objects (RSOs) and the associated uncertainties. Atmospheric drag is the largest source of uncertainty for low-perigee RSOs. The uncertainty stems from inaccurate modeling of neutral atmospheric mass density and inaccurate modeling of the interaction between the atmosphere and the RSO. In order to reduce the uncertainty in drag modeling, both atmospheric density and drag coefficient (CD) models need to be improved. Early atmospheric density models were developed from orbital drag data or observations of a few early compact satellites. To simplify calculations, densities derived from orbit data used a fixed CD value of 2.2 measured in a laboratory using clean surfaces. Measurements from pressure gauges obtained in the early 1990s have confirmed the adsorption of atomic oxygen on satellite surfaces. The varying levels of adsorbed oxygen along with the constantly changing atmospheric conditions cause large variations in CD with altitude and along the orbit of the satellite. Therefore, the use of a fixed CD in early development has resulted in large biases in atmospheric density models. A technique for generating corrections to empirical density models using precision orbit ephemerides (POE) as measurements in an optimal orbit determination process was recently developed. The process generates simultaneous corrections to the atmospheric density and ballistic coefficient (BC) by modeling the corrections as statistical exponentially decaying Gauss-Markov processes. The technique has been successfully implemented in generating density corrections using the CHAMP and GRACE satellites. 
This work examines the effectiveness, specifically the transfer of density models errors into BC estimates, of the technique using the CHAMP and GRACE satellites. Moving toward accurate atmospheric models and absolute densities requires physics based models for CD. Closed-form solutions of CD have been developed and exist for a handful of simple geometries (flat plate, sphere, and cylinder). However, for complex geometries, the Direct Simulation Monte Carlo (DSMC) method is an important tool for developing CD models. DSMC is computationally intensive and real-time simulations for CD are not feasible. Therefore, parameterized models for CD are required. Modeling CD for an RSO requires knowledge of the gas-surface interaction (GSI) that defines the manner in which the atmospheric particles exchange momentum and energy with the surface. The momentum and energy exchange is further influenced by likely adsorption of atomic oxygen that may partially or completely cover the surface. An important parameter that characterizes the GSI is the energy accommodation coefficient, α. An innovative and state-of-the-art technique of developing parameterized drag coefficient models is presented and validated using the GRACE satellite. The effect of gas-surface interactions on physical drag coefficients is examined. An attempt to reveal the nature of gas-surface interactions at altitudes above 500 km is made using the STELLA satellite. A model that can accurately estimate CD has the potential to: (i) reduce the sources of uncertainty in the drag model, (ii) improve density estimates by resolving time-varying biases and moving toward absolute densities, and (iii) increase data sources for density estimation by allowing for the use of a wide range of RSOs as information sources. Results from this work have the potential to significantly improve the accuracy of conjunction analysis and SSA.
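    The correction-modeling idea above can be illustrated with a minimal simulation of the first-order (exponentially decaying) Gauss-Markov process; the time constant, step size, and noise level below are arbitrary illustration values, not the values used with the CHAMP/GRACE processing.

```python
import math
import random

# Minimal first-order Gauss-Markov process sketch; tau, dt, and sigma_w
# are arbitrary illustration values.

def gauss_markov(x0, tau, dt, steps, sigma_w, rng):
    """x[k+1] = exp(-dt/tau) * x[k] + w[k], with w[k] ~ N(0, sigma_w^2).
    Returns the list of states x[0..steps]."""
    phi = math.exp(-dt / tau)
    xs = [x0]
    for _ in range(steps):
        xs.append(phi * xs[-1] + rng.gauss(0.0, sigma_w))
    return xs

# With no process noise the correction decays exponentially toward zero,
# which keeps estimated density/BC corrections bounded between updates.
decay = gauss_markov(1.0, tau=1800.0, dt=60.0, steps=5,
                     sigma_w=0.0, rng=random.Random(0))
```

    In the orbit-determination setting, the filter estimates the states of two such processes (density and BC corrections) simultaneously, which is why model errors in one can leak into the other, the transfer this work examines.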

  19. A robust method for estimating motorbike count based on visual information learning

    NASA Astrophysics Data System (ADS)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

    Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. Under such conditions, the problem of separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem is gradually shifted into drawing statistical inferences about target-object density from shape [4], local features [5], etc. Those studies indicate a correlation between local features and the number of target objects, but they are inadequate for constructing an accurate model of vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations and can achieve high accuracy in the presence of occlusion. Firstly, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from these local features. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method achieves better accuracy in comparison to others.
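    The second half of such a pipeline (local descriptors → Bag-of-Words histogram → learned mapping to a count) can be sketched compactly. The descriptors and codebook below are stand-ins: the paper extracts real SURF descriptors and learns its codebook from training images, and its learned mapping need not be a linear least-squares fit.

```python
import numpy as np

# Bag-of-Words count-regression sketch; descriptors/codebook are stand-ins
# for real SURF features and a learned visual vocabulary.

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codebook word and
    return the normalized word-frequency histogram (the global feature)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def fit_count_model(histograms, counts):
    """Least-squares linear map (weights + intercept) from BoW histograms
    to object counts; a simple stand-in for the learned mapping."""
    X = np.column_stack([histograms, np.ones(len(histograms))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(counts, float), rcond=None)
    return coef
```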

  20. DS — Software for analyzing data collected using double sampling

    USGS Publications Warehouse

    Bart, Jonathan; Hartley, Dana

    2011-01-01

    DS analyzes count data to estimate density or relative density and population size when appropriate. The software is available at http://iwcbm.dev4.fsr.com/IWCBM/default.asp?PageID=126. The software was designed to analyze data collected using double sampling, but it also can be used to analyze index data. DS is not currently configured to apply distance methods or methods based on capture-recapture theory. Double sampling for the purpose of this report means surveying a sample of locations with a rapid method of unknown accuracy and surveying a subset of these locations using a more intensive method assumed to yield unbiased estimates. "Detection ratios" are calculated as the ratio of results from rapid surveys on intensive plots to the number actually present as determined from the intensive surveys. The detection ratios are used to adjust results from the rapid surveys. The formula for density is (results from rapid survey)/(estimated detection ratio from intensive surveys). Population sizes are estimated as (density)(area). Double sampling is well-established in the survey sampling literature—see Cochran (1977) for the basic theory, Smith (1995) for applications of double sampling in waterfowl surveys, Bart and Earnst (2002, 2005) for discussions of its use in wildlife studies, and Bart and others (in press) for a detailed account of how the method was used to survey shorebirds across the arctic region of North America. Indices are surveys that do not involve complete counts of well-defined plots or recording information to estimate detection rates (Thompson and others, 1998). In most cases, such data should not be used to estimate density or population size but, under some circumstances, may be used to compare two densities or estimate how density changes through time or across space (Williams and others, 2005). The Breeding Bird Survey (Sauer and others, 2008) provides a good example of an index survey. 
Surveyors record all birds detected but do not record any information, such as distance or whether each bird is recorded in subperiods, that could be used to estimate detection rates. Nonetheless, the data are widely used to estimate temporal trends and spatial patterns in abundance (Sauer and others, 2008). DS produces estimates of density (or relative density for indices) by species and stratum. Strata are usually defined using region and habitat but other variables may be used, and the entire study area may be classified as a single stratum. Population size in each stratum and for the entire study area also is estimated for each species. For indices, the estimated totals generally are only useful if (a) plots are surveyed so that densities can be calculated and extrapolated to the entire study area and (b) if the detection rates are close to 1.0. All estimates are accompanied by standard errors (SE) and coefficients of variation (CV, that is, SE/estimate).
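    The estimators stated above are simple enough to write out directly; the formulas follow the report, while the counts and areas in any example call are hypothetical.

```python
# Double-sampling estimators as described above; example inputs would be
# hypothetical counts, not data from any real survey.

def detection_ratio(rapid_counts, intensive_counts):
    """Ratio of rapid-survey results to intensive-survey results on the
    subset of plots surveyed with both methods."""
    return sum(rapid_counts) / sum(intensive_counts)

def density(rapid_total, ratio, area_surveyed):
    """Density = (results from rapid survey) / (detection ratio), per unit
    of the area covered by the rapid surveys."""
    return (rapid_total / ratio) / area_surveyed

def population_size(dens, total_area):
    """Population size = (density)(area)."""
    return dens * total_area
```

    Note that a detection ratio near 1.0 is exactly the condition under which the report says index-survey totals become interpretable as population estimates.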

  1. Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data.

    PubMed

    Nakamura, Yoshihiro; Hasegawa, Osamu

    2017-01-01

    With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. Predicting the distribution underlying such data beforehand is difficult; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns the given data as networks of prototypes; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
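    For reference, the classical KDE that KDESOINN extends can be written from scratch in a few lines; this is the standard fixed-bandwidth Gaussian-kernel estimator, not KDESOINN itself.

```python
import math

# Classical fixed-bandwidth Gaussian kernel density estimation (the "KDE"
# that KDESOINN extends); not an implementation of KDESOINN.

def kde_pdf(x, samples, h):
    """f_hat(x) = (1/(n*h)) * sum_i K((x - x_i) / h),
    where K is the standard normal pdf and h is the bandwidth."""
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return sum(k((x - xi) / h) for xi in samples) / (len(samples) * h)
```

    The cost per query grows with the number of stored samples, which is one motivation for prototype-based extensions like KDESOINN on massive data streams.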

  2. Water quality assessment by means of HFNI valvometry and high-frequency data modeling.

    PubMed

    Sow, Mohamedou; Durrieu, Gilles; Briollais, Laurent; Ciret, Pierre; Massabuau, Jean-Charles

    2011-11-01

    The high-frequency measurements of valve activity in bivalves (e.g., valvometry) over a long period of time and in various environmental conditions allow a very accurate study of their behaviors as well as a global analysis of possible perturbations due to the environment. Valvometry uses the bivalve's ability to close its shell when exposed to a contaminant or other abnormal environmental conditions as an alarm to indicate possible perturbations in the environment. The modeling of such high-frequency serial valvometry data is statistically challenging, and here, a nonparametric approach based on kernel estimation is proposed. This method has the advantage of summarizing complex data into a simple density profile obtained from each animal over every 24-h period, to ultimately make inferences about the effects of time and external conditions on this profile. The statistical properties of the estimator are presented. Through an application to a sample of 16 oysters living in the Bay of Arcachon (France), we demonstrate that this method can be used first to estimate the normal biological rhythms of permanently immersed oysters and second to detect perturbations of these rhythms due to changes in their environment. We anticipate that this approach could make an important contribution to the survey of aquatic systems.

  3. A theory of stationarity and asymptotic approach in dissipative systems

    NASA Astrophysics Data System (ADS)

    Rubel, Michael Thomas

    2007-05-01

    The approximate dynamics of many physical phenomena, including turbulence, can be represented by dissipative systems of ordinary differential equations. One often turns to numerical integration to solve them. There is an incompatibility, however, between the answers it can produce (i.e., specific solution trajectories) and the questions one might wish to ask (e.g., what behavior would be typical in the laboratory?). To determine its outcome, numerical integration requires more detailed initial conditions than a laboratory could normally provide. In place of initial conditions, experiments stipulate how tests should be carried out: only under statistically stationary conditions, for example, or only during asymptotic approach to a final state. Stipulations such as these, rather than initial conditions, are what determine outcomes in the laboratory. This theoretical study examines whether the points of view can be reconciled: What is the relationship between one's statistical stipulations for how an experiment should be carried out--stationarity or asymptotic approach--and the expected results? How might those results be determined without invoking initial conditions explicitly? To answer these questions, stationarity and asymptotic approach conditions are analyzed in detail. Each condition is treated as a statistical constraint on the system--a restriction on the probability density of states that might be occupied when measurements take place. For stationarity, this reasoning leads to a singular, invariant probability density which is already familiar from dynamical systems theory. For asymptotic approach, it leads to a new, more regular probability density field. 
A conjecture regarding what appears to be a limit relationship between the two densities is presented. By making use of the new probability densities, one can derive output statistics directly, avoiding the need to create or manipulate initial data, and thereby avoiding the conceptual incompatibility mentioned above. This approach also provides a clean way to derive reduced-order models, complete with local and global error estimates, as well as a way to compare existing reduced-order models objectively. The new approach is explored in the context of five separate test problems: a trivial one-dimensional linear system, a damped unforced linear oscillator in two dimensions, the isothermal Rayleigh-Plesset equation, Lorenz's equations, and the Stokes limit of Burgers' equation in one space dimension. In each case, various output statistics are deduced without recourse to initial conditions. Further, reduced-order models are constructed for asymptotic approach of the damped unforced linear oscillator, the isothermal Rayleigh-Plesset system, and Lorenz's equations, and for stationarity of Lorenz's equations.
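    The idea of an invariant (stationary) probability density can be illustrated numerically by histogramming a long trajectory of a toy map; the logistic map at r = 4 below is chosen purely for illustration and is not one of the paper's test problems.

```python
# Numerical illustration of a stationary (invariant) density: histogram a
# long trajectory of a toy map. The logistic map here is an illustrative
# stand-in, not one of the paper's five test problems.

def invariant_density(f, x0, n_steps, n_bins, burn_in=1000):
    """Approximate the invariant density of a map f on [0, 1] by
    histogramming a long trajectory after discarding a transient."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = f(x)
    counts = [0] * n_bins
    for _ in range(n_steps):
        x = f(x)
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    width = 1.0 / n_bins
    return [c / (n_steps * width) for c in counts]  # integrates to 1

logistic = lambda x: 4.0 * x * (1.0 - x)
dens = invariant_density(logistic, 0.2, n_steps=200000, n_bins=50)
```

    For this map the known invariant density 1/(pi*sqrt(x(1-x))) piles probability near the endpoints, which the histogram reproduces; statistics of "typical" behavior then come from averaging against this density rather than from any particular initial condition.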

  4. Camera traps and activity signs to estimate wild boar density and derive abundance indices.

    PubMed

    Massei, Giovanna; Coats, Julia; Lambert, Mark Simon; Pietravalle, Stephane; Gill, Robin; Cowan, Dave

    2018-04-01

    Populations of wild boar and feral pigs are increasing worldwide, in parallel with their significant environmental and economic impact. Reliable methods of monitoring trends and estimating abundance are needed to measure the effects of interventions on population size. The main aims of this study, carried out in five English woodlands, were: (i) to compare wild boar abundance indices obtained from camera trap surveys and from activity signs; and (ii) to assess the precision of density estimates in relation to different densities of camera traps. For each woodland, we calculated a passive activity index (PAI) based on camera trap surveys, rooting activity and wild boar trails on transects, and estimated absolute densities based on camera trap surveys. PAIs obtained using different methods showed similar patterns. We found significant between-year differences in abundance of wild boar using PAIs based on camera trap surveys and on trails on transects, but not on signs of rooting on transects. The density of wild boar from camera trap surveys varied between 0.7 and 7 animals/km2. Increasing the density of camera traps above nine per km2 did not increase the precision of the estimate of wild boar density. PAIs based on number of wild boar trails and on camera trap data appear to be more sensitive to changes in population size than PAIs based on signs of rooting. For wild boar densities similar to those recorded in this study, nine camera traps per km2 are sufficient to estimate the mean density of wild boar. © 2017 Crown copyright. Pest Management Science © 2017 Society of Chemical Industry.

  5. Workshop on the Detection, Classification, Localization and Density Estimation of Marine Mammals Using Passive Acoustics - 2015

    DTIC Science & Technology

    2015-09-30

    John A. Hildebrand, Scripps Institution of Oceanography, UCSD, La Jolla. LONG-TERM GOALS: The goal of this project was to bring together the research community working on marine mammal acoustics to discuss detection, classification, localization and density estimation methods.

  6. Detection and quantification of solute clusters in a nanostructured ferritic alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Michael K.; Larson, David J.; Reinhard, D. A.

    2014-12-26

    A series of simulated atom probe datasets were examined with a friends-of-friends method to establish the detection efficiency required to resolve solute clusters in the ferrite phase of a 14YWT nanostructured ferritic alloy. The sizes and number densities of solute clusters in the ferrite of the as-milled mechanically-alloyed condition and the stir zone of a friction stir weld were estimated with a prototype high-detection-efficiency (~80%) local electrode atom probe. High number densities of solute clusters containing between 2 and 9 solute atoms of Ti, Y and O, 1.8 × 10²⁴ m⁻³ and 1.2 × 10²⁴ m⁻³ respectively, were detected for these two conditions. Furthermore, these results support first-principles calculations predicting that vacancies stabilize these Ti–Y–O clusters, which retard diffusion and contribute to the excellent high-temperature stability of the microstructure and radiation tolerance of nanostructured ferritic alloys.
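
    The friends-of-friends idea mentioned above links any two atoms closer than a cutoff distance and takes connected components of those links as clusters. A minimal sketch, not the authors' implementation (the cutoff, minimum cluster size, and coordinates below are invented for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(points, d_max, min_size=2):
    """Group points into clusters: two points are 'friends' if within
    d_max of each other; clusters are connected components of friendship."""
    tree = cKDTree(points)
    pairs = tree.query_pairs(d_max)   # all index pairs closer than d_max

    # Union-find over the linked pairs
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return [c for c in clusters.values() if len(c) >= min_size]

# Two tight clumps plus one isolated point (hypothetical coordinates, nm)
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.05, 0.1, 0.0],
                [5.0, 5.0, 5.0], [5.1, 5.0, 5.0],
                [20.0, 20.0, 20.0]])
clusters = friends_of_friends(pts, d_max=0.5)
# → two clusters (sizes 3 and 2); the isolated point is discarded
```

    Lowering the simulated detection efficiency thins the point cloud, which is exactly what makes small clusters fall below `min_size` and disappear — the effect the study quantifies.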

  7. Magnon condensation and spin superfluidity

    NASA Astrophysics Data System (ADS)

    Bunkov, Yury M.; Safonov, Vladimir L.

    2018-04-01

    We consider the Bose-Einstein condensation (BEC) of quasi-equilibrium magnons, which leads to spin superfluidity, the coherent quantum transfer of magnetization in magnetic materials. The critical conditions for excited magnon density in ferro- and antiferromagnets, in bulk and in thin films, are estimated and discussed. We demonstrate that only the highly populated region of the spectrum is responsible for the emergence of a BEC. This finding substantially simplifies the theoretical analysis of BEC and is well suited for simulations. It is shown that the conditions for magnon BEC in perpendicularly magnetized YIG thin films are fulfilled at small angles, when signals are treated as excited spin waves. We also predict that magnon BEC should occur in antiferromagnetic hematite at room temperature at a much lower excited magnon density than in ferromagnetic YIG. Bogoliubov's theory of the Bose-Einstein condensate is generalized to the case of multi-particle interactions. The six-magnon repulsive interaction may be responsible for BEC stability in ferro- and antiferromagnets, where the four-magnon interaction is attractive.

  8. The stochastic spectator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardwick, Robert J.; Vennin, Vincent; Wands, David

    We study the stochastic distribution of spectator fields predicted in different slow-roll inflation backgrounds. Spectator fields have a negligible energy density during inflation but may play an important dynamical role later, even giving rise to primordial density perturbations within our observational horizon today. During de Sitter expansion there is an equilibrium solution for the spectator field which is often used to estimate the stochastic distribution during slow-roll inflation. However, slow roll only requires that the Hubble rate varies slowly compared to the Hubble time, while the time taken for the stochastic distribution to evolve to the de Sitter equilibrium solution can be much longer than a Hubble time. We study both chaotic (monomial) and plateau inflaton potentials, with quadratic, quartic and axionic spectator fields. We give an adiabaticity condition for the spectator field distribution to relax to the de Sitter equilibrium, and find that the de Sitter approximation is never a reliable estimate for the typical distribution at the end of inflation for a quadratic spectator during monomial inflation. The existence of an adiabatic regime at early times can erase the dependence of the final distribution of field values on the initial conditions. In these cases, spectator fields acquire sub-Planckian expectation values. Otherwise spectator fields may acquire much larger field displacements than suggested by the de Sitter equilibrium solution. We quantify the information about initial conditions that can be obtained from the final field distribution. Our results may have important consequences for the viability of spectator models for the origin of structure, such as the simplest curvaton models.

  9. Material properties of Pacific hake, Humboldt squid, and two species of myctophids in the California Current.

    PubMed

    Becker, Kaylyn N; Warren, Joseph D

    2015-05-01

    Material properties of the flesh from three fish species (Merluccius productus, Symbolophorus californiensis, and Diaphus theta), and several body parts of the Humboldt squid (Dosidicus gigas) collected from the California Current ecosystem were measured. The density contrast relative to seawater varied within and among taxa for fish flesh (0.9919-1.036), squid soft body parts (mantle, arms, tentacle, braincase, eyes; 1.009-1.057), and squid hard body parts (beak and pen; 1.085-1.459). Effects of animal length and environmental conditions on nekton density contrast were investigated. The sound speed contrast relative to seawater varied within and among taxa for fish flesh (0.986-1.027) and Humboldt squid mantle and braincase (0.937-1.028). Material properties in this study are similar to values from previous studies on species with similar life histories. In general, the sound speed and density of soft body parts of fish and squid were 1%-3% and 1%-6%, respectively, greater than the surrounding seawater. Hard parts of the squid were significantly more dense (6%-46%) than seawater. The material properties reported here can be used to improve target strength estimates from acoustic scattering models, which could increase the accuracy of biomass estimates from acoustic surveys for these nekton.

  10. Power density measurements to optimize AC plasma jet operation in blood coagulation.

    PubMed

    Ahmed, Kamal M; Eldeighdye, Shaimaa M; Allam, Tarek M; Hassanin, Walaa F

    2018-06-14

    In this paper, the plasma power density and corresponding plasma dose of a low-cost air non-thermal plasma jet (ANPJ) device are estimated at different axial distances from the nozzle. The estimation is achieved by measuring the voltage and current at the substrate using diagnostics that are easily built in the laboratory: a thin wire and a dielectric probe, respectively. The device uses compressed air as the input gas instead of the relatively expensive, large and heavy tanks of Ar or He. The calculated plasma dose is found to be very low, which allows the device to be used in biomedical applications, especially blood coagulation. While plasma active species and charged particles are found to be the most effective agents in blood coagulation, neither air flow nor UV alone has any effect. Moreover, optimal conditions for accelerating blood coagulation are studied. Results show that the power density at the substrate decreases with increasing distance from the nozzle, and that both the distance from the nozzle and the air flow rate play an important role in accelerating the blood coagulation process. Finally, the device is efficient, small, safe and inexpensive, and hence has the potential for widespread use in first aid and in ambulances.

  11. Evidence for top-heavy stellar initial mass functions with increasing density and decreasing metallicity

    NASA Astrophysics Data System (ADS)

    Marks, Michael; Kroupa, Pavel; Dabringhausen, Jörg; Pawlowski, Marcel S.

    2012-05-01

    Residual-gas expulsion after cluster formation has recently been shown to leave an imprint in the low-mass present-day stellar mass function (PDMF) which allowed the estimation of birth conditions of some Galactic globular clusters (GCs) such as mass, radius and star formation efficiency. We show that in order to explain their characteristics (masses, radii, metallicity and PDMF) their stellar initial mass function (IMF) must have been top heavy. It is found that the IMF is required to become more top heavy the lower the cluster metallicity and the larger the pre-GC cloud-core density are. The deduced trends are in qualitative agreement with theoretical expectation. The results are consistent with estimates of the shape of the high-mass end of the IMF in the Arches cluster, Westerlund 1, R136 and NGC 3603, as well as with the IMF independently constrained for ultra-compact dwarf galaxies (UCDs). The latter suggests that GCs and UCDs might have formed along the same channel or that UCDs formed via mergers of GCs. A Fundamental Plane is found which describes the variation of the IMF with density and metallicity of the pre-GC cloud cores. The implications for the evolution of galaxies and chemical enrichment over cosmological times are expected to be major.

  12. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
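
    For readers unfamiliar with the traditional calibration the paper argues against, the classical step is: fit a standard curve of assay signal against known log-density, then invert it for unknown samples. A minimal sketch with invented numbers (the Bayesian mixed-model approach replaces this single shared curve with per-assay curves that pool information; only the classical step is shown):

```python
import numpy as np

# Traditional statistical calibration, sketched: fit a straight line of
# assay signal against log10(known pathogen density) from standards,
# then invert it to estimate unknown densities. All values are invented.

log10_density = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # standards
signal = np.array([5.2, 10.1, 14.8, 20.3, 24.9])       # measured response

slope, intercept = np.polyfit(log10_density, signal, 1)

def estimate_density(obs_signal):
    """Invert the calibration line: signal -> estimated density."""
    return 10 ** ((obs_signal - intercept) / slope)

est = estimate_density(12.5)   # a sample of unknown density
```

    The weakness the paper highlights is visible here: a new assay run with a slightly shifted standard curve would invert through a different line, producing systematically different density estimates for the same sample.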

  13. Flocculation kinetics and aggregate structure of kaolinite mixtures in laminar tube flow.

    PubMed

    Vaezi G, Farid; Sanders, R Sean; Masliyah, Jacob H

    2011-03-01

    Flocculation is commonly used in various solid-liquid separation processes in chemical and mineral industries to separate desired products or to treat waste streams. This paper presents an experimental technique to study flocculation processes in laminar tube flow. This approach allows for more realistic estimation of the shear rate to which an aggregate is exposed, as compared to more complicated shear fields (e.g. stirred tanks). A direct sampling method is used to minimize the effect of sampling on the aggregate structure. A combination of aggregate settling velocity and image analysis was used to quantify the structure of the aggregate. Aggregate size, density, and fractal dimension were found to be the most important aggregate structural parameters. The two methods used to determine aggregate fractal dimension were in good agreement. The effects of advective flow through an aggregate's porous structure and transition-regime drag coefficient on the evaluation of aggregate density were considered. The technique was applied to investigate the flocculation kinetics and the evolution of the aggregate structure of kaolin particles with an anionic flocculant under conditions similar to those of oil sands fine tailings. Aggregates were formed using a well controlled two-stage aggregation process. Detailed statistical analysis was performed to investigate the establishment of dynamic equilibrium condition in terms of aggregate size and density evolution. An equilibrium steady state condition was obtained within 90 s of the start of flocculation, after which no further change in aggregate structure was observed. Although longer flocculation times inside the shear field could conceivably cause aggregate structure conformation, statistical analysis indicated that this did not occur for the studied conditions. The results show that the technique and experimental conditions employed here produce aggregates having a well-defined, reproducible structure. Copyright © 2011. Published by Elsevier Inc.

  14. Using spatiotemporal statistical models to estimate animal abundance and infer ecological dynamics from survey counts

    USGS Publications Warehouse

    Conn, Paul B.; Johnson, Devin S.; Ver Hoef, Jay M.; Hooten, Mevin B.; London, Joshua M.; Boveng, Peter L.

    2015-01-01

    Ecologists often fit models to survey data to estimate and explain variation in animal abundance. Such models typically require that animal density remains constant across the landscape where sampling is being conducted, a potentially problematic assumption for animals inhabiting dynamic landscapes or otherwise exhibiting considerable spatiotemporal variation in density. We review several concepts from the burgeoning literature on spatiotemporal statistical models, including the nature of the temporal structure (i.e., descriptive or dynamical) and strategies for dimension reduction to promote computational tractability. We also review several features as they specifically relate to abundance estimation, including boundary conditions, population closure, choice of link function, and extrapolation of predicted relationships to unsampled areas. We then compare a suite of novel and existing spatiotemporal hierarchical models for animal count data that permit animal density to vary over space and time, including formulations motivated by resource selection and allowing for closed populations. We gauge the relative performance (bias, precision, computational demands) of alternative spatiotemporal models when confronted with simulated and real data sets from dynamic animal populations. For the latter, we analyze spotted seal (Phoca largha) counts from an aerial survey of the Bering Sea where the quantity and quality of suitable habitat (sea ice) changed dramatically while surveys were being conducted. Simulation analyses suggested that multiple types of spatiotemporal models provide reasonable inference (low positive bias, high precision) about animal abundance, but have potential for overestimating precision. Analysis of spotted seal data indicated that several model formulations, including those based on a log-Gaussian Cox process, had a tendency to overestimate abundance. 
By contrast, a model that included a population closure assumption and a scale prior on total abundance produced estimates that largely conformed to our a priori expectation. Although care must be taken to tailor models to match the study population and survey data available, we argue that hierarchical spatiotemporal statistical models represent a powerful way forward for estimating abundance and explaining variation in the distribution of dynamical populations.

  15. Helium as a Dynamical Tracer in the Thermosphere

    NASA Astrophysics Data System (ADS)

    Thayer, J. P.; Liu, X.; Wang, W.; Burns, A. G.

    2014-12-01

    Helium has been a missing constituent in current thermosphere general circulation models. Although typically a minor gas relative to the more abundant major gases, its unique properties of being chemically inert and light make it an excellent tracer of thermosphere dynamics. Studying helium can help simplify the understanding of transport effects. This understanding can then be projected onto other gases whose overall structure and behavior are complex but whose transport dependencies can be evaluated by contrasting with helium. The dynamical influences on composition affect estimates of thermosphere mass density, to which helium during solar minima can contribute directly, as well as ionosphere electron density. Furthermore, helium in the upper thermosphere during solar minima has not been observed since the 1976 minimum. Indirect estimates of helium in the upper thermosphere during the recent extreme solar minimum indicate that wintertime helium concentrations exceeded NRL-MSISE00 estimates by 30%-70% during periods of quiet geomagnetic activity. For times of active geomagnetic conditions, helium concentrations near ~450 km altitude are estimated to decrease while oxygen concentrations increase. An investigation of the altitude structure of storm-time perturbations in thermosphere mass density reveals the important effects of composition change, with the maximum perturbation occurring near the He/O transition region and a much weaker maximum near the O/N2 transition region. However, evaluating helium behavior and its role as a dynamical tracer is not straightforward, and model development is necessary to adequately establish the connection to specific dynamical processes. Fortunately, recent efforts have led to the implementation of helium modules in the NCAR TIEGCM and TIME-GCM.
In this invited talk, the simulated helium behavior and structure will be shown to reproduce observations (such as the wintertime helium bulge and storm-time response) and its utility as a dynamical tracer of thermosphere dynamics will be elucidated.

  16. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models; the generation of probability densities for input random variables is an essential step in simulation analysis and stochastic optimization. We adopt a constrained maximum-likelihood approach.

  17. [Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].

    PubMed

    Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong

    2015-11-01

    With the fast development of remote sensing technology, combining forest inventory sample plot data and remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes the improvement of forest carbon density mapping, especially when low-spatial-resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraints, and nonlinear spectral mixture analysis, were compared to derive the fractions of different land use and land cover (LULC) types. A sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was then employed to estimate the forest carbon density of Hunan Province. Results showed that 1) constrained linear spectral mixture analysis estimated the fractions of LULC types more accurately (mean RMSE 0.002) than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model with the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t·hm⁻², ranging from 0.00 to 67.35 t·hm⁻². This implies that spectral mixture analysis has great potential to increase the accuracy of forest carbon density estimates at regional and global levels.
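
    The constrained linear spectral mixture analysis referred to above models each pixel spectrum as a non-negative, sum-to-one combination of endmember spectra. A minimal sketch with invented 4-band endmember spectra (a common way to impose the sum-to-one constraint is a heavily weighted extra row in a non-negative least squares system):

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, weight=1000.0):
    """Fully constrained linear spectral unmixing.

    pixel      -- observed spectrum, shape (bands,)
    endmembers -- pure-class spectra, shape (bands, classes)
    Returns fractions >= 0 that approximately sum to one, enforced by
    appending a heavily weighted sum-to-one row to the NNLS system.
    """
    bands, classes = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, classes))])
    b = np.concatenate([pixel, [weight]])
    fractions, _ = nnls(A, b)
    return fractions

# Hypothetical 4-band reflectances for three LULC endmembers
# (columns: forest, cropland, water)
E = np.array([[0.05, 0.10, 0.30],
              [0.08, 0.15, 0.20],
              [0.40, 0.25, 0.05],
              [0.35, 0.30, 0.02]])
truth = np.array([0.6, 0.3, 0.1])
mixed = E @ truth          # a perfectly mixed pixel
f = unmix(mixed, E)
# → fractions close to (0.6, 0.3, 0.1)
```

    Fraction images produced this way per pixel are what the study feeds into the sequential Gaussian co-simulation step.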

  18. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images, and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings, and in most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result due to convergence problems. The two tested models (Double Pareto and Inverse Gamma) gave very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases.
A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
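
    The three estimation approaches the tool implements can be sketched on synthetic landslide areas; this is an illustrative Python analogue, not the authors' R code (the Inverse Gamma parameters below are invented):

```python
import numpy as np
from scipy import stats

# Synthetic "landslide areas" in km^2, drawn from an Inverse Gamma --
# one of the two models the tool compares (parameters are invented).
areas = stats.invgamma.rvs(a=1.4, scale=1e-3, size=500, random_state=42)

# (i) Histogram Density Estimation (HDE)
hist, edges = np.histogram(areas, bins=50, density=True)

# (ii) Kernel Density Estimation (KDE), applied on log-areas
# because landslide areas typically span several orders of magnitude
kde = stats.gaussian_kde(np.log10(areas))

# (iii) Maximum Likelihood Estimation (MLE) of an Inverse Gamma model,
# with the location fixed at zero (areas are positive)
a_hat, loc_hat, scale_hat = stats.invgamma.fit(areas, floc=0)
```

    The convergence failures the abstract mentions for MLE on some datasets correspond to the numerical optimizer underlying `fit` failing to settle; HDE and KDE have no such optimization step, which is why they behave more robustly.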

  19. Evidence of Temporal Variation of Titan Atmospheric Density in 2005-2013

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.; Lim, Ryan S.

    2013-01-01

    One major science objective of the Cassini mission is an investigation of Titan's atmospheric constituent abundances. Titan's atmospheric density is of interest not only to planetary scientists but also to mission design and mission control engineers. Knowledge of the dependence of Titan's atmospheric density on altitude is important because any unexpectedly high atmospheric density has the potential to tumble the spacecraft during a flyby. During low-altitude Titan flybys, thrusters are fired to counter the torque imparted on the spacecraft by the Titan atmosphere: the denser the atmosphere, the higher the duty cycles of the thruster firings. Thruster firing telemetry data can therefore be used to estimate the atmospheric torque imparted on the spacecraft and, since this torque is related to Titan's atmospheric density, to estimate atmospheric densities. In 2005-2013, forty-three low-altitude Titan flybys were executed, with closest approach altitudes ranging from 878 to 1,074.8 km. Our density results are also compared with those reported by other investigation teams: Voyager-1 (in November 1980) and the Huygens Atmospheric Structure Instrument, HASI (in January 2005). From our results, we observe a temporal variation of the Titan atmospheric density in 2005-2013. The observed temporal variation is significant and is not due to the estimation uncertainty (5.8%, 1 sigma) of the density estimation methodology. Factors contributing to this temporal variation have been conjectured but are largely unknown; resolving them will require synergistic analysis with measurements made by other Cassini science instruments and future years of laboratory and modeling efforts. The estimated atmospheric density results given in this paper help scientists to better understand and model the density structure of the Titan atmosphere.

  20. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
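
    In one dimension with a single sigmoid unit, the scheme described above reduces to maximizing the output entropy E[log |dy/dx|]; the trained unit's derivative is then the density estimate (equivalently, a maximum-likelihood logistic fit). A minimal sketch under that 1-D simplification (data, learning rate, and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=2000)   # data whose density we estimate

# One sigmoid unit y = s(w*x + b), s(z) = 1/(1+exp(-z)). Maximizing
# E[log |dy/dx|] = log w + E[log s(z)] + E[log(1 - s(z))]
# drives y toward uniformity, so p_hat(x) = dy/dx estimates the density.
w, b = 0.5, 0.0
lr = 0.05
for _ in range(3000):
    z = w * x + b
    s = 1.0 / (1.0 + np.exp(-z))
    grad_w = 1.0 / w + np.mean((1.0 - 2.0 * s) * x)  # d/dw of objective
    grad_b = np.mean(1.0 - 2.0 * s)                   # d/db of objective
    w += lr * grad_w
    b += lr * grad_b

def p_hat(t):
    """Density estimate: derivative of the trained sigmoid output."""
    s = 1.0 / (1.0 + np.exp(-(w * t + b)))
    return w * s * (1.0 - s)
```

    For the standard normal data above, the estimate peaks near zero at roughly the right height; a network of many units, as in the paper, can shape far more general multimodal densities.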

  1. Exact and Approximate Statistical Inference for Nonlinear Regression and the Estimating Equation Approach.

    PubMed

    Demidenko, Eugene

    2017-09-01

    The exact density distribution of the nonlinear least squares estimator in the one-parameter regression model is derived in closed form and expressed through the cumulative distribution function of the standard normal variable. Several proposals to generalize this result are discussed. The exact density is extended to the estimating equation (EE) approach and the nonlinear regression with an arbitrary number of linear parameters and one intrinsically nonlinear parameter. For a very special nonlinear regression model, the derived density coincides with the distribution of the ratio of two normally distributed random variables previously obtained by Fieller (1932), unlike other approximations previously suggested by other authors. Approximations to the density of the EE estimators are discussed in the multivariate case. Numerical complications associated with the nonlinear least squares are illustrated, such as nonexistence and/or multiple solutions, as major factors contributing to poor density approximation. The nonlinear Markov-Gauss theorem is formulated based on the near exact EE density approximation.
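
    The connection to Fieller (1932) can be illustrated numerically: the density of a ratio of two independent normals is the one-dimensional integral f_Z(z) = ∫ |y| f_X(zy) f_Y(y) dy, which can be checked against Monte Carlo. A sketch with invented parameter values:

```python
import numpy as np
from scipy import integrate, stats

def ratio_density(z, mu_x, sig_x, mu_y, sig_y):
    """Density of Z = X/Y for independent X ~ N(mu_x, sig_x^2) and
    Y ~ N(mu_y, sig_y^2):  f_Z(z) = integral of |y| f_X(z*y) f_Y(y) dy."""
    integrand = lambda y: (abs(y) * stats.norm.pdf(z * y, mu_x, sig_x)
                                  * stats.norm.pdf(y, mu_y, sig_y))
    # Finite limits wide enough that both normal factors are negligible outside
    lo, hi = mu_y - 12 * sig_y, mu_y + 12 * sig_y
    val, _ = integrate.quad(integrand, lo, hi)
    return val

# Check against Monte Carlo for the (invented) case X ~ N(2,1), Y ~ N(5,1)
rng = np.random.default_rng(1)
samples = rng.normal(2, 1, 200_000) / rng.normal(5, 1, 200_000)
exact = ratio_density(0.4, 2, 1, 5, 1)                      # near the mode 2/5
mc = np.mean(np.abs(samples - 0.4) < 0.01) / 0.02           # density in a bin
```

    The agreement between `exact` and `mc` is the sense in which the closed-form ratio distribution "coincides" with the derived density for the special regression model discussed above.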

  2. Geographical Distribution of Woody Biomass Carbon in Tropical Africa: An Updated Database for 2000 (NDP-055.2007, NDP-055b)

    DOE Data Explorer

    Gibbs, Holly K. [Center for Sustainability and the Global Environment (SAGE), University of Wisconsin, Madison, WI (USA); Brown, Sandra [Winrock International, Arlington, VA (USA); Olsen, L. M. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA); Boden, Thomas A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA)

    2007-09-01

    Maps of biomass density are critical inputs for estimating carbon emissions from deforestation and degradation of tropical forests. Brown and Gaston (1996) pioneered methods using GIS analysis to map forest biomass based on forest inventory data (ndp055). This database is an update of ndp055 (which represented conditions circa 1980) and accounts for land cover changes occurring up to the year 2000.

  3. Tailoring point counts for inference about avian density: dealing with nondetection and availability

    USGS Publications Warehouse

    Johnson, Fred A.; Dorazio, Robert M.; Castellón, Traci D.; Martin, Julien; Garcia, Jay O.; Nichols, James D.

    2014-01-01

    Point counts are commonly used for bird surveys, but interpretation is ambiguous unless there is an accounting for the imperfect detection of individuals. We show how repeated point counts, supplemented by observation distances, can account for two aspects of the counting process: (1) detection of birds conditional on being available for observation and (2) the availability of birds for detection given presence. We propose a hierarchical model that permits the radius in which birds are available for detection to vary with forest stand age (or other relevant habitat features), so that the number of birds available at each location is described by a Poisson-gamma mixture. Conditional on availability, the number of birds detected at each location is modeled by a beta-binomial distribution. We fit this model to repeated point count data of Florida scrub-jays and found evidence that the area in which birds were available for detection decreased with increasing stand age. Estimated density was 0.083 (95% CI: 0.060–0.113) scrub-jays/ha. Point counts of birds have a number of appealing features. Based on our findings, however, an accounting for both components of the counting process may be necessary to ensure that abundance estimates are comparable across time and space. Our approach could easily be adapted to other species and habitats.
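
    The hierarchy described above (Poisson-gamma availability, beta-binomial detection) is easy to simulate, which is useful for checking estimators. A minimal sketch with invented parameter values, not the authors' fitting code:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate the described hierarchy (all parameter values are invented):
#   lambda_i ~ Gamma, N_i | lambda_i ~ Poisson   (birds available at site i)
#   p_ij ~ Beta, y_ij | N_i, p_ij ~ Binomial     (birds detected on visit j)
n_sites, n_visits = 200, 3
lam = rng.gamma(shape=2.0, scale=1.5, size=n_sites)   # Poisson-gamma mixture
N = rng.poisson(lam)                                   # available birds
p = rng.beta(4.0, 6.0, size=(n_sites, n_visits))       # beta-binomial detection
y = rng.binomial(N[:, None], p)                        # repeated counts

# Sanity check using the simulation's latent p (known only in simulation):
# mean count / mean detection probability should recover mean availability
naive_N_hat = y.mean() / p.mean()
```

    In real data the latent N and p must be estimated jointly from the repeated counts, which is exactly what the hierarchical model above does.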

  4. Infrared Spectra and Band Strengths of CH3SH, an Interstellar Molecule

    NASA Technical Reports Server (NTRS)

    Hudson, R. L.

    2016-01-01

    Three solid phases of CH3SH (methanethiol or methyl mercaptan) have been prepared and their mid-infrared spectra recorded at 10-110 degrees Kelvin, with an emphasis on the 17-100 degrees Kelvin region. Refractive indices have been measured at two temperatures and used to estimate ice densities and infrared band strengths. Vapor pressures for the two crystalline phases of CH3SH at 110 degrees Kelvin are estimated. The behavior of amorphous CH3SH on warming is presented and discussed in terms of Ostwald's step rule. Comparisons to CH3OH under similar conditions are made, and some inconsistencies and ambiguities in the CH3SH literature are examined and corrected.

  5. Spectroscopic results in helium from the NASA Lewis Bumpy Torus plasma. [ion heating by Penning discharge in confinement geometry

    NASA Technical Reports Server (NTRS)

    Richardson, R. W.

    1974-01-01

    Spectroscopic measurements were carried out on the NASA Lewis Bumpy Torus experiment in which a steady state ion heating method based on the modified Penning discharge is applied in a bumpy torus confinement geometry. Electron temperatures in pure helium are measured from the ratio of spectral line intensities. Measured electron temperatures range from 10 to 100 eV. Relative electron densities are also measured over the range of operating conditions. Radial profiles of temperature and relative density are measured in the two basic modes of operation of the device called the low and high pressure modes. The electron temperatures are used to estimate particle confinement times based on a steady state particle balance.

  6. LCOE Baseline for OE Buoy

    DOE Data Explorer

    Previsic, Mirko; Karthikeyan, Anantha; Lewis, Tony; McCarthy, John

    2017-07-26

    Capex numbers are in $/kW, Opex numbers in $/kW-yr. The cost estimates provided herein are based on concept design and basic engineering data and carry a high level of embedded uncertainty. This reference economic scenario was developed for a very large device version of the OE Buoy technology, which is not presently on Ocean Energy's technology development pathway but will be considered in future business plan development. The DOE reference site condition is considered a low power-density site compared with many of the planned initial deployment locations for the OE Buoy. Many of the sites considered for the initial commercial deployment of the OE Buoy feature much higher wave power densities and shorter-period waves. Both of these characteristics will improve the OE Buoy's commercial viability.

  7. Tailoring magnetic properties of self-biased hexaferrites using an alternative copolymer of isobutylene and maleic anhydride

    NASA Astrophysics Data System (ADS)

    Wu, Chuanjian; Yu, Zhong; Sokolov, Alexander S.; Yu, Chengju; Sun, Ke; Jiang, Xiaona; Lan, Zhongwen; Harris, Vincent G.

    2018-05-01

    Discussed is a novel self-biased hexaferrite gelling system based on a nontoxic and water-soluble copolymer of isobutylene and maleic anhydride. This copolymer simultaneously acts as a dispersant and a gelling agent, and has recently received much attention from the ceramics community. Herein, its effects on the rheological conditions throughout magnetic-field pressing, and consequently on the orientation, density, and magnetic properties of textured hexaferrites, were investigated. Ka-band FMR linewidths were measured, and the crystalline-anisotropy- and porosity-induced linewidth broadening was estimated according to Schlömann's theory. The copolymer reduced the friction between micron-sized magnetic particulates, resulting in higher density and degree of crystalline orientation, and a lower FMR linewidth.

  8. On estimation of time-dependent attributable fraction from population-based case-control studies.

    PubMed

    Zhao, Wei; Chen, Ying Qing; Hsu, Li

    2017-09-01

    Population attributable fraction (PAF) is widely used to quantify the disease burden associated with a modifiable exposure in a population. It has been extended to a time-varying measure that provides additional information on when and how the exposure's impact varies over time for cohort studies. However, there is no estimation procedure for PAF using data that are collected from population-based case-control studies, which, because of time and cost efficiency, are commonly used for studying genetic and environmental risk factors of disease incidences. In this article, we show that time-varying PAF is identifiable from a case-control study and develop a novel estimator of PAF. Our estimator combines odds ratio estimates from logistic regression models and density estimates of the risk factor distribution conditional on failure times in cases from a kernel smoother. The proposed estimator is shown to be consistent and asymptotically normal with asymptotic variance that can be estimated empirically from the data. Simulation studies demonstrate that the proposed estimator performs well in finite sample sizes. Finally, the method is illustrated by a population-based case-control study of colorectal cancer. © 2017, The International Biometric Society.
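
The paper's time-varying estimator combines logistic-regression odds ratios with a kernel-smoothed density of the risk factor conditional on failure times. As a simpler, static illustration of the same idea, the classical case-based (Miettinen) formula computes PAF directly from a 2×2 case-control table; all counts below are hypothetical.

```python
# Illustrative case-control data (hypothetical counts, not from the paper)
a, b = 120, 80     # cases: exposed, unexposed
c, d = 60, 140     # controls: exposed, unexposed

# Odds ratio from the 2x2 table; for a rare disease, OR approximates RR
odds_ratio = (a * d) / (b * c)
p_case_exposed = a / (a + b)

# Miettinen's case-based formula: PAF = P(exposed | case) * (OR - 1) / OR
paf = p_case_exposed * (odds_ratio - 1) / odds_ratio
print(round(paf, 3))   # -> 0.429
```

The time-varying version replaces the single exposure prevalence among cases with a kernel density estimate of the exposure distribution at each failure time, which is the novelty of the proposed estimator.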

  9. REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negrete, C. Alenka; Dultzin, Deborah; Marziani, Paola

    2013-07-01

    Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR), this yields the black hole mass M_BH. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r_BLR measures (the so-called Kaspi relation, involving about 60 low-z sources). We are exploring a different method for estimating r_BLR directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r_BLR estimates that come from our method and from reverberation mapping. Our "photoionization" method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 Å (Al III λ1860/Si III] λ1892, C IV λ1549/Al III λ1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r_BLR and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r_BLR values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M_BH for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the widths of the UV intermediate emission lines are consistent with the width of Hβ, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.

  10. It’s what’s inside that counts: Egg contaminant concentrations are influenced by estimates of egg density, egg volume, and fresh egg mass

    USGS Publications Warehouse

    Herzog, Mark; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Hartman, Christopher

    2016-01-01

    In egg contaminant studies, it is necessary to calculate egg contaminant concentrations on a fresh wet weight basis, and this requires accurate estimates of egg density and egg volume. We show that the inclusion or exclusion of the eggshell can influence egg contaminant concentrations, and we provide estimates of egg density (both with and without the eggshell) and egg-shape coefficients (used to estimate egg volume from egg morphometrics) for American avocet (Recurvirostra americana), black-necked stilt (Himantopus mexicanus), and Forster's tern (Sterna forsteri). Egg densities (g/cm³) estimated for whole eggs (1.056 ± 0.003) were higher than egg densities estimated for egg contents (1.024 ± 0.001), and were 1.059 ± 0.001 and 1.025 ± 0.001 for avocets, 1.056 ± 0.001 and 1.023 ± 0.001 for stilts, and 1.053 ± 0.002 and 1.025 ± 0.002 for terns. The egg-shape coefficients for egg volume (Kv) and egg mass (Kw) also differed depending on whether the eggshell was included (Kv = 0.491 ± 0.001; Kw = 0.518 ± 0.001) or excluded (Kv = 0.493 ± 0.001; Kw = 0.505 ± 0.001), and varied among species. Although egg contaminant concentrations are rarely meant to include the eggshell, we show that the typical inclusion of the eggshell in egg density and egg volume estimates results in egg contaminant concentrations being underestimated by 6–13%. Our results demonstrate that the inclusion of the eggshell significantly influences estimates of egg density, egg volume, and fresh egg mass, which leads to egg contaminant concentrations that are biased low. We suggest that egg contaminant concentrations be calculated on a fresh wet weight basis using only internal egg-content densities, volumes, and masses appropriate for the species. For the three waterbirds in our study, these corrected coefficients are 1.024 ± 0.001 for egg density, 0.493 ± 0.001 for Kv, and 0.505 ± 0.001 for Kw.
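
Assuming Hoyt-style allometry (volume and fresh mass proportional to length × breadth², a standard convention in egg morphometrics, though the exact form is not spelled out in the abstract), the corrected egg-content coefficients reported above can be applied as follows; the morphometrics and mercury burden are hypothetical.

```python
# Hypothetical egg morphometrics: length L and maximum breadth B, in cm
L, B = 4.5, 3.1

# Corrected egg-content coefficients from the abstract (eggshell excluded)
K_v, K_w, rho = 0.493, 0.505, 1.024   # rho = egg-content density, g/cm^3

volume_cm3 = K_v * L * B**2           # Hoyt-style egg volume estimate
fresh_mass_g = K_w * L * B**2         # fresh egg-content mass

# Internal consistency check: K_w / K_v should approximate the content density
assert abs(K_w / K_v - rho) < 0.001

# A hypothetical contaminant burden expressed on a fresh wet weight basis
hg_total_ng = 850.0
conc_ng_per_g_fww = hg_total_ng / fresh_mass_g
print(round(conc_ng_per_g_fww, 1))    # -> 38.9
```

Using the whole-egg coefficients here instead (larger Kv and Kw, including the shell) inflates the denominator, which is exactly the 6–13% low bias the study describes.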

  11. Volume-translated cubic EoS and PC-SAFT density models and a free volume-based viscosity model for hydrocarbons at extreme temperature and pressure conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgess, Ward A.; Tapriyal, Deepak; Morreale, Bryan D.

    2013-12-01

    This research focuses on providing the petroleum reservoir engineering community with robust models of hydrocarbon density and viscosity at the extreme temperature and pressure conditions (up to 533 K and 276 MPa, respectively) characteristic of ultra-deep reservoirs, such as those associated with the deepwater wells in the Gulf of Mexico. Our strategy is to base the volume-translated (VT) Peng–Robinson (PR) and Soave–Redlich–Kwong (SRK) cubic equations of state (EoSs) and the perturbed-chain statistical associating fluid theory (PC-SAFT) on an extensive database of high-temperature (278–533 K), high-pressure (6.9–276 MPa) density data rather than fitting the models to low-pressure saturated liquid density data. This high-temperature, high-pressure (HTHP) database consists of literature data for hydrocarbons ranging from methane to C40. The three new models developed in this work, HTHP VT-PR EoS, HTHP VT-SRK EoS, and hybrid PC-SAFT, yield mean absolute percent deviation (MAPD) values for HTHP hydrocarbon density of ~2.0%, ~1.5%, and <1.0%, respectively. An effort was also made to provide accurate hydrocarbon viscosity models based on literature data. Viscosity values are estimated with the frictional theory (f-theory) and the free volume (FV) theory of viscosity. The best results were obtained when the PC-SAFT equation was used to obtain both the attractive and repulsive pressure inputs to f-theory and the density input to FV theory. Both viscosity models provide accurate results at pressures to 100 MPa, but experimental and model results can deviate by more than 25% at pressures above 200 MPa.

  12. Physiological considerations in applying laboratory-determined buoyant densities to predictions of bacterial and protozoan transport in groundwater: Results of in-situ and laboratory tests

    USGS Publications Warehouse

    Harvey, R.W.; Metge, D.W.; Kinner, N.; Mayberry, N.

    1997-01-01

    Buoyant densities were determined for groundwater bacteria and microflagellates (protozoa) from a sandy aquifer (Cape Cod, MA) using two methods: (1) density-gradient centrifugation (DGC) and (2) Stokes' law approximations using sedimentation rates observed during natural-gradient injection and recovery tests. The dwarf (average cell size, 0.3 µm), unattached bacteria inhabiting a pristine zone just beneath the water table and a majority (~80%) of the morphologically diverse community of free-living bacteria inhabiting a 5-km-long plume of organically contaminated groundwater had DGC-determined buoyant densities <1.019 g/cm³ before culturing. In the aquifer, sinking rates for the uncultured 2-µm size class of contaminant-plume bacteria were comparable to that of the bromide tracer (1.9 x 10⁻³ M), also suggesting a low buoyant density. Culturing groundwater bacteria resulted in larger (0.8–1.3 µm), less neutrally buoyant (1.043–1.081 g/cm³) cells with potential sedimentation rates up to 64-fold higher than those predicted for the uncultured populations. Although sedimentation generally could be neglected in predicting subsurface transport for the community of free-living groundwater bacteria, it appeared to be important for the cultured isolates, at least until they readapt to aquifer conditions. Culturing-induced alterations in the size of the contaminant-plume microflagellates (2–3 µm) were ameliorated by using a lower-nutrient, acidic (pH 5) porous growth medium. Buoyant densities of the cultured microflagellates were low, i.e., 1.024–1.034 g/cm³ (using the DGC assay) and 1.017–1.039 g/cm³ (estimated from in-situ sedimentation rates), suggesting good potential for subsurface transport under favorable conditions.

  13. Comments on the compatibility of thermodynamic equilibrium conditions with lattice propagators

    NASA Astrophysics Data System (ADS)

    Canfora, Fabrizio; Giacomini, Alex; Pais, Pablo; Rosa, Luigi; Zerwekh, Alfonso

    2016-08-01

    In this paper we analyze the compatibility of the non-perturbative equations of state of quarks and gluons arising from the lattice with some natural requirements for self-gravitating objects at equilibrium: the existence of an equation of state (namely, the possibility to define the pressure as a function of the energy density), the absence of superluminal propagation, and Le Chatelier's principle. It is discussed under which conditions it is possible to extract an equation of state (in the above sense) from the non-perturbative propagators arising from the fits of the latest lattice data. In the quark case, there is a small but non-vanishing range of temperatures in which it is not possible to define a single-valued functional relation between density and pressure. Interestingly enough, a small change (of around 10%) of the parameters appearing in the fit of the lattice quark propagator could guarantee the fulfillment of all three conditions (keeping alive, at the same time, the violation of positivity of the spectral representation, which is the expected signal of confinement). As far as gluons are concerned, the analysis shows very similar results. Whether or not the non-perturbative quark and gluon propagators satisfy these conditions can have a strong impact on the estimate of the maximal mass of quark stars.

  14. Comparison of accelerometer data calibration methods used in thermospheric neutral density estimation

    NASA Astrophysics Data System (ADS)

    Vielberg, Kristin; Forootan, Ehsan; Lück, Christina; Löcher, Anno; Kusche, Jürgen; Börger, Klaus

    2018-05-01

    Ultra-sensitive space-borne accelerometers on board low Earth orbit (LEO) satellites are used to measure the non-gravitational forces acting on the surfaces of these satellites. These forces consist of the Earth radiation pressure, the solar radiation pressure, and the atmospheric drag, where the first two are caused by the radiation emitted from the Earth and the Sun, respectively, and the latter is related to the thermospheric density. On-board accelerometer measurements contain systematic errors, which need to be mitigated by applying a calibration before their use in gravity recovery or thermospheric neutral density estimation. Therefore, we improve, apply, and compare three calibration procedures: (1) a multi-step numerical estimation approach based on the numerical differentiation of the kinematic orbits of LEO satellites; (2) a calibration of accelerometer observations within the dynamic precise orbit determination procedure; and (3) a comparison of observed to modeled forces acting on the surface of LEO satellites. Here, accelerometer measurements obtained by the Gravity Recovery And Climate Experiment (GRACE) are used. Time series of bias and scale factor derived from the three calibration procedures are found to differ on timescales of a few days to months. Results are more similar (statistically significant) when considering longer timescales, over which the results of approaches (1) and (2) show better agreement with those of approach (3) during medium and high solar activity. Calibrated accelerometer observations are then applied to estimate thermospheric neutral densities. Differences between accelerometer-based density estimates and those from empirical neutral density models, e.g., NRLMSISE-00, are found to be significant during quiet periods, on average 22% of the simulated densities (during low solar activity), and up to 28% during high solar activity. Therefore, daily corrections are estimated for neutral densities derived from NRLMSISE-00. Our results indicate that these corrections improve model-based density simulations so as to provide density estimates at locations outside the vicinity of the GRACE satellites, in particular during periods of high solar/magnetic activity, e.g., during the St. Patrick's Day storm on 17 March 2015.
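
A minimal sketch of the bias-and-scale part of such a calibration, assuming a simple linear error model a_obs = scale·a_ref + bias against a modeled reference force (approach (3) in spirit); all signal values are synthetic, not GRACE data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" non-gravitational acceleration (reference from modeled
# forces) and a biased, scaled, noisy accelerometer reading of it
t = np.linspace(0, 86400, 1000)                  # one day of samples (s)
a_ref = 1e-7 * np.sin(2 * np.pi * t / 5400)      # hypothetical signal, m/s^2
true_scale, true_bias = 0.98, 3e-8
a_obs = true_scale * a_ref + true_bias + rng.normal(0, 1e-9, t.size)

# Estimate scale and bias by linear least squares: a_obs ≈ scale*a_ref + bias
A = np.column_stack([a_ref, np.ones_like(a_ref)])
(scale, bias), *_ = np.linalg.lstsq(A, a_obs, rcond=None)

a_cal = (a_obs - bias) / scale                   # calibrated acceleration
print(scale, bias)
```

The real procedures estimate these parameters per axis and per arc, and they can drift over days to months, which is exactly why the abstract compares their time series across the three approaches.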

  15. MPN estimation of qPCR target sequence recoveries from whole cell calibrator samples

    EPA Science Inventory

    DNA extracts from enumerated target organism cells (calibrator samples) have been used for estimating Enterococcus cell equivalent densities in surface waters by a comparative cycle threshold (Ct) qPCR analysis method. To compare surface water Enterococcus density estimates from ...

  16. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied, and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Rényi entropy, which is useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy, which is related to the Friedman–Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
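
For a Gaussian kernel, the quadratic (Rényi order-2) entropy of a kernel density estimate has a closed form: the integral of the squared KDE is the average of Gaussian kernels of variance 2h² evaluated at all pairwise differences. A minimal univariate sketch (the bandwidth and data below are illustrative, not the paper's optimal choices):

```python
import numpy as np

def quadratic_entropy(x, h):
    """Rényi quadratic entropy H2 = -log ∫ p̂(y)² dy for a Gaussian KDE.

    For Gaussian kernels the integral has a closed form: the mean of
    N(x_i - x_j | 0, 2h²) over all pairs (i, j), so no numerical
    integration is needed.
    """
    x = np.asarray(x, dtype=float)
    d = x[:, None] - x[None, :]              # all pairwise differences
    s2 = 2.0 * h * h                         # variance of the convolved kernel
    integral = np.mean(np.exp(-d**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2))
    return -np.log(integral)

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 500)
h2 = quadratic_entropy(x, h=0.3)
print(h2)
```

For standard normal data the population value is log(2√π) ≈ 1.27; the KDE-based estimate is slightly biased by the bandwidth, which is why bandwidth selection matters for FT-index estimation.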

  17. Development of ITER non-activation phase operation scenarios

    DOE PAGES

    Kim, S. H.; Poli, F. M.; Koechl, F.; ...

    2017-06-29

    Non-activation phase operations in ITER in hydrogen (H) and helium (He) will be important for commissioning of tokamak systems, such as diagnostics, heating and current drive (HCD) systems, coils and plasma control systems, and for validation of techniques necessary for establishing operations in DT. The assessment of feasible HCD schemes at various toroidal fields (2.65–5.3 T) has revealed that the previously applied assumptions need to be refined for the ITER non-activation phase H/He operations. A study of the ranges of plasma density and profile shape using the JINTRAC suite of codes has indicated that hydrogen pellet fuelling into He plasmas should be utilized taking the optimization of IC power absorption, the neutral beam shine-through density limit, and H-mode access into account. The EPED1 estimation of the edge pedestal parameters has been extended to various H operation conditions, and the combined EPED1 and SOLPS estimation has provided guidance for modelling the edge pedestal in H/He operations. The availability of ITER HCD schemes, the ranges of achievable plasma density and profile shape, and the estimation of the edge pedestal parameters for H/He plasmas have been integrated into various time-dependent tokamak discharge simulations. In this paper, various H/He scenarios over a wide range of plasma current (7.5–15 MA) and field (2.65–5.3 T) have been developed for the ITER non-activation phase operation, and the sensitivity of the developed scenarios to the assumptions used has been investigated to provide guidance for further development.

  18. Applying complex models to poultry production in the future--economics and biology.

    PubMed

    Talpaz, H; Cohen, M; Fancher, B; Halley, J

    2013-09-01

    The ability to determine the optimal broiler feed nutrient density that maximizes margin over feeding cost (MOFC) has obvious economic value. To determine optimal feed nutrient density, one must consider ingredient prices, meat values, the product mix being marketed, and the projected biological performance. A series of 8 feeding trials was conducted to estimate biological responses to changes in ME and amino acid (AA) density. Eight different genotypes of sex-separate reared broilers were fed diets varying in ME (2,723–3,386 kcal of ME/kg) and AA (0.89–1.65% digestible lysine, with all essential AA being indexed to lysine) levels. Broilers were processed to determine carcass component yield at many different BW (1.09–4.70 kg). The trial data generated were used in a model constructed to discover the dietary levels of ME and AA that maximize MOFC on a per-broiler or per-broiler annualized basis (bird × number of cycles/year). The model was designed to estimate the effects of dietary nutrient concentration on broiler live weight, feed conversion, mortality, and carcass component yield. Estimated coefficients from the step-wise regression process are subsequently used to predict the optimal ME and AA concentrations that maximize MOFC. The effects of changing feed or meat prices across a wide spectrum on optimal ME and AA levels can be evaluated via parametric analysis. The model can rapidly compare both the biological and economic implications of changing from current practice to the simulated optimal solution. The model can be exploited to enhance decision making under volatile market conditions.

  20. The First Estimates of Marbled Cat Pardofelis marmorata Population Density from Bornean Primary and Selectively Logged Forest.

    PubMed

    Hearn, Andrew J; Ross, Joanna; Bernard, Henry; Bakar, Soffian Abu; Hunter, Luke T B; Macdonald, David W

    2016-01-01

    The marbled cat Pardofelis marmorata is a poorly known wild cat that has a broad distribution across much of the Indomalayan ecorealm. This felid is thought to exist at low population densities throughout its range, yet no estimates of its abundance exist, hampering assessment of its conservation status. To investigate the distribution and abundance of marbled cats we conducted intensive, felid-focused camera trap surveys of eight forest areas and two oil palm plantations in Sabah, Malaysian Borneo. Study sites were broadly representative of the range of habitat types and the gradient of anthropogenic disturbance and fragmentation present in contemporary Sabah. We recorded marbled cats from all forest study areas apart from a small, relatively isolated forest patch, although photographic detection frequency varied greatly between areas. No marbled cats were recorded within the plantations, but a single individual was recorded walking along the forest/plantation boundary. We collected sufficient numbers of marbled cat photographic captures at three study areas to permit density estimation based on spatially explicit capture-recapture analyses. Estimates of population density from the primary, lowland Danum Valley Conservation Area and primary upland, Tawau Hills Park, were 19.57 (SD: 8.36) and 7.10 (SD: 1.90) individuals per 100 km2, respectively, and the selectively logged, lowland Tabin Wildlife Reserve yielded an estimated density of 10.45 (SD: 3.38) individuals per 100 km2. The low detection frequencies recorded in our other survey sites and from published studies elsewhere in its range, and the absence of previous density estimates for this felid suggest that our density estimates may be from the higher end of their abundance spectrum. We provide recommendations for future marbled cat survey approaches.

  2. Simulation study of a geometric shape factor technique for estimating earth-emitted radiant flux densities from wide-field-of-view radiation measurements

    NASA Technical Reports Server (NTRS)

    Weaver, W. L.; Green, R. N.

    1980-01-01

    Geometric shape factors were computed and applied to simulated satellite irradiance measurements to estimate Earth-emitted flux densities on global and zonal scales and for areas smaller than the detector field of view (FOV). Wide-field-of-view flat-plate detectors were emphasized, but spherical detectors were also studied. The radiation field was modeled after data from the Nimbus 2 and 3 satellites. At a satellite altitude of 600 km, zonal estimates were in error by 1.0 to 1.2 percent and global estimates were in error by less than 0.2 percent. Estimates with unrestricted-field-of-view (UFOV) detectors were about the same for Lambertian and limb-darkening radiation models. The opposite was found for restricted-field-of-view detectors. The UFOV detectors are found to be poor estimators of flux density from the total FOV and are shown to be much better as estimators of flux density from a circle centered in the FOV with an area significantly smaller than that of the total FOV.

  3. Density variations and their influence on carbon stocks: case-study on two Biosphere Reserves in the Democratic Republic of Congo

    NASA Astrophysics Data System (ADS)

    De Ridder, Maaike; De Haulleville, Thalès; Kearsley, Elizabeth; Van den Bulcke, Jan; Van Acker, Joris; Beeckman, Hans

    2014-05-01

    It is commonly acknowledged that allometric equations for aboveground biomass and carbon stock estimates are improved significantly if wood density is included as a variable. However, not much attention is given to this variable in terms of exact, measured values and density profiles from pith to bark. Most published case studies obtain density values from literature sources or databases, thereby using wide ranges of density values and possibly causing significant errors in carbon stock estimates. The use of a single fixed value for density is also not recommended if carbon stock increments are estimated. Therefore, our objective is to measure and analyze a large number of tree species occurring in two Biosphere Reserves (Luki and Yangambi). Nevertheless, the diversity of tree species in these tropical forests is too high to perform this kind of detailed analysis on all tree species (> 200/ha). We therefore focus on the most frequently encountered tree species with high abundance (trees/ha) and dominance (basal area/ha). Increment cores were scanned with a helical X-ray protocol to obtain density profiles from pith to bark. In this way, we aim at dividing the tree species into separate groups, each with a distinct type of density profile. If, e.g., slopes in density values from pith to bark remain stable over larger samples of one tree species, this slope could also be used to correct for errors in carbon (increment) estimates caused by density values from simplified density measurements or from the literature. In summary, this is most likely the first study in the Congo Basin that focuses on density patterns in order to assess their influence on carbon stocks and on differences in carbon stocking based on species composition (density profiles ~ temperament of tree species).

  4. Planetary Probe Entry Atmosphere Estimation Using Synthetic Air Data System

    NASA Technical Reports Server (NTRS)

    Karlgaard, Chris; Schoenenberger, Mark

    2017-01-01

    This paper develops an atmospheric state estimator based on inertial acceleration and angular rate measurements combined with an assumed vehicle aerodynamic model. The approach utilizes the full navigation state of the vehicle (position, velocity, and attitude) to recast the vehicle aerodynamic model to be a function solely of the atmospheric state (density, pressure, and winds). Force and moment measurements are based on vehicle sensed accelerations and angular rates. These measurements are combined with an aerodynamic model and a Kalman-Schmidt filter to estimate the atmospheric conditions. The new method is applied to data from the Mars Science Laboratory mission, which landed the Curiosity rover on the surface of Mars in August 2012. The results of the new estimation algorithm are compared with results from a Flush Air Data Sensing algorithm based on onboard pressure measurements on the vehicle forebody. The comparison indicates that the new proposed estimation method provides estimates consistent with the air data measurements, without the use of pressure measurements. Implications for future missions such as the Mars 2020 entry capsule are described.
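
At the core of any such estimator is inverting the drag relation for density once the navigation state and aerodynamic model are known. A one-line sketch under the standard drag model (the vehicle numbers below are hypothetical, not MSL values):

```python
# Hypothetical entry-vehicle state and sensed axial deceleration
m = 2400.0        # vehicle mass, kg
C_A = 1.45        # axial force coefficient (assumed known from the aero model)
A_ref = 15.9      # aerodynamic reference area, m^2
v = 3200.0        # planet-relative velocity from the navigation state, m/s
a_axial = 12.5    # sensed axial deceleration from the IMU, m/s^2

# Drag model: m * a = 0.5 * rho * v^2 * C_A * A_ref  ->  solve for density
rho = 2.0 * m * a_axial / (C_A * A_ref * v**2)
print(rho)        # atmospheric density, kg/m^3
```

The paper's Kalman-Schmidt filter generalizes this point inversion: it carries density, pressure, and winds as states and accounts for uncertainty in the aerodynamic model rather than treating C_A as exact.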

  5. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and mean square error are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and the specific form that minimizes mean square error is determined under varying assumptions about the true probability density function of the sampled data. An extension to line-transect sampling is given.
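The bias-reduced form is specific to the paper, but the underlying idea of estimating density from ordered (k-th nearest) distances is easy to illustrate. The sketch below uses the classic pooled point-to-k-th-nearest-object estimator, which is unbiased under complete spatial randomness; the function name and example distances are illustrative, not from the paper.

```python
import math

def density_from_kth_distances(distances, k):
    """Pooled point-to-k-th-nearest-plant density estimator.

    Under complete spatial randomness, pi * sum(r_i**2) over n random
    sample points is Gamma(n*k, 1/D)-distributed, so
    (n*k - 1) / (pi * sum(r_i**2)) is an unbiased estimator of density D.
    """
    n = len(distances)
    if n * k < 2:
        raise ValueError("need n*k >= 2 for a finite-variance estimate")
    return (n * k - 1) / (math.pi * sum(r * r for r in distances))

# Example: distances (m) from 5 random points to their 3rd-nearest plant
r3 = [1.8, 2.4, 1.1, 2.0, 1.5]
d_hat = density_from_kth_distances(r3, k=3)  # plants per square metre
```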

  6. Cheap DECAF: Density Estimation for Cetaceans from Acoustic Fixed Sensors Using Separate, Non-Linked Devices

    DTIC Science & Technology

    2015-09-30

    interpolation was used to estimate fin whale density in between the hydrophone locations, and the result plotted as a density image. This was repeated every 5... singing fin whale density throughout the year for the study location off Portugal. Color indicates whale density, with calibration scale at right; yellow... spots are hydrophone locations; timeline at top indicates the time of year; circle at lower right is 1000 km², the area used in the unit of whale

  7. A high density field reversed configuration (FRC) target for magnetized target fusion: First internal profile measurements of a high density FRC

    NASA Astrophysics Data System (ADS)

    Intrator, T.; Zhang, S. Y.; Degnan, J. H.; Furno, I.; Grabowski, C.; Hsu, S. C.; Ruden, E. L.; Sanchez, P. G.; Taccetti, J. M.; Tuszewski, M.; Waganaar, W. J.; Wurden, G. A.

    2004-05-01

    Magnetized target fusion (MTF) is a potentially low cost path to fusion, intermediate in plasma regime between magnetic and inertial fusion energy. It requires compression of a magnetized target plasma and consequent heating to fusion relevant conditions inside a converging flux conserver. To demonstrate the physics basis for MTF, a field reversed configuration (FRC) target plasma has been chosen that will ultimately be compressed within an imploding metal liner. The required FRC will need large density, and this regime is being explored by the FRX-L (FRC-Liner) experiment. All theta pinch formed FRCs have some shock heating during formation, but FRX-L depends further on large ohmic heating from magnetic flux annihilation to heat the high density (2-5×10²² m⁻³) plasma to a temperature of Te+Ti ≈ 500 eV. At the field null, anomalous resistivity is typically invoked to characterize the resistive-like flux dissipation process. The first resistivity estimate for a high density collisional FRC is shown here. The flux dissipation process is both a key issue for MTF and an important underlying physics question.

  8. Weld defect identification in friction stir welding using power spectral density

    NASA Astrophysics Data System (ADS)

    Das, Bipul; Pal, Sukhomay; Bag, Swarup

    2018-04-01

    Power spectral density estimates are a powerful means of extracting the useful information retained in a signal. In the current work, the classical periodogram and the Welch periodogram are used to estimate the power spectral density of the vertical and transverse force signals acquired during the friction stir welding process. The estimated spectral densities reveal notable insight for identifying defects in friction stir welded samples: elevated spectral density in the process signals is a key indication of possible internal defects in the welded sample. The developed methodology offers preliminary information on the presence of internal defects and can serve as a first level of safeguard in monitoring the friction stir welding process.
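Both spectral estimators named in the abstract are standard. As a hedged illustration (the welding force signals themselves are not available, so a synthetic signal stands in), the classical and Welch periodograms can be computed with `scipy.signal`:

```python
import numpy as np
from scipy.signal import periodogram, welch

# Synthetic stand-in for a force signal sampled at 1 kHz: a 50 Hz
# component plus noise (the paper's welding signals are not available).
fs = 1000.0
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * rng.standard_normal(t.size)

# Classical periodogram: one full-length FFT, high variance per bin
f_p, pxx_p = periodogram(signal, fs=fs)

# Welch estimate: averaged over overlapping windowed segments,
# trading frequency resolution for a much smoother spectrum
f_w, pxx_w = welch(signal, fs=fs, nperseg=512)

peak_hz = f_w[np.argmax(pxx_w)]  # sits near the 50 Hz component
```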

  9. Recursive estimators of mean-areal and local bias in precipitation products that account for conditional bias

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Seo, Dong-Jun

    2017-03-01

    This paper presents novel formulations of mean field bias (MFB) and local bias (LB) correction schemes that incorporate a conditional bias (CB) penalty. These schemes are based on the operational MFB and LB algorithms in the National Weather Service (NWS) Multisensor Precipitation Estimator (MPE). By incorporating the CB penalty in the cost function of exponential smoothers, we are able to derive augmented versions of recursive estimators of MFB and LB. Two extended versions of the MFB algorithm are presented, one incorporating spatial variation of gauge locations only (MFB-L), and the second integrating both gauge locations and the CB penalty (MFB-X). These two MFB schemes and the extended LB scheme (LB-X) are assessed relative to the original MFB and LB algorithms (referred to as MFB-O and LB-O, respectively) through a retrospective experiment over a radar domain in north-central Texas, and through a synthetic experiment over the Mid-Atlantic region. The outcome of the former experiment indicates that introducing the CB penalty to the MFB formulation leads to small but consistent improvements in bias and CB, while its impacts on hourly correlation and root mean square error (RMSE) are mixed. Incorporating the CB penalty in the LB formulation tends to improve the RMSE at high rainfall thresholds, but its impacts on bias are also mixed. The synthetic experiment suggests that beneficial impacts are more conspicuous at low gauge density (9 per 58,000 km²), and tend to diminish at higher gauge density. The improvement at high rainfall intensity is partly an outcome of the conservativeness of the extended LB scheme. This conservativeness arises in part from the more frequent presence of negative eigenvalues in the extended covariance matrix, which leads to no or smaller incremental changes to the smoothed rainfall amounts.
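The paper's recursive estimators are derived from exponential smoothers. As a deliberately simplified sketch of that ingredient only (the operational MPE algorithm and the CB-penalized MFB-X/LB-X variants differ in detail, and `mfb_update`, `alpha`, and the sample totals below are illustrative), a multiplicative mean field bias can be tracked as a smoothed gauge/radar ratio:

```python
def mfb_update(prev_bias, gauge_total, radar_total, alpha=0.9):
    """One step of a toy recursive mean-field-bias estimator: an
    exponential smoother of the hourly gauge/radar rainfall ratio.
    A sketch of the general idea only, not the operational MPE scheme."""
    ratio = gauge_total / radar_total if radar_total > 0 else prev_bias
    return alpha * prev_bias + (1 - alpha) * ratio

# Three hourly updates from (gauge, radar) rainfall totals in mm
bias = 1.0
for g, r in [(4.2, 3.5), (2.0, 2.5), (5.0, 4.0)]:
    bias = mfb_update(bias, g, r)
```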

  10. An evaluation and comparison of conservation guidelines for an at-risk migratory songbird

    USGS Publications Warehouse

    McNeil, Darin J.; Aldinger, Kyle R.; Bakermans, Marja H.; Lehman, Justin A.; Tisdale, Anna C.; Jones, John A.; Wood, Petra B.; Buehler, David A.; Smalling, Curtis G.; Siefferman, Lynn; Larkin, Jeffrey L.

    2017-01-01

    For at-risk wildlife species, it is important to consider conservation within the process of adaptive management. Golden-winged Warblers (Vermivora chrysoptera) are Neotropical migratory songbirds that are experiencing long-term population declines due in part to the loss of early-successional nesting habitat. Recently developed Golden-winged Warbler habitat management guidelines are being implemented by the USDA Natural Resources Conservation Service (2014) and its partners through the Working Lands For Wildlife (WLFW) program. During 2012–2014, we studied the nesting ecology of Golden-winged Warblers in managed habitats of the eastern US that conformed to WLFW conservation practices. We evaluated five NRCS “management scenarios” with respect to nesting success and attainment of recommended nest-site vegetation conditions outlined in the Golden-winged Warbler breeding habitat guidelines. Using estimates of territory density, pairing rate, nest survival, and clutch size, we also estimated fledgling productivity (number of fledglings/ha) for each management scenario. In general, Golden-winged Warbler nest survival declined as each breeding season advanced, but nest survival was similar across management scenarios. Within each management scenario, vegetation variables had little influence on nest survival. Still, percent Rubus cover and density of >2 m tall shrubs were relevant in some management scenarios. All five management scenarios rarely attained recommended levels of nest-site vegetation conditions for Golden-winged Warblers, yet nest survival was high. Fledgling productivity estimates for each management scenario ranged from 2.1 to 8.6 fledglings/10 hectares. Our results indicate that targeted habitat management for Golden-winged Warblers using a variety of management techniques on private lands can yield high nest survival and fledgling productivity, and thus has the potential to contribute to the species' recovery.

  11. A Balanced Approach to Adaptive Probability Density Estimation.

    PubMed

    Kovacs, Julio A; Helmick, Cailee; Wriggers, Willy

    2017-01-01

    Our development of a Fast (Mutual) Information Matching (FIM) of molecular dynamics time series data led us to the general problem of how to accurately estimate the probability density function of a random variable, especially in cases of very uneven samples. Here, we propose a novel Balanced Adaptive Density Estimation (BADE) method that effectively optimizes the amount of smoothing at each point. To do this, BADE relies on an efficient nearest-neighbor search which results in good scaling for large data sizes. Our tests on simulated data show that BADE exhibits equal or better accuracy than existing methods, and visual tests on univariate and bivariate experimental data show that the results are also aesthetically pleasing. This is due in part to the use of a visual criterion for setting the smoothing level of the density estimate. Our results suggest that BADE offers an attractive new take on the fundamental density estimation problem in statistics. We have applied it on molecular dynamics simulations of membrane pore formation. We also expect BADE to be generally useful for low-dimensional applications in other statistical application domains such as bioinformatics, signal processing and econometrics.
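BADE itself is not reproduced here, but the general idea of adapting the smoothing level to local sample density via a nearest-neighbor search can be sketched with a balloon-type estimator whose Gaussian bandwidth at each evaluation point is the distance to the k-th nearest sample (the function name and the choice of k are illustrative, not BADE's):

```python
import numpy as np

def knn_adaptive_kde(samples, grid, k=10):
    """Balloon-type adaptive KDE: at each evaluation point the Gaussian
    bandwidth equals the distance to the k-th nearest sample, so the
    smoothing widens automatically in sparsely sampled regions.
    (A sketch of the general idea only, not the BADE algorithm.)"""
    samples = np.asarray(samples, dtype=float)
    out = np.empty(len(grid))
    for i, x in enumerate(grid):
        d = np.abs(samples - x)
        h = np.sort(d)[k - 1]          # k-th nearest-neighbour distance
        w = np.exp(-0.5 * (d / h) ** 2)
        out[i] = w.sum() / (len(samples) * h * np.sqrt(2 * np.pi))
    return out

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)          # dense center, sparse tails
grid = np.linspace(-3, 3, 61)
dens = knn_adaptive_kde(x, grid, k=25)
```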

  12. Evaluating analytical approaches for estimating pelagic fish biomass using simulated fish communities

    USGS Publications Warehouse

    Yule, Daniel L.; Adams, Jean V.; Warner, David M.; Hrabik, Thomas R.; Kocovsky, Patrick M.; Weidel, Brian C.; Rudstam, Lars G.; Sullivan, Patrick J.

    2013-01-01

    Pelagic fish assessments often combine large amounts of acoustic-based fish density data and limited midwater trawl information to estimate species-specific biomass density. We compared the accuracy of five apportionment methods for estimating pelagic fish biomass density using simulated communities with known fish numbers that mimic Lakes Superior, Michigan, and Ontario, representing a range of fish community complexities. Across all apportionment methods, the error in the estimated biomass generally declined with increasing effort, but methods that accounted for community composition changes with water column depth performed best. Correlations between trawl catch and the true species composition were highest when more fish were caught, highlighting the benefits of targeted trawling in locations of high fish density. Pelagic fish surveys should incorporate geographic and water column depth stratification in the survey design, use apportionment methods that account for species-specific depth differences, target midwater trawling effort in areas of high fish density, and include at least 15 midwater trawls. With relatively basic biological information, simulations of fish communities and sampling programs can optimize effort allocation and reduce error in biomass estimates.
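The apportionment step common to the compared methods can be illustrated with a minimal sketch: acoustic biomass density in each water-column layer is split among species in proportion to the midwater trawl catch for that layer. The species names and numbers below are invented for the demo, and the five methods in the paper differ in how layers and strata are defined:

```python
def apportion_biomass(acoustic_density, catch_by_layer):
    """Apportion total acoustic biomass density (kg/ha) per depth layer
    to species using trawl catch proportions in that layer. A generic
    sketch of the apportionment idea, not a specific method from the paper."""
    result = {}
    for layer, total in acoustic_density.items():
        catch = catch_by_layer[layer]
        n = sum(catch.values())
        for species, count in catch.items():
            result[species] = result.get(species, 0.0) + total * count / n
    return result

# Hypothetical two-layer survey with two species in the trawl catches
dens = apportion_biomass(
    {"epilimnion": 12.0, "hypolimnion": 30.0},
    {"epilimnion": {"rainbow smelt": 40, "cisco": 10},
     "hypolimnion": {"rainbow smelt": 5, "cisco": 45}},
)
```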

  13. Use of GIS for estimating potential and actual forest biomass for continental South and Southeast Asia.

    Treesearch

    L. R. Iverson; S. Brown; A. Prasad; H. Mitasova; A. J. R. Gillespie; A. E. Lugo

    1994-01-01

    A geographic information system (GIS) was used to estimate total biomass and biomass density of the tropical forest in south and southeast Asia because available data from forest inventories were insufficient to extrapolate biomass-density estimates across the region.

  14. Local dark matter and dark energy as estimated on a scale of ~1 Mpc in a self-consistent way

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.; Teerikorpi, P.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.; Byrd, G. G.

    2009-12-01

    Context: Dark energy was first detected from large distances on gigaparsec scales. If it is vacuum energy (or Einstein's Λ), it should also exist in very local space. Here we discuss its measurement on megaparsec scales of the Local Group. Aims: We combine the modified Kahn-Woltjer method for the Milky Way-M 31 binary and the HST observations of the expansion flow around the Local Group in order to study, in a self-consistent way and simultaneously, the local density of dark energy and the dark matter mass contained within the Local Group. Methods: A theoretical model is used that accounts for the dynamical effects of dark energy on a scale of ~1 Mpc. Results: The local dark energy density is put into the range 0.8-3.7 ρv (ρv is the globally measured density), and the Local Group mass lies within 3.1-5.8×10¹² M⊙. The lower limit of the local dark energy density, about 4/5 of the global value, is determined by the natural binding condition for the group binary and the maximal zero-gravity radius. The near coincidence of two values measured with independent methods on scales differing by ~1000 times is remarkable. The mass ~4×10¹² M⊙ and the local dark energy density ~ρv are also consistent with the expansion flow close to the Local Group, within the standard cosmological model. Conclusions: One should take into account the dark energy in dynamical mass estimation methods for galaxy groups, including the virial theorem. Our analysis gives new strong evidence in favor of Einstein's idea of the universal antigravity described by the cosmological constant.

  15. Improving CTIPe neutral density response and recovery during geomagnetic storms

    NASA Astrophysics Data System (ADS)

    Fedrizzi, M.; Fuller-Rowell, T. J.; Codrescu, M.; Mlynczak, M. G.; Marsh, D. R.

    2013-12-01

    The temperature of the Earth's thermosphere can be substantially increased during geomagnetic storms, mainly due to high-latitude Joule heating induced by magnetospheric convection and auroral particle precipitation. Thermospheric heating increases atmospheric density and the drag on low-Earth orbiting satellites. The main cooling mechanism controlling the recovery of neutral temperature and density following geomagnetic activity is infrared emission from nitric oxide (NO) at 5.3 micrometers. NO is produced by both solar and auroral activity, the former due to solar EUV and X-rays, the latter due to dissociation of N2 by particle precipitation, and has a typical lifetime of 12 to 24 hours in the mid and lower thermosphere. NO cooling in the thermosphere peaks between 150 and 200 km altitude. In this study, a global, three-dimensional, time-dependent, non-linear coupled model of the thermosphere, ionosphere, plasmasphere, and electrodynamics (CTIPe) is used to simulate the response and recovery timescales of the upper atmosphere following geomagnetic activity. CTIPe uses time-dependent estimates of NO obtained from the Marsh et al. [2004] empirical model based on Student Nitric Oxide Explorer (SNOE) satellite data rather than solving for minor species photochemistry self-consistently. This empirical model is based solely on SNOE observations, made when Kp rarely exceeded 5. For conditions between Kp 5 and 9, a linear extrapolation has been used. In order to improve the accuracy of the extrapolation algorithm, CTIPe model estimates of global NO cooling have been compared with the NASA TIMED/SABER satellite measurements of radiative power at 5.3 micrometers. The comparisons have enabled improvement in the timescale for neutral density response and recovery during geomagnetic storms. CTIPe neutral density response and recovery rates are verified by comparison with CHAMP satellite observations.

  16. Separating the effects of intra- and interspecific age-structured interactions in an experimental fish assemblage

    USGS Publications Warehouse

    Taylor, R.C.; Trexler, J.C.; Loftus, W.F.

    2001-01-01

    We documented patterns of age-structured biotic interactions in four mesocosm experiments with an assemblage of three species of co-occurring fishes from the Florida Everglades, the eastern mosquitofish (Gambusia holbrooki), sailfin molly (Poecilia latipinna), and bluefin killifish (Lucania goodei). These species were chosen based on their high abundance and overlapping diets. Juvenile mosquitofish and sailfin mollies, at a range of densities matching field estimates, were maintained in the presence of adult mosquitofish, sailfin mollies, and bluefin killifish to test for effects of competition and predation on juvenile survival and growth. The mesocosms held 1,200 L of water and all conditions were set to simulate those in Shark River Slough, Everglades National Park (ENP), USA. We placed floating mats of periphyton and bladderwort in each tank in standard volumes that matched field values to provide cover and to introduce invertebrate prey. Of 15 possible intra- and interspecific age-structured interactions, we found 7 to be present at the densities of these fish found in Shark River Slough marshes. Predation by adult mosquitofish on juvenile fish, including conspecifics, was the strongest effect observed. We also observed growth limitation in mosquitofish and sailfin molly juveniles from intra- and interspecific competition. When maintained at high densities, juvenile mosquitofish changed their diets to include more cladocerans and fewer chironomid larvae relative to low densities. We estimated size-specific gape limitation by adult mosquitofish when consuming juvenile mosquitofish and sailfin mollies. At high field densities, intraspecific competition might prolong the time period when juveniles are vulnerable to predation by adult mosquitofish. These results suggest that path analysis, or other techniques used to document food-web interactions, must include age-specific roles of these fishes.

  17. Use of sibling relationship reconstruction to complement traditional monitoring in fisheries management and conservation of brown trout.

    PubMed

    Ozerov, Mikhail; Jürgenstein, Tauno; Aykanat, Tutku; Vasemägi, Anti

    2015-08-01

    Declining trends in the abundance of many fish urgently call for more efficient and informative monitoring methods that would provide necessary demographic data for the evaluation of existing conservation, restoration, and management actions. We investigated how genetic sibship reconstruction from young-of-the-year brown trout (Salmo trutta L.) juveniles provides valuable, complementary demographic information that allowed us to disentangle the effects of habitat quality and number of breeders on juvenile density. We studied restored (n = 15) and control (n = 15) spawning and nursery habitats in 16 brown trout rivers and streams over 2 consecutive years to evaluate the effectiveness of habitat restoration activities. Similar juvenile densities both in restored and control spawning and nursery grounds were observed. Similarly, no differences in the effective number of breeders, Nb(SA), were detected between habitats, indicating that brown trout readily used recently restored spawning grounds. Only a weak relationship between Nb(SA) and juvenile density was observed, suggesting that multiple factors affect juvenile abundance. In some areas, very low estimates of Nb(SA) were found at sites with high juvenile density, indicating that a small number of breeders can produce a high number of progeny in favorable conditions. In other sites, high Nb(SA) estimates were associated with low juvenile density, suggesting low habitat quality or lack of suitable spawning substrate in relation to available breeders. Based on these results, we recommend the incorporation of genetic sibship reconstruction to ongoing and future fish evaluation and monitoring programs to gain novel insights into local demographic and evolutionary processes relevant for fisheries management, habitat restoration, and conservation. © 2015 Society for Conservation Biology.

  18. A body composition model to estimate mammalian energy stores and metabolic rates from body mass and body length, with application to polar bears.

    PubMed

    Molnár, Péter K; Klanjscek, Tin; Derocher, Andrew E; Obbard, Martyn E; Lewis, Mark A

    2009-08-01

    Many species experience large fluctuations in food availability and depend on energy from fat and protein stores for survival, reproduction and growth. Body condition and, more specifically, energy stores thus constitute key variables in the life history of many species. Several indices exist to quantify body condition but none can provide the amount of stored energy. To estimate energy stores in mammals, we propose a body composition model that differentiates between structure and storage of an animal. We develop and parameterize the model specifically for polar bears (Ursus maritimus Phipps) but all concepts are general and the model could be easily adapted to other mammals. The model provides predictive equations to estimate structural mass, storage mass and storage energy from an appropriately chosen measure of body length and total body mass. The model also provides a means to estimate basal metabolic rates from body length and consecutive measurements of total body mass. Model estimates of body composition, structural mass, storage mass and energy density of 970 polar bears from Hudson Bay were consistent with the life history and physiology of polar bears. Metabolic rate estimates of fasting adult males derived from the body composition model corresponded closely to theoretically expected and experimentally measured metabolic rates. Our method is simple, non-invasive and provides considerably more information on the energetic status of individuals than currently available methods.

  19. Practical sampling plans for Varroa destructor (Acari: Varroidae) in Apis mellifera (Hymenoptera: Apidae) colonies and apiaries.

    PubMed

    Lee, K V; Moon, R D; Burkness, E C; Hutchison, W D; Spivak, M

    2010-08-01

    The parasitic mite Varroa destructor Anderson & Trueman (Acari: Varroidae) is arguably the most detrimental pest of the European-derived honey bee, Apis mellifera L. Unfortunately, beekeepers lack a standardized sampling plan to make informed treatment decisions. Based on data from 31 commercial apiaries, we developed sampling plans for use by beekeepers and researchers to estimate the density of mites in individual colonies or whole apiaries. Beekeepers can estimate a colony's mite density with a chosen level of precision by dislodging mites from approximately 300 adult bees taken from one brood box frame in the colony, and they can extrapolate to mite density on a colony's adults and pupae combined by doubling the number of mites on adults. For sampling whole apiaries, beekeepers can repeat the process in each of n = 8 colonies, regardless of apiary size. Researchers desiring greater precision can estimate mite density in an individual colony by examining three 300-bee sample units. Extrapolation to density on adults and pupae may require independent estimates of numbers of adults, of pupae, and of their respective mite densities. Researchers can estimate apiary-level mite density by taking one 300-bee sample unit per colony, but should do so from a variable number of colonies, depending on apiary size. These practical sampling plans will allow beekeepers and researchers to quantify mite infestation levels and enhance understanding and management of V. destructor.
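The colony-level plan in the abstract reduces to simple arithmetic; a minimal sketch (the function name is ours) converts a single 300-bee sample to mites per 100 bees and applies the doubling rule from the abstract for adults plus pupae:

```python
def colony_mite_density(mites_dislodged, bees_sampled=300):
    """Mites per 100 adult bees from one alcohol-wash-style sample, plus
    a whole-colony (adults + pupae) figure using the abstract's rule of
    thumb that total mites are roughly double the count on adults."""
    per100 = 100.0 * mites_dislodged / bees_sampled
    return {"per_100_bees": per100, "colony_per_100_bees": 2 * per100}

est = colony_mite_density(9)   # 9 mites dislodged from a 300-bee sample
```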

  20. Characteristics and carbon stable isotopes of fluids in the Southern Kerala granulites and their bearing on the source of CO2

    NASA Technical Reports Server (NTRS)

    Santosh, M.; Jackson, D. H.; Mattey, D. P.; Harris, N. B. W.

    1988-01-01

    Carbon dioxide-rich inclusions commonly occur in the banded charnockites and khondalites of southern Kerala as well as in the incipient charnockites formed by desiccation of gneisses along oriented zones. The combined high density fluid inclusion isochores and the range of thermometric estimates from mineral assemblages indicate entrapment pressures in the range of 5.4 to 6.1 Kbar. The CO2 equation of state barometry closely compares with the 5 plus or minus 1 Kbar estimate from mineral phases for the region. The isochores for the high density fluid inclusions in all the three rock types pass through the P-T domain recorded by phase equilibria, implying that carbon dioxide was the dominating ambient fluid species during peak metamorphic conditions. In order to constrain the source of fluids and to evaluate the mechanism of desiccation, researchers undertook detailed investigations of the carbon stable isotope composition of entrapped fluids. Researchers report here the results of preliminary studies in some of the classic localities in southern Kerala namely, Ponmudi, Kottavattom, Manali and Kadakamon.

  1. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
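SciPy has no q-exponential distribution, so a hedged sketch of the likelihood comparison instead pits the Weibull MLE against its nested exponential special case (shape fixed at 1) on synthetic durations; the shape value 0.8 and the sample size are illustrative, not estimates from the Chinese stock data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic normalized intertrade durations (the limit-order-book data
# are proprietary); the generator here is Weibull with shape c = 0.8.
durations = stats.weibull_min.rvs(0.8, scale=1.0, size=5000, random_state=rng)

# Maximum-likelihood Weibull fit, location pinned at 0 as for durations
c_hat, loc, scale_hat = stats.weibull_min.fit(durations, floc=0)
ll_weibull = np.sum(stats.weibull_min.logpdf(durations, c_hat, loc, scale_hat))

# Exponential as the null model: Weibull with shape fixed at 1
ll_exp = np.sum(stats.expon.logpdf(durations, scale=durations.mean()))

# The stretched-exponential (Weibull) fit should win in log-likelihood
```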

  2. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys

    PubMed Central

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov–Malyshev–Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683
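The FMP track-count estimator extended above has a simple closed form: density is (π/2) times the number of track crossings per unit transect length, divided by the mean daily movement distance. A minimal sketch, with invented example numbers:

```python
import math

def fmp_density(crossings, transect_km, daily_movement_km):
    """Formozov-Malyshev-Pereleshin estimator: mean animal density
    (animals per km^2) from track crossings counted on snow transects,
    given the species' mean daily travel distance."""
    return (math.pi / 2) * crossings / (transect_km * daily_movement_km)

# e.g. 24 track crossings on 40 km of transect, daily movement 3 km/day
d_hat = fmp_density(24, 40.0, 3.0)
```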

  3. The use of photographic rates to estimate densities of tigers and other cryptic mammals: a comment on misleading conclusions

    USGS Publications Warehouse

    Jennelle, C.S.; Runge, M.C.; MacKenzie, D.I.

    2002-01-01

    The search for easy-to-use indices that substitute for direct estimation of animal density is a common theme in wildlife and conservation science, but one fraught with well-known perils (Nichols & Conroy, 1996; Yoccoz, Nichols & Boulinier, 2001; Pollock et al., 2002). To establish the utility of an index as a substitute for an estimate of density, one must: (1) demonstrate a functional relationship between the index and density that is invariant over the desired scope of inference; (2) calibrate the functional relationship by obtaining independent measures of the index and the animal density; (3) evaluate the precision of the calibration (Diefenbach et al., 1994). Carbone et al. (2001) argue that the number of camera-days per photograph is a useful index of density for large, cryptic, forest-dwelling animals, and proceed to calibrate this index for tigers (Panthera tigris). We agree that a properly calibrated index may be useful for rapid assessments in conservation planning. However, Carbone et al. (2001), who desire to use their index as a substitute for density, do not adequately address the three elements noted above. Thus, we are concerned that others may view their methods as justification for not attempting directly to estimate animal densities, without due regard for the shortcomings of their approach.

  4. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness-of-fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.
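The two-stage procedure can be sketched for the simplest uncensored case: a Clayton copula with exponential margins, both choices hypothetical stand-ins for the models studied in the thesis (which also handles censoring). Stage 1 fits each margin by marginal MLE; stage 2 maximizes the copula likelihood with the margins held fixed:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
n, theta_true = 4000, 2.0

# Simulate Clayton-dependent uniforms by conditional inversion, then
# map to exponential "failure time" margins (illustrative choices).
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u ** -theta_true * (w ** (-theta_true / (1 + theta_true)) - 1) + 1) ** (-1 / theta_true)
t1 = stats.expon.ppf(u, scale=2.0)
t2 = stats.expon.ppf(v, scale=0.5)

# Stage 1: fit each margin separately (exponential MLE scale = sample mean)
s1, s2 = t1.mean(), t2.mean()
u_hat = stats.expon.cdf(t1, scale=s1)
v_hat = stats.expon.cdf(t2, scale=s2)

# Stage 2: maximize the Clayton copula log-likelihood with margins fixed;
# log c(u,v) = log(1+theta) - (1+theta)(log u + log v)
#              - (2 + 1/theta) log(u^-theta + v^-theta - 1)
def neg_loglik(theta):
    s = u_hat ** -theta + v_hat ** -theta - 1
    return -np.sum(np.log1p(theta)
                   - (1 + theta) * (np.log(u_hat) + np.log(v_hat))
                   - (2 + 1 / theta) * np.log(s))

theta_hat = minimize_scalar(neg_loglik, bounds=(0.01, 10), method="bounded").x
```

With the margins fixed, the dependency parameter is a one-dimensional optimization, which is what makes the two-stage approach so much cheaper than full maximum likelihood.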

  5. Compatibility of lithium plasma-facing surfaces with high edge temperatures in the Lithium Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majeski, R.; Bell, R. E.; Boyle, D. P.

    We measured high edge electron temperatures (200 eV or greater) at the wall-limited plasma boundary in the Lithium Tokamak Experiment (LTX). Flat electron temperature profiles are a long-predicted consequence of low recycling boundary conditions. Plasma density in the outer scrape-off layer is very low, 2-3 × 10¹⁷ m⁻³, consistent with a low recycling metallic lithium boundary. In spite of the high edge temperature, the core impurity content is low. Zeff is estimated to be ~1.2, with a very modest contribution (< 0.1) from lithium. Experiments are transient. Gas puffing is used to increase the plasma density. After gas injection stops, the discharge density is allowed to drop, and the edge is pumped by the low recycling lithium wall. An upgrade to LTX, LTX-beta, which includes a 35 A, 20 kV neutral beam injector (on loan to LTX from Tri-Alpha Energy) to provide core fueling to maintain constant density, as well as auxiliary heating, is underway. LTX-beta is briefly described.

  6. Compatibility of lithium plasma-facing surfaces with high edge temperatures in the Lithium Tokamak Experiment

    DOE PAGES

    Majeski, R.; Bell, R. E.; Boyle, D. P.; ...

    2017-03-20

    We measured high edge electron temperatures (200 eV or greater) at the wall-limited plasma boundary in the Lithium Tokamak Experiment (LTX). Flat electron temperature profiles are a long-predicted consequence of low recycling boundary conditions. Plasma density in the outer scrape-off layer is very low, 2-3 × 10^17 m^-3, consistent with a low recycling metallic lithium boundary. In spite of the high edge temperature, the core impurity content is low. Z_eff is estimated to be ~1.2, with a very modest contribution (<0.1) from lithium. Experiments are transient. Gas puffing is used to increase the plasma density. After gas injection stops, the discharge density is allowed to drop, and the edge is pumped by the low recycling lithium wall. An upgrade to LTX, LTX-β, which includes a 35 A, 20 kV neutral beam injector (on loan to LTX from Tri-Alpha Energy) to provide core fueling to maintain constant density, as well as auxiliary heating, is underway. LTX-β is briefly described.

  7. Compatibility of lithium plasma-facing surfaces with high edge temperatures in the Lithium Tokamak Experiment

    NASA Astrophysics Data System (ADS)

    Majeski, R.; Bell, R. E.; Boyle, D. P.; Kaita, R.; Kozub, T.; LeBlanc, B. P.; Lucia, M.; Maingi, R.; Merino, E.; Raitses, Y.; Schmitt, J. C.; Allain, J. P.; Bedoya, F.; Bialek, J.; Biewer, T. M.; Canik, J. M.; Buzi, L.; Koel, B. E.; Patino, M. I.; Capece, A. M.; Hansen, C.; Jarboe, T.; Kubota, S.; Peebles, W. A.; Tritz, K.

    2017-05-01

    High edge electron temperatures (200 eV or greater) have been measured at the wall-limited plasma boundary in the Lithium Tokamak Experiment (LTX). Flat electron temperature profiles are a long-predicted consequence of low recycling boundary conditions. Plasma density in the outer scrape-off layer is very low, 2-3 × 10^17 m^-3, consistent with a low recycling metallic lithium boundary. Despite the high edge temperature, the core impurity content is low. Z_eff is estimated to be ~1.2, with a very modest contribution (<0.1) from lithium. Experiments are transient. Gas puffing is used to increase the plasma density. After gas injection stops, the discharge density is allowed to drop, and the edge is pumped by the low recycling lithium wall. An upgrade to LTX, LTX-β, which includes a 35 A, 20 kV neutral beam injector (on loan to LTX from Tri-Alpha Energy) to provide core fueling to maintain constant density, as well as auxiliary heating, is underway. LTX-β is briefly described.

  8. Contribution of Proton Capture Reactions to the Ascertained Abundance of Fluorine in the Evolved Stars of Globular Cluster M4, M22, 47 Tuc and NGC 6397

    NASA Astrophysics Data System (ADS)

    Mahanta, Upakul; Goswami, Aruna; Duorah, H. L.; Duorah, K.

    2017-12-01

    The origin of the abundance patterns and the (anti)correlations among elements found in stars of globular clusters (GCs) remains unexplained to date. Proton-capture reactions are presently recognised as one of the leading candidates for this observed behaviour in second-generation stars. We propose a reaction network for the carbon-nitrogen-oxygen-fluorine (CNOF) nuclear cycle at evolved stellar conditions, since fluorine (^{19}F) is one element affected by proton-capture reactions. The stellar temperatures considered here range from 2× 10^7 to 10× 10^7 K, with accretion occurring at material densities of 10^2 g/cm^3 and 10^3 g/cm^3. Temperature-density conditions of this kind are likely to prevail in the H-burning shells of evolved stars. The estimated ^{19}F abundances are then compared with data observed for some metal-poor giants of the GCs M4, M22, 47 Tuc and NGC 6397. The calculated ^{19}F abundances show excellent agreement with the observed ones, with a correlation coefficient above 0.9, supporting the occurrence of this nuclear cycle under the adopted temperature-density conditions.

  9. Distribution, behavior, and condition of herbivorous fishes on coral reefs track algal resources.

    PubMed

    Tootell, Jesse S; Steele, Mark A

    2016-05-01

    Herbivore distribution can impact community structure and ecosystem function. On coral reefs, herbivores are thought to play an important role in promoting coral dominance, but how they are distributed relative to algae is not well known. Here, we evaluated whether the distribution, behavior, and condition of herbivorous fishes correlated with algal resource availability at six sites in the back reef environment of Moorea, French Polynesia. Specifically, we tested the hypotheses that increased algal turf availability would coincide with (1) increased biomass, (2) altered foraging behavior, and (3) increased energy reserves of herbivorous fishes. Fish biomass and algal cover were visually estimated along underwater transects; behavior of herbivorous fishes was quantified by observations of focal individuals; fish were collected to assess their condition; and algal turf production rates were measured on standardized tiles. The best predictor of herbivorous fish biomass was algal turf production, with fish biomass increasing with algal production. Biomass of herbivorous fishes was also negatively related to sea urchin density, suggesting competition for limited resources. Regression models including both algal turf production and urchin density explained 94% of the variation in herbivorous fish biomass among sites spread over ~20 km. Behavioral observations of the parrotfish Chlorurus sordidus revealed that foraging area increased as algal turf cover decreased. Additionally, energy reserves increased with algal turf production, but declined with herbivorous fish density, implying that algal turf is a limited resource for this species. Our findings support the hypothesis that herbivorous fishes can spatially track algal resources on coral reefs.

  10. Density-dependent host choice by disease vectors: epidemiological implications of the ideal free distribution.

    PubMed

    Basáñez, María-Gloria; Razali, Karina; Renz, Alfons; Kelly, David

    2007-03-01

    The proportion of vector blood meals taken on humans (the human blood index, h) appears as a squared term in classical expressions of the basic reproduction ratio (R(0)) for vector-borne infections. Consequently, R(0) varies non-linearly with h. Estimates of h, however, constitute mere snapshots of a parameter that is predicted, from evolutionary theory, to vary with vector and host abundance. We test this prediction using a population dynamics model of river blindness assuming that, before initiation of vector control or chemotherapy, recorded measures of vector density and human infection accurately represent endemic equilibrium. We obtain values of h that satisfy the condition that the effective reproduction ratio (R(e)) must equal 1 at equilibrium. Values of h thus obtained decrease with vector density, decrease with the vector:human ratio and make R(0) respond non-linearly rather than increase linearly with vector density. We conclude that if vectors are less able to obtain human blood meals as their density increases, antivectorial measures may not lead to proportional reductions in R(0) until very low vector levels are achieved. Density dependence in the contact rate of infectious diseases transmitted by insects may be an important non-linear process with implications for their epidemiology and control.
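
    The squared role of h can be made concrete with a toy Ross-Macdonald-style expression; the functional form is the classical textbook one and every parameter value is invented, so this illustrates the non-linearity only, not the paper's onchocerciasis model.

```python
def r0(m, h, a=0.3, b=0.5, c=0.5, mu=0.1, r=0.05):
    """Toy vector-borne R0.

    m : vector:human ratio
    h : human blood index (enters squared: once per direction of
        transmission, vector-to-human and human-to-vector)
    a, b, c, mu, r : invented biting, transmission, vector mortality
        and host recovery parameters
    """
    return (m * (a * h) ** 2 * b * c) / (mu * r)

# Halving the human blood index quarters R0:
ratio = r0(10.0, 0.8) / r0(10.0, 0.4)
```

    Because h appears squared, any density-dependent variation in feeding behaviour is amplified in R0, which is why snapshot estimates of h can mislead predictions about control.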

  11. Production of nitrous oxide in the auroral D and E regions

    NASA Technical Reports Server (NTRS)

    Zipf, E. C.; Prasad, S. S.

    1980-01-01

    A study of nitrous oxide formation mechanisms indicates that N2O concentrations greater than 10^9 per cm^3 could be produced in IBC III aurora or by lower-level activity lasting for many hours, and, in favorable conditions, the N2O concentration could exceed the local nitric oxide density. An upper limit on the globally averaged N2O production rate from auroral activity is estimated at 2 × 10^27 per second.

  12. Estimating food portions. Influence of unit number, meal type and energy density.

    PubMed

    Almiron-Roig, Eva; Solis-Trapala, Ivonne; Dodd, Jessica; Jebb, Susan A

    2013-12-01

    Estimating how much is appropriate to consume can be difficult, especially for foods presented in multiple units, those with ambiguous energy content and for snacks. This study tested the hypothesis that the number of units (single vs. multi-unit), meal type and food energy density disrupts accurate estimates of portion size. Thirty-two healthy weight men and women attended the laboratory on 3 separate occasions to assess the number of portions contained in 33 foods or beverages of varying energy density (1.7-26.8 kJ/g). Items included 12 multi-unit and 21 single unit foods; 13 were labelled "meal", 4 "drink" and 16 "snack". Departures in portion estimates from reference amounts were analysed with negative binomial regression. Overall, participants tended to underestimate the number of portions displayed. Males showed greater errors in estimation than females (p=0.01). Single unit foods and those labelled as 'meal' or 'beverage' were estimated with greater error than multi-unit and 'snack' foods (p=0.02 and p<0.001 respectively). The number of portions of high energy density foods was overestimated while the number of portions of beverages and medium energy density foods were underestimated by 30-46%. In conclusion, participants tended to underestimate the reference portion size for a range of food and beverages, especially single unit foods and foods of low energy density and, unexpectedly, overestimated the reference portion of high energy density items. There is a need for better consumer education of appropriate portion sizes to aid adherence to a healthy diet. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. A common visual metric for approximate number and density

    PubMed Central

    Dakin, Steven C.; Tibber, Marc S.; Greenwood, John A.; Kingdom, Frederick A. A.; Morgan, Michael J.

    2011-01-01

    There is considerable interest in how humans estimate the number of objects in a scene in the context of an extensive literature on how we estimate the density (i.e., spacing) of objects. Here, we show that our sense of number and our sense of density are intertwined. Presented with two patches, observers found it more difficult to spot differences in either density or numerosity when those patches were mismatched in overall size, and their errors were consistent with larger patches appearing both denser and more numerous. We propose that density is estimated using the relative response of mechanisms tuned to low and high spatial frequencies (SFs), because energy at high SFs is largely determined by the number of objects, whereas low SF energy depends more on the area occupied by elements. This measure is biased by overall stimulus size in the same way as human observers, and by estimating number using the same measure scaled by relative stimulus size, we can explain all of our results. This model is a simple, biologically plausible common metric for perceptual number and density. PMID:22106276

  14. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

    Turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. To explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, Δu and ΔT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Drawing on diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density functions. This comprehensive analysis also employs a set of statistical distributions to showcase their ability to capture features of the velocity and temperature increments' probability density functions (pdfs) across multiscale atmospheric motions. An approach is proposed for estimating these pdfs using the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data of this kind. Using this technique, we show that higher-order moments can be estimated accurately with a limited sample size, which has been a persistent concern in atmospheric turbulence research. Using robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions against the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. The NIG model and its parameters then display the variation of the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions. This novel approach can characterize increment fields with only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for future model development. A number of applications can benefit from this methodology, including wind energy and optical wave propagation.
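
    The MLE step for the NIG family can be sketched with SciPy's built-in distribution. The data below are synthetic draws standing in for measured increments, and the shape values are invented; this is a minimal sketch, not the dissertation's pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for a sample of velocity increments
a_true, b_true = 2.0, 0.5        # NIG tail-weight and asymmetry shapes
data = stats.norminvgauss.rvs(a_true, b_true, loc=0.0, scale=1.0,
                              size=3000, random_state=rng)

# Maximum likelihood fit of the four NIG parameters
a_hat, b_hat, loc_hat, scale_hat = stats.norminvgauss.fit(data)

# One goodness-of-fit check: Kolmogorov-Smirnov against the fitted model
ks = stats.kstest(data, "norminvgauss",
                  args=(a_hat, b_hat, loc_hat, scale_hat))
```

    The four fitted parameters summarize the increment pdf at one separation scale; repeating the fit across scales traces out the scale dependence discussed above.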

  15. Body Density Estimates from Upper-Body Skinfold Thicknesses Compared to Air-Displacement Plethysmography

    USDA-ARS?s Scientific Manuscript database

    Technical Summary Objectives: Determine the effect of body mass index (BMI) on the accuracy of body density (Db) estimated with skinfold thickness (SFT) measurements compared to air displacement plethysmography (ADP) in adults. Subjects/Methods: We estimated Db with SFT and ADP in 131 healthy men an...

  16. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading, as in Stein et al. (1997) and Hardebeck (2004). We have recast these two approaches in a framework based on a simple, generalized rate-change formulation to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, and (2) changes in estimates of the conditional probability of failure of a single fault, for which the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (probability density function, or PDF) that describes a population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
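
    The conditional probability such approaches estimate can be written generically as P(t < T ≤ t+Δt | T > t) = [F(t+Δt) − F(t)] / [1 − F(t)] for a recurrence-time CDF F. The lognormal recurrence model and all numbers below are illustrative choices, not values from this paper.

```python
from scipy import stats

def conditional_prob(t, dt, median_recurrence=150.0, sigma=0.5):
    """P(earthquake in (t, t+dt] | no earthquake by time t) under a
    lognormal recurrence-time PDF (a common, but here assumed, choice)."""
    dist = stats.lognorm(s=sigma, scale=median_recurrence)
    return (dist.cdf(t + dt) - dist.cdf(t)) / (1.0 - dist.cdf(t))

# The 30-year conditional probability grows as the elapsed time since
# the last event approaches and exceeds the typical recurrence time
p_early = conditional_prob(50.0, 30.0)
p_late = conditional_prob(200.0, 30.0)
```

    Stress perturbations of the kind discussed above enter by shifting the effective elapsed time (or the loading rate), which moves the estimate along this conditional-probability curve.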

  17. Thermospheric neutral density estimates from heater-induced ion up-flow at EISCAT

    NASA Astrophysics Data System (ADS)

    Kosch, Michael; Ogawa, Yasunobu; Yamazaki, Yosuke; Vickers, Hannah; Blagoveshchenskaya, Nataly

    We exploit a recently-developed technique to estimate the upper thermospheric neutral density using measurements of ionospheric plasma parameters made by the EISCAT UHF radar during ionospheric modification experiments. Heating the electrons changes the balance between upward plasma pressure gradient and downward gravity, resulting in ion up-flow up to ~200 m/s. This field-aligned flow is retarded by collisions, which is directly related to the neutral density. Whilst the ion up-flow is consistent with the plasma pressure gradient, the estimated thermospheric neutral density depends on the assumed composition, which varies with altitude. Results in the topside ionosphere are presented.

  18. Geographical Distribution of Biomass Carbon in Tropical Southeast Asian Forests: A Database (NPD-068)

    DOE Data Explorer

    Brown, Sandra [University of Illinois, Urbana, Illinois (USA); Iverson, Louis R. [University of Illinois, Urbana, Illinois (USA); Prasad, Anantha [University of Illinois, Urbana, Illinois (USA); Beaty, Tammy W. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA); Olsen, Lisa M. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA); Cushman, Robert M. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA); Brenkert, Antoinette L. [CDIAC, Oak Ridge National Laboratory, Oak Ridge, TN (USA)

    2001-03-01

    A database was generated of estimates of geographically referenced carbon densities of forest vegetation in tropical Southeast Asia for 1980. A geographic information system (GIS) was used to incorporate spatial databases of climatic, edaphic, and geomorphological indices and vegetation to estimate potential (i.e., in the absence of human intervention and natural disturbance) carbon densities of forests. The resulting map was then modified to estimate actual 1980 carbon density as a function of population density and climatic zone. The database covers the following 13 countries: Bangladesh, Brunei, Cambodia (Campuchea), India, Indonesia, Laos, Malaysia, Myanmar (Burma), Nepal, the Philippines, Sri Lanka, Thailand, and Vietnam.

  19. Comparison of Clinical and Automated Breast Density Measurements: Implications for Risk Prediction and Supplemental Screening

    PubMed Central

    Brandt, Kathleen R.; Scott, Christopher G.; Ma, Lin; Mahmoudzadeh, Amir P.; Jensen, Matthew R.; Whaley, Dana H.; Wu, Fang Fang; Malkov, Serghei; Hruska, Carrie B.; Norman, Aaron D.; Heine, John; Shepherd, John; Pankratz, V. Shane; Kerlikowske, Karla

    2016-01-01

    Purpose To compare the classification of breast density with two automated methods, Volpara (version 1.5.0; Matakina Technology, Wellington, New Zealand) and Quantra (version 2.0; Hologic, Bedford, Mass), with clinical Breast Imaging Reporting and Data System (BI-RADS) density classifications and to examine associations of these measures with breast cancer risk. Materials and Methods In this study, 1911 patients with breast cancer and 4170 control subjects matched for age, race, examination date, and mammography machine were evaluated. Participants underwent mammography at Mayo Clinic or one of four sites within the San Francisco Mammography Registry between 2006 and 2012 and provided informed consent or a waiver for research, in compliance with HIPAA regulations and institutional review board approval. Digital mammograms were retrieved a mean of 2.1 years (range, 6 months to 6 years) before cancer diagnosis, with the corresponding clinical BI-RADS density classifications, and Volpara and Quantra density estimates were generated. Agreement was assessed with weighted κ statistics among control subjects. Breast cancer associations were evaluated with conditional logistic regression, adjusted for age and body mass index. Odds ratios, C statistics, and 95% confidence intervals (CIs) were estimated. Results Agreement between clinical BI-RADS density classifications and Volpara and Quantra BI-RADS estimates was moderate, with κ values of 0.57 (95% CI: 0.55, 0.59) and 0.46 (95% CI: 0.44, 0.47), respectively. Differences of up to 14% in dense tissue classification were found, with Volpara classifying 51% of women as having dense breasts, Quantra classifying 37%, and clinical BI-RADS assessment classifying 43%. 
Clinical and automated measures showed similar breast cancer associations; odds ratios for extremely dense breasts versus scattered fibroglandular densities were 1.8 (95% CI: 1.5, 2.2), 1.9 (95% CI: 1.5, 2.5), and 2.3 (95% CI: 1.9, 2.8) for Volpara, Quantra, and BI-RADS classifications, respectively. Clinical BI-RADS assessment showed better discrimination of case status (C = 0.60; 95% CI: 0.58, 0.61) than did Volpara (C = 0.58; 95% CI: 0.56, 0.59) and Quantra (C = 0.56; 95% CI: 0.54, 0.58) BI-RADS classifications. Conclusion Automated and clinical assessments of breast density are similarly associated with breast cancer risk but differ up to 14% in the classification of women with dense breasts. This could have substantial effects on clinical practice patterns. © RSNA, 2015 Online supplemental material is available for this article. PMID:26694052
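
    Weighted κ agreement of the kind reported above can be computed directly; the four-category ratings below are invented, and the linear-weight convention is one standard choice, so this is a sketch rather than the study's code.

```python
import numpy as np

def weighted_kappa(r1, r2, k):
    """Cohen's linearly weighted kappa for integer ratings in 0..k-1."""
    conf = np.zeros((k, k))
    for a, b in zip(r1, r2):
        conf[a, b] += 1.0
    conf /= conf.sum()                      # joint rating proportions
    i, j = np.indices((k, k))
    w = np.abs(i - j)                       # linear disagreement weights
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    return 1.0 - (w * conf).sum() / (w * expected).sum()

# Hypothetical BI-RADS density categories (a-d coded 0-3) from a
# clinical reader and an automated method
clinical = [0, 1, 2, 3, 2, 1, 1, 3, 2, 0]
automated = [0, 1, 1, 3, 2, 2, 1, 2, 2, 0]
kappa = weighted_kappa(clinical, automated, 4)
```

    Weighting partial disagreements by their distance in categories is what makes κ appropriate for ordered scales such as BI-RADS density.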

  20. Burst temperature from conditional analysis in Texas Helimak and TCABR tokamak

    NASA Astrophysics Data System (ADS)

    Pereira, F. A. C.; Hernandez, W. A.; Toufen, D. L.; Guimarães-Filho, Z. O.; Caldas, I. L.; Gentle, K. W.

    2018-04-01

    The procedure to estimate the average local temperature, density, and plasma potential by conditionally selecting points of the Langmuir probe characteristic curve is revised and applied to the study of intermittent bursts in the Texas Helimak and TCABR tokamak. The improvements made allow us to distinguish the burst temperature from the turbulent background and to study burst propagation. Thus, in Texas Helimak, we identify important differences with respect to the burst temperature measured in the top and the bottom regions of the machine. While in the bottom region the burst temperatures are almost equal to the background, the bursts in the top region are hotter than the background with the temperature peak clearly shifted with respect to the density one. On the other hand, in the TCABR tokamak, we found that there is a temperature peak simultaneously with the density one. Moreover, the radial profile of bursts in the top region of Helimak and in the edge and scrape-off layer regions of TCABR shows that in both machines, there are spatial regions where the relative difference between the burst and the background temperatures is significant: up to 25% in Texas Helimak and around 50% in TCABR. However, in Texas Helimak, there are also regions where these temperatures are almost the same.

  1. Estimating canopy bulk density and canopy base height for conifer stands in the interior Western United States using the Forest Vegetation Simulator Fire and Fuels Extension.

    Treesearch

    Seth Ex; Frederick Smith; Tara Keyser; Stephanie Rebain

    2017-01-01

    The Forest Vegetation Simulator Fire and Fuels Extension (FFE-FVS) is often used to estimate canopy bulk density (CBD) and canopy base height (CBH), which are key indicators of crown fire hazard for conifer stands in the Western United States. Estimated CBD from FFE-FVS is calculated as the maximum 4 m running mean bulk density of predefined 0.3 m thick canopy layers (...
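
    The running-mean rule described above can be sketched as follows; the layer profile is hypothetical, and the real FFE-FVS calculation includes details (crown biomass allometry, species handling) not reproduced here.

```python
import numpy as np

def canopy_bulk_density(layer_bd, layer_thickness=0.3, window_m=4.0):
    """CBD as the maximum running mean of per-layer bulk density over a
    4 m tall moving window of 0.3 m thick canopy layers."""
    n = int(round(window_m / layer_thickness))  # layers per window (~13)
    layer_bd = np.asarray(layer_bd, dtype=float)
    if layer_bd.size < n:
        return float(layer_bd.mean())
    running = np.convolve(layer_bd, np.ones(n) / n, mode="valid")
    return float(running.max())

# Hypothetical profile: bare trunk, a 6 m deep crown at 0.08 kg/m^3, air
profile = np.concatenate([np.zeros(10), np.full(20, 0.08), np.zeros(10)])
cbd = canopy_bulk_density(profile)
```

    The 4 m running mean smooths out thin dense layers, so CBD reflects fuel that is contiguous enough to sustain crown fire spread.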

  2. Control algorithms for aerobraking in the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Ward, Donald T.; Shipley, Buford W., Jr.

    1991-01-01

    The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.

  3. Evaluating sampling strategies for larval cisco (Coregonus artedi)

    USGS Publications Warehouse

    Myers, J.T.; Stockwell, J.D.; Yule, D.L.; Black, J.A.

    2008-01-01

    To improve our ability to assess larval cisco (Coregonus artedi) populations in Lake Superior, we conducted a study to compare several sampling strategies. First, we compared density estimates of larval cisco concurrently captured in surface waters with a 2 x 1-m paired neuston net and a 0.5-m (diameter) conical net. Density estimates obtained from the two gear types were not significantly different, suggesting that the conical net is a reasonable alternative to the more cumbersome and costly neuston net. Next, we assessed the effect of tow pattern (sinusoidal versus straight tows) to examine if propeller wash affected larval density. We found no effect of propeller wash on the catchability of larval cisco. Given the availability of global positioning systems, we recommend sampling larval cisco using straight tows to simplify protocols and facilitate straightforward measurements of volume filtered. Finally, we investigated potential trends in larval cisco density estimates by sampling four time periods during the light period of a day at individual sites. Our results indicate no significant trends in larval density estimates during the day. We conclude estimates of larval cisco density across space are not confounded by time at a daily timescale. Well-designed, cost effective surveys of larval cisco abundance will help to further our understanding of this important Great Lakes forage species.

  4. A mass-density model can account for the size-weight illusion.

    PubMed

    Wolf, Christian; Bergmann Tiest, Wouter M; Drewing, Knut

    2018-01-01

    When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: One estimate derived from the object's mass, and the other from the object's density, with estimates' weights based on their relative reliabilities. While information about mass can directly be perceived, information about density will in some cases first have to be derived from mass and volume. However, according to our model at the crucial perceptual level, heaviness judgments will be biased by the objects' density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: Objects of the same density were perceived as more similar and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced choice heaviness experiment, we replicated that the illusion strength increased with the quality of volume information (Experiment 3). 
Overall, the results highly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
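
    The reliability-weighted averaging at the core of the model can be sketched in its standard uncorrelated form (the full model adds correlated noise between the two estimates; all numbers below are invented):

```python
def combine_cues(est_mass, var_mass, est_density, var_density):
    """Maximum-likelihood combination of two heaviness estimates:
    weights are the cues' relative reliabilities (inverse variances)."""
    r_mass, r_dens = 1.0 / var_mass, 1.0 / var_density
    w = r_mass / (r_mass + r_dens)
    combined = w * est_mass + (1.0 - w) * est_density
    combined_var = 1.0 / (r_mass + r_dens)  # never worse than either cue
    return combined, combined_var

# Better volume information -> a more reliable density cue -> the percept
# is pulled further toward the density estimate (the illusion strengthens)
h_poor, _ = combine_cues(1.0, 0.1, 2.0, 1.0)  # vague volume information
h_good, _ = combine_cues(1.0, 0.1, 2.0, 0.1)  # clear volume information
```

    This reproduces the qualitative pattern in Experiments 1-3: as volume information improves, heaviness judgments shift toward the object's density.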

  5. The Principle of Energetic Consistency

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.

    2009-01-01

    A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. 
The principle of energetic consistency implies that, to precisely the extent that growing modes are important in data assimilation, this term is also important.
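
    The bookkeeping behind the principle is the identity E[xᵀx] = mᵀm + tr(P), with m the conditional mean and P the conditional covariance; it is easy to verify on a synthetic ensemble (this sketch assumes nothing about any particular assimilation system).

```python
import numpy as np

rng = np.random.default_rng(2)
m = np.array([1.0, -2.0, 0.5])               # "conditional mean"
P = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])              # "conditional covariance"
ens = rng.multivariate_normal(m, P, size=200_000)

total_energy = np.mean(np.sum(ens ** 2, axis=1))  # ensemble E[x^T x]
mean_energy_plus_variance = m @ m + np.trace(P)   # m^T m + tr(P) = 8.75
```

    A filter that spuriously loses total variance (for example through numerical dissipation) breaks this equality, which is the diagnostic the principle suggests.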

  6. A bottom up approach to on-road CO2 emissions estimates: improved spatial accuracy and applications for regional planning.

    PubMed

    Gately, Conor K; Hutyra, Lucy R; Wing, Ian Sue; Brondfield, Max N

    2013-03-05

    On-road transportation is responsible for 28% of all U.S. fossil-fuel CO2 emissions. Mapping vehicle emissions at regional scales is challenging due to data limitations. Existing emission inventories use spatial proxies such as population and road density to downscale national or state-level data. Such procedures introduce errors where the proxy variables and actual emissions are weakly correlated, and limit analysis of the relationship between emissions and demographic trends at local scales. We develop an on-road emission inventory product for Massachusetts based on roadway-level traffic data obtained from the Highway Performance Monitoring System (HPMS). We provide annual estimates of on-road CO2 emissions at a 1 × 1 km grid scale for the years 1980 through 2008. We compared our results with on-road emissions estimates from the Emissions Database for Global Atmospheric Research (EDGAR), with the Vulcan Product, and with estimates derived from state fuel consumption statistics reported by the Federal Highway Administration (FHWA). Our model differs from FHWA estimates by less than 8.5% on average, and is within 5% of Vulcan estimates. We found that EDGAR estimates systematically exceed FHWA estimates by an average of 22.8%. Panel regression analysis of per-mile CO2 emissions on population density at the town scale shows a statistically significant correlation that varies systematically in sign and magnitude as population density increases. Population density has a positive correlation with per-mile CO2 emissions for densities below 2000 persons km(-2), above which increasing density correlates negatively with per-mile emissions.

  7. Shape fabric development in rigid clast populations under pure shear: The influence of no-slip versus slip boundary conditions

    NASA Astrophysics Data System (ADS)

    Mulchrone, Kieran F.; Meere, Patrick A.

    2015-09-01

    Shape fabrics of elliptical objects in rocks are usually assumed to develop by passive behavior of inclusions with respect to the surrounding material, leading to shape-based strain analysis methods belonging to the Rf/ϕ family. A probability density function is derived for the orientational characteristics of populations of rigid ellipses deforming in a pure shear 2D deformation with both no-slip and slip boundary conditions. Using maximum likelihood, a numerical method is developed for estimating finite strain in natural populations deforming under both mechanisms. Application to a natural example indicates the importance of the slip mechanism in explaining clast shape fabrics in deformed sediments.

  8. Finite linear diffusion model for design of overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. The model has been experimentally verified using 1,1′-dimethylferrocene as a redox additive. The theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
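    For steady-state finite linear diffusion of a redox shuttle confined between two electrodes, the diffusion-limited current density is the standard thin-layer result i = nFDC/d. A sketch of the maximum-overcharge-rate estimate this permits; the shuttle concentration, diffusion coefficient, and gap below are invented for illustration, not values from the paper:

```python
# Steady-state diffusion-limited current density for a redox shuttle
# between electrodes (finite linear diffusion): i = n * F * D * C / d.
F = 96485.0  # Faraday constant, C/mol

def max_overcharge_current_density(n, D_cm2_s, C_mol_cm3, d_cm):
    """Maximum sustainable overcharge current density in A/cm^2."""
    return n * F * D_cm2_s * C_mol_cm3 / d_cm

# Hypothetical numbers: 1-electron shuttle, D = 1e-5 cm^2/s,
# 0.1 M additive (1e-4 mol/cm^3), 25 um interelectrode gap (2.5e-3 cm).
i_max = max_overcharge_current_density(1, 1e-5, 1e-4, 2.5e-3)
print(f"{i_max * 1000:.1f} mA/cm^2")  # ~38.6 mA/cm^2
```

    Overcharge rates below this current density can be absorbed by the shuttle at steady state; anything above it drives the cell voltage up.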

  9. Communication theory of quantum systems. Ph.D. Thesis, 1970

    NASA Technical Reports Server (NTRS)

    Yuen, H. P. H.

    1971-01-01

    Communication theory problems incorporating quantum effects for optical-frequency applications are discussed. Under suitable conditions, a unique quantum channel model corresponding to a given classical space-time varying linear random channel is established. A procedure is described by which a proper density-operator representation applicable to any receiver configuration can be constructed directly from the channel output field. Some examples illustrating the application of our methods to the development of optical quantum channel representations are given. Optimizations of communication system performance under different criteria are considered. In particular, certain necessary and sufficient conditions on the optimal detector in M-ary quantum signal detection are derived. Some examples are presented. Parameter estimation and channel capacity are discussed briefly.

  10. Estimating detection and density of the Andean cat in the high Andes

    USGS Publications Warehouse

    Reppucci, J.; Gardner, B.; Lucherini, M.

    2011-01-01

    The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October-December 2006 and April-June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture-recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74-0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species. © 2011 American Society of Mammalogists.

  12. Multi-channel Ice Penetrating Radar Traverse for Estimates of Firn Density in the Percolation Zone, Western Greenland Ice Sheet

    NASA Astrophysics Data System (ADS)

    Meehan, T.; Osterberg, E. C.; Lewis, G.; Overly, T. B.; Hawley, R. L.; Bradford, J.; Marshall, H. P.

    2016-12-01

    To better predict the response of the Greenland Ice Sheet (GrIS) to future warming, leading-edge Regional Climate Models (RCMs) must be calibrated with in situ measurements of recent accumulation and melt. Mass balance estimates averaged across the entire GrIS vary between models by more than 30 percent, and regional comparisons of mass balance reconstructions in Greenland vary by 100 percent or more. The Greenland Traverse for Accumulation and Climate Studies (GreenTrACS) is a multi-year, multi-disciplinary 1700 km science traverse from Raven/Dye2 in SW Greenland to Summit Station. Multi-offset radar measurements can provide high-accuracy electromagnetic (EM) velocity estimates of the firn to within ±0.002 to 0.003 m/ns. EM velocity, in turn, can be used to estimate bulk firn density: using a mixing equation such as the CRIM equation, the measured EM velocity, together with the known EM velocities in air and ice, yields bulk density. During spring 2016, we used a multi-channel 500 MHz radar in a multi-offset configuration to survey more than 800 km from Raven towards Summit. Preliminary radar-derived snow density estimates agree with density estimates from a firn core measurement (within ~50 kg/m3), despite the lateral heterogeneity of the firn across the length of the antenna array (12 m).
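    For a two-phase air/ice firn, the CRIM (complex refractive index method) rule averages slowness: 1/v_firn = φ/v_air + (1 − φ)/v_ice, where φ is the air fraction, from which bulk density follows. A minimal sketch under that two-phase assumption; the measured velocity below is illustrative, not a GreenTrACS value:

```python
V_AIR = 0.300    # EM velocity in air, m/ns
V_ICE = 0.169    # EM velocity in solid ice, m/ns (eps_ice ~ 3.15)
RHO_ICE = 917.0  # density of solid ice, kg/m^3

def firn_density_from_velocity(v_firn):
    """Bulk firn density (kg/m^3) from measured EM velocity via CRIM.

    CRIM for an air/ice mixture averages slowness:
        1/v_firn = phi/V_AIR + (1 - phi)/V_ICE
    Solve for the air fraction phi, then rho = (1 - phi) * RHO_ICE.
    """
    phi = (1.0 / v_firn - 1.0 / V_ICE) / (1.0 / V_AIR - 1.0 / V_ICE)
    return (1.0 - phi) * RHO_ICE

# Illustrative measured velocity for percolation-zone firn:
rho = firn_density_from_velocity(0.23)  # ~360 kg/m^3
```

    The limiting cases behave sensibly: a measured velocity equal to V_AIR gives zero density, and one equal to V_ICE gives the density of solid ice.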

  13. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.

  14. Modeling residence-time distribution in horizontal screw hydrolysis reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sievers, David A.; Stickel, Jonathan J.

    The dilute-acid thermochemical hydrolysis step used in the production of liquid fuels from lignocellulosic biomass requires precise residence-time control to achieve high monomeric sugar yields. Difficulty has been encountered reproducing residence times and yields when small batch reaction conditions are scaled up to larger pilot-scale horizontal auger-tube type continuous reactors. A commonly used naive model estimated residence times of 6.2-16.7 min, but measured mean times were actually 1.4-2.2 times the estimates. Here, this study investigated how reactor residence-time distribution (RTD) is affected by reactor characteristics and operational conditions, and developed a method to accurately predict the RTD based on key parameters. Screw speed, reactor physical dimensions, throughput rate, and process material density were identified as major factors affecting both the mean and standard deviation of RTDs. The general shape of RTDs was consistent with a constant value determined for skewness. The Peclet number quantified reactor plug-flow performance, which ranged between 20 and 357.

  15. Modeling residence-time distribution in horizontal screw hydrolysis reactors

    DOE PAGES

    Sievers, David A.; Stickel, Jonathan J.

    2017-10-12

    The dilute-acid thermochemical hydrolysis step used in the production of liquid fuels from lignocellulosic biomass requires precise residence-time control to achieve high monomeric sugar yields. Difficulty has been encountered reproducing residence times and yields when small batch reaction conditions are scaled up to larger pilot-scale horizontal auger-tube type continuous reactors. A commonly used naive model estimated residence times of 6.2-16.7 min, but measured mean times were actually 1.4-2.2 times the estimates. Here, this study investigated how reactor residence-time distribution (RTD) is affected by reactor characteristics and operational conditions, and developed a method to accurately predict the RTD based on key parameters. Screw speed, reactor physical dimensions, throughput rate, and process material density were identified as major factors affecting both the mean and standard deviation of RTDs. The general shape of RTDs was consistent with a constant value determined for skewness. The Peclet number quantified reactor plug-flow performance, which ranged between 20 and 357.
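    The RTD statistics involved here (mean residence time, variance, and a Peclet number from the dimensionless variance) can be computed directly from a tracer pulse response. A sketch using made-up tracer data and the large-Pe axial-dispersion approximation σ_θ² ≈ 2/Pe; this is a generic textbook relation, not the paper's fitted model:

```python
# RTD moments from a tracer pulse response, and a Peclet number via the
# near-plug-flow axial-dispersion approximation sigma_theta^2 ~= 2/Pe.
# Times and concentrations below are invented for illustration.
times = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]  # min
conc = [0.0, 0.1, 0.6, 1.0, 0.8, 0.45, 0.2, 0.05, 0.0]     # arbitrary units

def trapz(ys, xs):
    """Trapezoid-rule integral of ys over xs."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2
               for i in range(len(xs) - 1))

area = trapz(conc, times)
E = [c / area for c in conc]  # normalized RTD: E(t) = C(t) / int C dt

t_mean = trapz([t * e for t, e in zip(times, E)], times)
var = trapz([(t - t_mean) ** 2 * e for t, e in zip(times, E)], times)

sigma_theta2 = var / t_mean ** 2  # dimensionless variance
peclet = 2.0 / sigma_theta2      # valid for near-plug-flow (large Pe)
```

    High Pe (narrow RTD) means the reactor behaves close to plug flow, which is what tight residence-time control requires.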

  16. Analysis of redox additive-based overcharge protection for rechargeable lithium batteries

    NASA Technical Reports Server (NTRS)

    Narayanan, S. R.; Surampudi, S.; Attia, A. I.; Bankston, C. P.

    1991-01-01

    The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. Digital simulation of the overcharge experiment leads to numerical representation of the potential transients, and an estimate of the influence of diffusion coefficient and interelectrode distance on the transient attainment of the steady state during overcharge. The model has been experimentally verified using 1,1′-dimethylferrocene as a redox additive. The analysis of the experimental results in terms of the theory allows the calculation of the diffusion coefficient and the formal potential of the redox couple. The model and the theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.

  17. Effects of visual cues of object density on perception and anticipatory control of dexterous manipulation.

    PubMed

    Crajé, Céline; Santello, Marco; Gordon, Andrew M

    2013-01-01

    Anticipatory force planning during grasping is based on visual cues about the object's physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object's center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object's center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object's CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.
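    The compensatory moment the fingertips must generate at lift-off is simply the object's weight times the horizontal offset of its center of mass from the grip axis. A toy calculation with hypothetical dimensions, not the study's objects:

```python
G = 9.81  # gravitational acceleration, m/s^2

def compensatory_moment(mass_kg, cm_offset_m):
    """Moment (N*m) the grip must generate to keep the object level.

    cm_offset_m is the horizontal distance from the grip axis to the
    object's center of mass; zero offset needs no compensatory moment.
    """
    return mass_kg * G * cm_offset_m

# Hypothetical 0.4 kg object with its CM 3 cm off the grip axis:
m_req = compensatory_moment(0.4, 0.03)  # ~0.118 N*m
```

    Failing to generate this moment before lift-off is what produces the object tilt observed on initial lifts.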

  18. Residential traffic density and childhood leukemia risk.

    PubMed

    Von Behren, Julie; Reynolds, Peggy; Gunier, Robert B; Rull, Rudolph P; Hertz, Andrew; Urayama, Kevin Y; Kronish, Daniel; Buffler, Patricia A

    2008-09-01

    Exposures to carcinogenic compounds from vehicle exhaust may increase childhood leukemia risk, and the timing of this exposure may be important. We examined the association between traffic density and childhood leukemia risk for three time periods: birth, time of diagnosis, and lifetime average, based on complete residential history in a case-control study. Cases were rapidly ascertained from participating hospitals in northern and central California between 1995 and 2002. Controls were selected from birth records, individually matched on age, sex, race, and Hispanic ethnicity. Traffic density was calculated by estimating total vehicle miles traveled per square mile within a 500-foot (152 meter) radius area around each address. We used conditional logistic regression analyses to account for matching factors and to adjust for household income. We included 310 cases of acute lymphocytic leukemias (ALL) and 396 controls in our analysis. The odds ratio for ALL and residential traffic density above the 75th percentile, compared with subjects with zero traffic density, was 1.17 [95% confidence interval (95% CI), 0.76-1.81] for residence at diagnosis and 1.11 (95% CI, 0.70-1.78) for the residence at birth. For average lifetime traffic density, the odds ratio was 1.24 (95% CI, 0.74-2.08) for the highest exposure category. Living in areas of high traffic density during any of the exposure time periods was not associated with increased risk of childhood ALL in this study.
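    The traffic-density metric described above (total vehicle miles traveled per square mile within a 500-foot buffer around each address) can be sketched as follows; the road-segment data and the per-segment representation are hypothetical, not the study's GIS pipeline:

```python
import math

BUFFER_RADIUS_MI = 500.0 / 5280.0  # 500 ft expressed in miles
BUFFER_AREA_SQMI = math.pi * BUFFER_RADIUS_MI ** 2

def traffic_density(segments):
    """Vehicle miles traveled per square mile around a residence.

    `segments` holds (daily_vehicle_count, miles_inside_buffer) pairs,
    one per road segment intersecting the 500-ft buffer.
    """
    vmt = sum(count * miles for count, miles in segments)
    return vmt / BUFFER_AREA_SQMI

# Hypothetical residence near two roads:
d = traffic_density([(12_000, 0.10), (3_000, 0.15)])
```

    Dividing by the fixed buffer area makes the metric comparable across residences regardless of how many segments fall inside the buffer.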

  19. Non-Fickian dispersion of groundwater age

    PubMed Central

    Engdahl, Nicholas B.; Ginn, Timothy R.; Fogg, Graham E.

    2014-01-01

    We expand the governing equation of groundwater age to account for non-Fickian dispersive fluxes using continuous random walks. Groundwater age is included as an additional (fifth) dimension on which the volumetric mass density of water is distributed and we follow the classical random walk derivation now in five dimensions. The general solution of the random walk recovers the previous conventional model of age when the low order moments of the transition density functions remain finite at their limits and describes non-Fickian age distributions when the transition densities diverge. Previously published transition densities are then used to show how the added dimension in age affects the governing differential equations. Depending on which transition densities diverge, the resulting models may be nonlocal in time, space, or age and can describe asymptotic or pre-asymptotic dispersion. A joint distribution function of time and age transitions is developed as a conditional probability and a natural result of this is that time and age must always have identical transition rate functions. This implies that a transition density defined for age can substitute for a density in time and this has implications for transport model parameter estimation. We present examples of simulated age distributions from a geologically based, heterogeneous domain that exhibit non-Fickian behavior and show that the non-Fickian model provides better descriptions of the distributions than the Fickian model. PMID:24976651

  20. Individual differences in transcranial electrical stimulation current density

    PubMed Central

    Russell, Michael J; Goodman, Theodore; Pierson, Ronald; Shepherd, Shane; Wang, Qiang; Groshong, Bennett; Wiley, David F

    2013-01-01

    Transcranial electrical stimulation (TCES) is effective in treating many conditions, but it has not been possible to accurately forecast current density within the complex anatomy of a given subject's head. We sought to predict and verify TCES current densities and determine the variability of these current distributions in patient-specific models based on magnetic resonance imaging (MRI) data. Two experiments were performed. The first experiment estimated conductivity from MRIs and compared the current density results against actual measurements from the scalp surface of 3 subjects. In the second experiment, virtual electrodes were placed on the scalps of 18 subjects to model simulated current densities with 2 mA of virtually applied stimulation. This procedure was repeated for 4 electrode locations. Current densities were then calculated for 75 brain regions. Comparison of modeled and measured external current in experiment 1 yielded a correlation of r = .93. In experiment 2, modeled individual differences were greatest near the electrodes (ten-fold differences were common), but simulated current was found in all regions of the brain. Sites that were distant from the electrodes (e.g. hypothalamus) typically showed two-fold individual differences. MRI-based modeling can effectively predict current densities in individual brains. Significant variation occurs between subjects with the same applied electrode configuration. Individualized MRI-based modeling should be considered in place of the 10-20 system when accurate TCES is needed. PMID:24285948

  1. An analysis of OH excited state absorption lines in DR 21 and K3-50

    NASA Astrophysics Data System (ADS)

    Jones, K. N.; Doel, R. C.; Field, D.; Gray, M. D.; Walker, R. N. F.

    1992-10-01

    We present an analysis of the OH absorption line zones observed toward the compact H II regions DR 21 and K3-50. Using as parameters the kinetic and dust temperatures, the H2 number density and the ratio of OH-H2 number densities to the velocity gradient, the model quantitatively reproduces the absorption line data for the six main line transitions in ²Π₃/₂ J = 5/2, 7/2, and 9/2. Observed upper limits for the absorption or emission in the satellite lines of ²Π₃/₂ J = 5/2 are crucial in constraining the range of derived parameters. Physical conditions derived for DR 21 show that the kinetic temperature centers around 140 K, the H2 number density around 10^7 cm^-3, and that the OH column density in the excited-state absorption zone lies between 1 × 10^15 cm^-2 and 2 × 10^15 cm^-2. Including contributions from a J = 3/2 absorption zone, the total OH column density is more than a factor of 2 lower than estimates based upon LTE (Walmsley et al., 1986). The OH absorption zone in K3-50 tends toward higher density and displays a larger column density, while the kinetic temperature is similar. For both sources, the dust temperature is found to be significantly lower than the kinetic temperature.

  2. Developing a bubble number-density paleoclimatic indicator for glacier ice

    USGS Publications Warehouse

    Spencer, M.K.; Alley, R.B.; Fitzpatrick, J.J.

    2006-01-01

    Past accumulation rate can be estimated from the measured number-density of bubbles in an ice core and the reconstructed paleotemperature, using a new technique. Density increase and grain growth in polar firn are both controlled by temperature and accumulation rate, and the integrated effects are recorded in the number-density of bubbles as the firn changes to ice. An empirical model of these processes, optimized to fit published data on recently formed bubbles, reconstructs accumulation rates using recent temperatures with an uncertainty of 41% (P < 0.05). For modern sites considered here, no statistically significant trend exists between mean annual temperature and the ratio of bubble number-density to grain number-density at the time of pore close-off; optimum modeled accumulation-rate estimates require an eventual ~2.02 ± 0.08 (P < 0.05) bubbles per close-off grain. Bubble number-density in the GRIP (Greenland) ice core is qualitatively consistent with independent estimates for a combined temperature decrease and accumulation-rate increase there during the last 5 kyr.

  3. A method to estimate statistical errors of properties derived from charge-density modelling

    PubMed Central

    Lecomte, Claude

    2018-01-01

    Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated in order to respect the variance–covariance matrix obtained from the least-squares refinement. This ‘SSD methodology’ procedure can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical point coordinates, electron density, Laplacian and ellipticity at critical points and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are also available now through this procedure. The method is exemplified with the charge density of compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
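    The SSD idea amounts to drawing parameter sets consistent with the refinement's variance–covariance matrix, re-deriving the property for each draw, and taking the sample standard deviation. A generic two-parameter sketch; the property function, parameter values, and covariance matrix below are invented for illustration and have nothing to do with MoPro's internals:

```python
import math
import random

random.seed(42)

# Invented 2x2 variance-covariance matrix from a least-squares refinement,
# and the refined parameter values.
cov = [[4.0e-4, 1.0e-4],
       [1.0e-4, 9.0e-4]]
p_hat = [1.20, 0.35]

def derived_property(p):
    # Any scalar property computed from the model parameters (invented form).
    return p[0] ** 2 + math.exp(p[1])

# Closed-form Cholesky factor of the 2x2 covariance, so that correlated
# parameter shifts are L @ z with z ~ N(0, I).
l11 = math.sqrt(cov[0][0])
l21 = cov[1][0] / l11
l22 = math.sqrt(cov[1][1] - l21 ** 2)

values = []
for _ in range(20_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    p = [p_hat[0] + l11 * z1, p_hat[1] + l21 * z1 + l22 * z2]
    values.append(derived_property(p))

mean = sum(values) / len(values)
ssd = math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))
```

    For a nearly linear property, `ssd` reproduces first-order error propagation through the covariance matrix; its advantage is that it works unchanged for properties with no convenient analytic gradient.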

  4. Influence of tree size, taxonomy, and edaphic conditions on heart rot in mixed-dipterocarp Bornean rainforests: implications for aboveground biomass estimates

    NASA Astrophysics Data System (ADS)

    Heineman, K. D.; Russo, S. E.; Baillie, I. C.; Mamit, J. D.; Chai, P. P.-K.; Chai, L.; Hindley, E. W.; Lau, B.-T.; Tan, S.; Ashton, P. S.

    2015-05-01

    Fungal decay of heartwood creates hollows and areas of reduced wood density within the stems of living trees known as heart rot. Although heart rot is acknowledged as a source of error in forest aboveground biomass estimates, there are few datasets available to evaluate the environmental controls over heart rot infection and severity in tropical forests. Using legacy and recent data from drilled, felled, and cored stems in mixed dipterocarp forests in Sarawak, Malaysian Borneo, we quantified the frequency and severity of heart rot, and used generalized linear mixed effect models to characterize the association of heart rot with tree size, wood density, taxonomy, and edaphic conditions. Heart rot was detected in 55% of felled stems > 30 cm DBH, while the detection frequency was lower for stems of the same size evaluated by non-destructive drilling (45%) and coring (23%) methods. Heart rot severity, defined as the percent stem volume lost in infected stems, ranged widely from 0.1-82.8%. Tree taxonomy explained the greatest proportion of variance in heart rot frequency and severity among the fixed and random effects evaluated in our models. Heart rot frequency, but not severity, increased sharply with tree diameter, ranging from 56% infection across all datasets in stems > 50 cm DBH to 11% in trees 10-30 cm DBH. The frequency and severity of heart rot increased significantly in soils with low pH and cation concentrations in topsoil, and heart rot was more common in tree species associated with dystrophic sandy soils than with nutrient-rich clays. When scaled to forest stands, the percent of stem biomass lost to heart rot varied significantly with soil properties, and we estimate that 7% of the forest biomass is in some stage of heart rot decay. 
This study demonstrates not only that heart rot is a significant source of error in forest carbon estimates, but also that it strongly covaries with soil resources, underscoring the need to account for edaphic variation in estimating carbon storage in tropical forests.

  5. Critical thresholds in sea lice epidemics: evidence, sensitivity and subcritical estimation

    PubMed Central

    Frazer, L. Neil; Morton, Alexandra; Krkošek, Martin

    2012-01-01

    Host density thresholds are a fundamental component of the population dynamics of pathogens, but empirical evidence and estimates are lacking. We studied host density thresholds in the dynamics of ectoparasitic sea lice (Lepeophtheirus salmonis) on salmon farms. Empirical examples include a 1994 epidemic in Atlantic Canada and a 2001 epidemic in Pacific Canada. A mathematical model suggests dynamics of lice are governed by a stable endemic equilibrium until the critical host density threshold drops owing to environmental change, or is exceeded by stocking, causing epidemics that require rapid harvest or treatment. Sensitivity analysis of the critical threshold suggests variation in dependence on biotic parameters and high sensitivity to temperature and salinity. We provide a method for estimating the critical threshold from parasite abundances at subcritical host densities and estimate the critical threshold and transmission coefficient for the two epidemics. Host density thresholds may be a fundamental component of disease dynamics in coastal seas where salmon farming occurs. PMID:22217721

  6. Hierarchical models for estimating density from DNA mark-recapture studies

    USGS Publications Warehouse

    Gardner, B.; Royle, J. Andrew; Wegan, M.T.

    2009-01-01

    Genetic sampling is increasingly used as a tool by wildlife biologists and managers to estimate abundance and density of species. Typically, DNA is used to identify individuals captured in an array of traps (e.g., baited hair snares) from which individual encounter histories are derived. Standard methods for estimating the size of a closed population can be applied to such data. However, due to the movement of individuals on and off the trapping array during sampling, the area over which individuals are exposed to trapping is unknown, and so obtaining unbiased estimates of density has proved difficult. We propose a hierarchical spatial capture-recapture model which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to (via movement) and detection by traps. Detection probability is modeled as a function of each individual's distance to the trap. We applied this model to a black bear (Ursus americanus) study conducted in 2006 using a hair-snare trap array in the Adirondack region of New York, USA. We estimated the density of bears to be 0.159 bears/km2, which is lower than the estimated density (0.410 bears/km2) based on standard closed population techniques. A Bayesian analysis of the model is fully implemented in the software program WinBUGS.
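    A common choice for the distance-dependent detection function in spatial capture-recapture is the half-normal form p(d) = p0·exp(−d²/2σ²); the sketch below uses that form with illustrative parameter values, not the estimates from this bear study:

```python
import math

def detection_prob(p0, sigma, trap_xy, activity_center_xy):
    """Half-normal detection: p0 at zero distance, decaying with distance.

    p0 is the baseline detection probability and sigma the spatial scale
    of individual movement; both values here are illustrative.
    """
    dx = trap_xy[0] - activity_center_xy[0]
    dy = trap_xy[1] - activity_center_xy[1]
    d2 = dx * dx + dy * dy
    return p0 * math.exp(-d2 / (2.0 * sigma ** 2))

# An individual's activity center close to vs. far from a trap at the origin:
p_near = detection_prob(0.1, 1.5, (0.0, 0.0), (0.5, 0.0))
p_far = detection_prob(0.1, 1.5, (0.0, 0.0), (4.0, 0.0))
```

    Because detection decays with distance from each individual's activity center, the model accounts for exposure to trapping without requiring an ad hoc effective sample area.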

  7. A hierarchical model for estimating density in camera-trap studies

    USGS Publications Warehouse

    Royle, J. Andrew; Nichols, James D.; Karanth, K.Ullas; Gopalaswamy, Arjun M.

    2009-01-01

    Estimating animal density using capture–recapture data from arrays of detection devices such as camera traps has been problematic due to the movement of individuals and heterogeneity in capture probability among them induced by differential exposure to trapping. We develop a spatial capture–recapture model for estimating density from camera-trapping data which contains explicit models for the spatial point process governing the distribution of individuals and their exposure to and detection by traps. We adopt a Bayesian approach to analysis of the hierarchical model using the technique of data augmentation. The model is applied to photographic capture–recapture data on tigers Panthera tigris in Nagarahole reserve, India. Using this model, we estimate the density of tigers to be 14.3 animals per 100 km2 during 2004. Synthesis and applications. Our modelling framework largely overcomes several weaknesses in conventional approaches to the estimation of animal density from trap arrays. It effectively deals with key problems such as individual heterogeneity in capture probabilities, movement of traps, presence of potential ‘holes’ in the array and ad hoc estimation of sample area. The formulation, thus, greatly enhances flexibility in the conduct of field surveys as well as in the analysis of data, from studies that may involve physical, photographic or DNA-based ‘captures’ of individual animals.

  8. It's what's inside that counts: egg contaminant concentrations are influenced by estimates of egg density, egg volume, and fresh egg mass.

    PubMed

    Herzog, Mark P; Ackerman, Joshua T; Eagles-Smith, Collin A; Hartman, C Alex

    2016-05-01

    In egg contaminant studies, it is necessary to calculate egg contaminant concentrations on a fresh wet weight basis, and this requires accurate estimates of egg density and egg volume. We show that the inclusion or exclusion of the eggshell can influence egg contaminant concentrations, and we provide estimates of egg density (both with and without the eggshell) and egg-shape coefficients (used to estimate egg volume from egg morphometrics) for American avocet (Recurvirostra americana), black-necked stilt (Himantopus mexicanus), and Forster's tern (Sterna forsteri). Egg densities (g/cm³) estimated for whole eggs (1.056 ± 0.003) were higher than egg densities estimated for egg contents (1.024 ± 0.001), and were 1.059 ± 0.001 and 1.025 ± 0.001 for avocets, 1.056 ± 0.001 and 1.023 ± 0.001 for stilts, and 1.053 ± 0.002 and 1.025 ± 0.002 for terns. The egg-shape coefficients for egg volume (Kv) and egg mass (Kw) also differed depending on whether the eggshell was included (Kv = 0.491 ± 0.001; Kw = 0.518 ± 0.001) or excluded (Kv = 0.493 ± 0.001; Kw = 0.505 ± 0.001), and varied among species. Although egg contaminant concentrations are rarely meant to include the eggshell, we show that the typical inclusion of the eggshell in egg density and egg volume estimates results in egg contaminant concentrations being underestimated by 6-13%. Our results demonstrate that the inclusion of the eggshell significantly influences estimates of egg density, egg volume, and fresh egg mass, which leads to egg contaminant concentrations that are biased low. We suggest that egg contaminant concentrations be calculated on a fresh wet weight basis using only internal egg-content densities, volumes, and masses appropriate for the species. For the three waterbirds in our study, these corrected coefficients are 1.024 ± 0.001 for egg density, 0.493 ± 0.001 for Kv, and 0.505 ± 0.001 for Kw.
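
    The egg-shape coefficients above follow the standard morphometric relations (volume V = Kv·L·B² and fresh mass W = Kw·L·B², so the implied density is simply Kw/Kv); a quick consistency check against the reported values, with the egg dimensions in the example purely illustrative:

```python
def egg_volume(L_cm, B_cm, Kv=0.493):
    """Egg volume (cm^3) from length L and breadth B: V = Kv * L * B^2."""
    return Kv * L_cm * B_cm**2

def fresh_egg_mass(L_cm, B_cm, Kw=0.505):
    """Fresh egg mass (g): W = Kw * L * B^2."""
    return Kw * L_cm * B_cm**2

# The implied density Kw/Kv is independent of egg size and should match
# the densities reported in the abstract:
whole_egg_density = 0.518 / 0.491  # eggshell included, ~1.056 g/cm^3
contents_density = 0.505 / 0.493   # eggshell excluded, ~1.024 g/cm^3

# Hypothetical avocet-sized egg (L = 5.0 cm, B = 3.5 cm), contents only:
V = egg_volume(5.0, 3.5)
W = fresh_egg_mass(5.0, 3.5)
```

The check confirms the abstract's internal consistency: the coefficient pairs with and without eggshell reproduce the reported whole-egg and egg-content densities.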

  9. Breast percent density estimation from 3D reconstructed digital breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Bakic, Predrag R.; Kontos, Despina; Carton, Ann-Katherine; Maidment, Andrew D. A.

    2008-03-01

    Breast density is an independent risk factor for breast cancer. In mammograms breast density is quantitatively measured as percent density (PD), the percentage of dense (non-fatty) tissue. To date, clinical estimates of PD have varied significantly, in part due to the projective nature of mammography. Digital breast tomosynthesis (DBT) is a 3D imaging modality in which cross-sectional images are reconstructed from a small number of projections acquired at different x-ray tube angles. Preliminary studies suggest that DBT is superior to mammography in tissue visualization, since superimposed anatomical structures present in mammograms are filtered out. We hypothesize that DBT could also provide a more accurate breast density estimation. In this paper, we propose to estimate PD from reconstructed DBT images using a semi-automated thresholding technique. Preprocessing is performed to exclude the image background and the area of the pectoral muscle. Threshold values are selected manually from a small number of reconstructed slices; a combination of these thresholds is applied to each slice throughout the entire reconstructed DBT volume. The proposed method was validated using images of women with recently detected abnormalities or with biopsy-proven cancers; only contralateral breasts were analyzed. The Pearson correlation and kappa coefficients between the breast density estimates from DBT and the corresponding digital mammogram indicate moderate agreement between the two modalities, comparable with our previous results from 2D DBT projections. Percent density appears to be a robust measure for breast density assessment in both 2D and 3D x-ray breast imaging modalities using thresholding.
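
    Once thresholds are chosen, the PD computation reduces to counting dense voxels within the segmented breast; a simplified sketch on a synthetic volume (in the actual method, thresholds are picked manually per slice and the background and pectoral muscle are excluded first):

```python
import numpy as np

def percent_density(volume, breast_mask, threshold):
    """PD = 100 * (dense voxels) / (breast voxels), computed over a
    reconstructed volume with non-breast regions masked out."""
    breast = volume[breast_mask]
    return 100.0 * np.count_nonzero(breast >= threshold) / breast.size

# Toy 3D volume with synthetic uniform intensities in [0, 1):
rng = np.random.default_rng(0)
vol = rng.uniform(0.0, 1.0, size=(4, 32, 32))
mask = np.ones_like(vol, dtype=bool)
pd = percent_density(vol, mask, threshold=0.7)
```

With uniform intensities and a threshold of 0.7, roughly 30% of voxels exceed the threshold, so pd lands near 30.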

  10. Time-dependent MHD simulations of the solar wind outflow using interplanetary scintillation observations

    DOE PAGES

    Kim, Tae K.; Pogorelov, Nikolai V.; Borovikov, Sergey N.; ...

    2012-11-20

    Numerical modeling of the heliosphere is a critical component of space weather forecasting. The accuracy of heliospheric models can be improved by using realistic boundary conditions and confirming the results with in situ spacecraft measurements. To accurately reproduce the solar wind (SW) plasma flow near Earth, we need realistic, time-dependent boundary conditions at a fixed distance from the Sun. We may prepare such boundary conditions using SW speed and density determined from interplanetary scintillation (IPS) observations, magnetic field derived from photospheric magnetograms, and temperature estimated from its correlation with SW speed. In conclusion, we present here the time-dependent MHD simulation results obtained by using the 2011 IPS data from the Solar-Terrestrial Environment Laboratory as time-varying inner boundary conditions and compare the simulated data at Earth with OMNI data (spacecraft-interspersed, near-Earth solar wind data).

  11. Estimates of Optimal Operating Conditions for Hydrogen-Oxygen Cesium-Seeded Magnetohydrodynamic Power Generator

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Nichols, L. D.

    1977-01-01

    The values of percent seed, oxygen-to-fuel ratio, combustion pressure, Mach number, and magnetic field strength that maximize either the electrical conductivity or the power density at the entrance of an MHD power generator were obtained. The working fluid is the combustion product of H2 and O2 seeded with CsOH. The ideal theoretical segmented Faraday generator is investigated, along with an empirical form found by correlating the data of many experimenters working with generators of different sizes, electrode configurations, and working fluids. The conductivity and power densities optimize at a seed fraction of 3.5 mole percent and an oxygen-to-hydrogen weight ratio of 7.5. The optimum values of combustion pressure and Mach number depend on the operating magnetic field strength.

  12. Parameter dependences of the separatrix density in nitrogen seeded ASDEX Upgrade H-mode discharges

    NASA Astrophysics Data System (ADS)

    Kallenbach, A.; Sun, H. J.; Eich, T.; Carralero, D.; Hobirk, J.; Scarabosio, A.; Siccinio, M.; ASDEX Upgrade Team; EUROfusion MST1 Team

    2018-04-01

    The upstream separatrix electron density is an important interface parameter for core performance and divertor power exhaust. It has been measured in ASDEX Upgrade H-mode discharges by means of Thomson scattering using a self-consistent estimate of the upstream electron temperature under the assumption of Spitzer-Härm electron conduction. Its dependence on various plasma parameters has been tested for different plasma conditions in H-mode. The leading parameter determining n_e,sep was found to be the neutral divertor pressure, which can be considered as an engineering parameter since it is determined mainly by the gas puff rate and the pumping speed. The experimentally found parameter dependence of n_e,sep, which is dominated by the divertor neutral pressure, could be approximately reconciled by 2-point modelling.
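
    The self-consistent upstream temperature estimate mentioned above rests on the standard two-point model with Spitzer-Härm parallel electron conduction; a sketch of that estimate, with the heat flux and connection length values purely illustrative rather than taken from the paper:

```python
def upstream_Te(q_par, L_con, kappa0=2000.0):
    """Two-point-model estimate of the upstream separatrix electron
    temperature (eV), assuming Spitzer-Haerm parallel conduction and a
    negligible target temperature:
        T_u = (7/2 * q_par * L_con / kappa0) ** (2/7)
    q_par: parallel heat flux (W/m^2); L_con: connection length (m);
    kappa0 ~ 2000 W/(m eV^{7/2}) is the Spitzer-Haerm constant."""
    return (3.5 * q_par * L_con / kappa0) ** (2.0 / 7.0)

# Illustrative tokamak-scale numbers (not ASDEX Upgrade measurements):
Te_sep = upstream_Te(q_par=1e8, L_con=30.0)
```

The weak 2/7 exponent is why the upstream temperature is so insensitive to the exhaust power, which in turn makes the separatrix density the more responsive interface parameter.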

  13. Improvement of a plasma uniformity of the 2nd ion source of KSTAR neutral beam injector.

    PubMed

    Jeong, S H; Kim, T S; Lee, K W; Chang, D H; In, S R; Bae, Y S

    2014-02-01

    The 2nd ion source of the KSTAR (Korea Superconducting Tokamak Advanced Research) NBI (Neutral Beam Injector) has been in development and operation since last year. A calorimetric analysis revealed that the heat load on the back plate of this ion source is higher than that of the 1st ion source of the KSTAR NBI, and the spatial plasma uniformity of the ion source is poor. We therefore sought to identify the factors affecting the uniformity of the plasma density and to improve it. We estimated the effects of the filament current direction and the magnetic field configuration of the plasma generator on the plasma uniformity. We also verified that the operating conditions of an ion source can change the uniformity of its plasma density.

  14. Impact of atmospheric effects on the energy reconstruction of air showers observed by the surface detectors of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Aab, A.; Abreu, P.; Aglietta, M.; Samarai, I. Al; Albuquerque, I. F. M.; Allekotte, I.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Anastasi, G. A.; Anchordoqui, L.; Andrada, B.; Andringa, S.; Aramo, C.; Arqueros, F.; Arsene, N.; Asorey, H.; Assis, P.; Aublin, J.; Avila, G.; Badescu, A. M.; Balaceanu, A.; Barreira Luz, R. J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertaina, M. E.; Bertou, X.; Biermann, P. L.; Billoir, P.; Biteau, J.; Blaess, S. G.; Blanco, A.; Blazek, J.; Bleve, C.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Borodai, N.; Botti, A. M.; Brack, J.; Brancus, I.; Bretz, T.; Bridgeman, A.; Briechle, F. L.; Buchholz, P.; Bueno, A.; Buitink, S.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, L.; Cancio, A.; Canfora, F.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Chavez, A. G.; Chinellato, J. A.; Chudoba, J.; Clay, R. W.; Colalillo, R.; Coleman, A.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Criss, A.; Cronin, J.; D'Amico, S.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Jong, S. J.; De Mauro, G.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; Debatin, J.; Deligny, O.; Di Giulio, C.; Di Matteo, A.; Díaz Castro, M. L.; Diogo, F.; Dobrigkeit, C.; D'Olivo, J. C.; dos Anjos, R. C.; Dova, M. T.; Dundovic, A.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Fick, B.; Figueira, J. M.; Filipčič, A.; Fratu, O.; Freire, M. M.; Fujii, T.; Fuster, A.; Gaior, R.; García, B.; Garcia-Pinto, D.; Gaté, F.; Gemmeke, H.; Gherghel-Lascu, A.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Głas, D.; Glaser, C.; Glass, H.; Golup, G.; Gómez Berisso, M.; Gómez Vitale, P. F.; González, N.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grillo, A. F.; Grubb, T. D.; Guarino, F.; Guedes, G. P.; Hampel, M. 
R.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Hasankiadeh, Q.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huege, T.; Hulsman, J.; Insolia, A.; Isar, P. G.; Jandt, I.; Jansen, S.; Johnsen, J. A.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Keilhauer, B.; Kemp, E.; Kemp, J.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Kuempel, D.; Kukec Mezek, G.; Kunka, N.; Kuotb Awad, A.; LaHurd, D.; Lauscher, M.; Lebrun, P.; Legumina, R.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; Lopes, L.; López, R.; López Casado, A.; Luce, Q.; Lucero, A.; Malacari, M.; Mallamaci, M.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Mariš, I. C.; Marsella, G.; Martello, D.; Martinez, H.; Martínez Bravo, O.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melo, D.; Menshikov, A.; Messina, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Mockler, D.; Mollerach, S.; Montanet, F.; Morello, C.; Mostafá, M.; Müller, A. L.; Müller, G.; Muller, M. A.; Müller, S.; Mussa, R.; Naranjo, I.; Nellen, L.; Neuser, J.; Nguyen, P. H.; Niculescu-Oglinzanu, M.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, H.; Núñez, L. A.; Ochilo, L.; Oikonomou, F.; Olinto, A.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Papenbreer, P.; Parente, G.; Parra, A.; Paul, T.; Pech, M.; Pedreira, F.; Pȩkala, J.; Pelayo, R.; Peña-Rodriguez, J.; Pereira, L. A. S.; Perlín, M.; Perrone, L.; Peters, C.; Petrera, S.; Phuntsok, J.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Quel, E. 
J.; Querchfeld, S.; Quinn, S.; Ramos-Pollan, R.; Rautenberg, J.; Ravignani, D.; Revenu, B.; Ridky, J.; Risse, M.; Ristori, P.; Rizi, V.; Rodrigues de Carvalho, W.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rogozin, D.; Roncoroni, M. J.; Roth, M.; Roulet, E.; Rovero, A. C.; Ruehl, P.; Saffi, S. J.; Saftoiu, A.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sanabria Gomez, J. D.; Sánchez, F.; Sanchez-Lucas, P.; Santos, E. M.; Santos, E.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sarmiento, C. A.; Sato, R.; Schauer, M.; Scherini, V.; Schieler, H.; Schimp, M.; Schmidt, D.; Scholten, O.; Schovánek, P.; Schröder, F. G.; Schulz, A.; Schulz, J.; Schumacher, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sigl, G.; Silli, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sonntag, S.; Sorokin, J.; Squartini, R.; Stanca, D.; Stanič, S.; Stasielak, J.; Stassi, P.; Strafella, F.; Suarez, F.; Suarez Durán, M.; Sudholz, T.; Suomijärvi, T.; Supanitsky, A. D.; Swain, J.; Szadkowski, Z.; Taboada, A.; Taborda, O. A.; Tapia, A.; Theodoro, V. M.; Timmermans, C.; Todero Peixoto, C. J.; Tomankova, L.; Tomé, B.; Torralba Elipe, G.; Torres Machado, D.; Torri, M.; Travnicek, P.; Trini, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van Bodegom, P.; van den Berg, A. M.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Vergara Quispe, I. D.; Verzi, V.; Vicha, J.; Villaseñor, L.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weindl, A.; Wiencke, L.; Wilczyński, H.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Yang, L.; Yelos, D.; Yushkov, A.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zepeda, A.; Zimmermann, B.; Ziolkowski, M.; Zong, Z.; Zuccarello, F.

    2017-02-01

    Atmospheric conditions, such as the pressure (P), temperature (T) or air density (ρ ∝ P/T), affect the development of extensive air showers initiated by energetic cosmic rays. We study the impact of atmospheric variations on the reconstruction of air showers with data from the arrays of surface detectors of the Pierre Auger Observatory, considering separately the array with a detector spacing of 1500 m and the one with 750 m spacing. We observe modulations in the event rates that are due to the influence of air density and pressure variations on the measured signals, from which the energy estimators are obtained. We show how the energy assignment can be corrected to account for such atmospheric effects.
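
    A correction of the kind described amounts, to first order, to dividing the measured signal by a weather-dependent modulation factor; a schematic sketch in which the functional form, reference values, and coefficients are placeholders for illustration, not the Auger collaboration's fitted values:

```python
def corrected_signal(S, rho, P, rho0=1.06, P0=862.0,
                     a_rho=-0.9, a_P=-5.4e-3):
    """Illustrative first-order atmospheric correction of a surface-
    detector signal S. The measured signal is assumed to be modulated by
        f = 1 + a_rho*(rho - rho0) + a_P*(P - P0),
    so dividing by f recovers the signal at reference conditions.
    rho: air density (kg/m^3); P: ground pressure (hPa). All constants
    here are placeholder assumptions."""
    f = 1.0 + a_rho * (rho - rho0) + a_P * (P - P0)
    return S / f

S_ref = corrected_signal(30.0, 1.06, 862.0)  # reference conditions: unchanged
S_low = corrected_signal(30.0, 1.00, 862.0)  # lower density, placeholder signs
```

The essential point is that the correction is applied to the energy estimator before energy assignment, so weather-driven modulations of the event rate are removed rather than misread as flux variations.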

  15. Nitride surface passivation of GaAs nanowires: impact on surface state density.

    PubMed

    Alekseev, Prokhor A; Dunaevskiy, Mikhail S; Ulin, Vladimir P; Lvova, Tatiana V; Filatov, Dmitriy O; Nezhdanov, Alexey V; Mashin, Aleksander I; Berkovits, Vladimir L

    2015-01-14

    Surface nitridation in hydrazine-sulfide solution, which is known to passivate the surfaces of GaAs crystals, was applied to GaAs nanowires (NWs). We studied the effect of nitridation on the conductivity and microphotoluminescence (μ-PL) of individual GaAs NWs using conductive atomic force microscopy (CAFM) and confocal luminescent microscopy (CLM), respectively. Nitridation is found to produce a substantial increase in both the NW conductivity and the μ-PL intensity, evidencing surface passivation. Estimates show that the nitride passivation reduces the surface state density by a factor of 6, which is of the same order as that found for GaAs/AlGaAs nanowires. The effects of the nitride passivation also remain stable under ambient atmospheric conditions for six months.

  16. Testing the consistency of wildlife data types before combining them: the case of camera traps and telemetry.

    PubMed

    Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A

    2014-04-01

    Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. 
However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
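
    In their simplest form, the generalized linear models relating camera detections to nearby telemetry relocations have a logistic structure; a sketch with illustrative placeholder coefficients, not the values fitted in the fisher study:

```python
import math

def camera_detection_prob(n_relocs, b0=-3.0, b1=0.25):
    """Logistic (GLM-style) model of per-occasion camera detection
    probability as a function of the number of telemetry relocations
    within a buffer (e.g., 250-500 m) of the camera. The coefficients
    b0, b1 are illustrative assumptions."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * n_relocs)))

# Detection probability rises with local telemetry-derived space use:
probs = [camera_detection_prob(n) for n in (0, 5, 10, 20)]
```

If telemetry utilization density predicts these probabilities well, as the study found, the two data types carry consistent information about space use and can reasonably be combined.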

  17. Estimating the Longwave Radiation Underneath the Forest Canopy in Snow-dominated Setting

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Kumar, M.; Link, T. E.

    2017-12-01

    Forest canopies alter incoming longwave radiation at the land surface, thus influencing snow cover energetics. The snow surface receives longwave radiation from the sky as well as from surrounding vegetation. The longwave radiation from trees is determined by its skin temperature, which shows significant heterogeneity depending on its position and morphometric attributes. Here our goal is to derive an effective tree temperature that can be used to estimate the longwave radiation received by the land surface pixel. To this end, we implement these three steps: 1) derive a relation between tree trunk surface temperature and the incident longwave radiation, shortwave radiation, and air temperature; 2) develop an inverse model to calculate the effective temperature by establishing a relationship between the effective temperature and the actual tree temperature; and 3) estimate the effective temperature using widely measured variables, such as solar radiation and forest density. Data used to derive aforementioned relations were obtained at the University of Idaho Experimental Forest, in northern Idaho. Tree skin temperature, incoming longwave radiation, solar radiation received by the tree surface, and air temperature were measured at an isolated tree and a tree within a homogeneous forest stand. Longwave radiation received by the land surface and the sky view factors were also measured at the same two locations. The calculated effective temperature was then compared with the measured tree trunk surface temperature. Additional longwave radiation measurements with pyrgeometer arrays were conducted under forests with different densities to evaluate the relationship between effective temperature and forest density. Our preliminary results show that when exposed to direct shortwave radiation, the tree surface temperature shows a significant difference from the air temperature. 
Under cloudy or shaded conditions, the tree surface temperature closely follows the air temperature. The effective tree temperature follows the air temperature in a dense forest stand, although it is significantly larger than the air temperature near the isolated tree. This discrepancy motivates us to explore ways to represent the effective tree temperature for stands with different densities.
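
    The sub-canopy longwave budget described above combines sky emission and canopy emission weighted by the sky view factor; a sketch using the Stefan-Boltzmann law, with the view factors, temperatures, and emissivity chosen for illustration rather than taken from the field data:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def incoming_longwave(L_sky, V_sky, T_eff_K, emissivity=0.98):
    """Longwave radiation reaching the snow surface: sky contribution
    weighted by the sky view factor V_sky, plus graybody canopy emission
    at the effective tree temperature T_eff_K (emissivity assumed)."""
    return V_sky * L_sky + (1.0 - V_sky) * emissivity * SIGMA * T_eff_K**4

# Illustrative: dense stand (low sky view) vs. open site, same sky and
# effective canopy temperature:
L_dense = incoming_longwave(L_sky=230.0, V_sky=0.2, T_eff_K=270.0)
L_open = incoming_longwave(L_sky=230.0, V_sky=0.9, T_eff_K=270.0)
```

Because the canopy is usually warmer (and more emissive) than a clear sky, the denser stand delivers more longwave to the snow surface in this example, which is why getting the effective tree temperature right matters most under heavy canopy.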

  18. Bone strength and muscle properties in postmenopausal women with and without a recent distal radius fracture.

    PubMed

    Crockett, K; Arnold, C M; Farthing, J P; Chilibeck, P D; Johnston, J D; Bath, B; Baxter-Jones, A D G; Kontulainen, S A

    2015-10-01

    Distal radius (wrist) fracture (DRF) in women over age 50 years is an early sign of bone fragility. Women with a recent DRF compared to women without DRF demonstrated lower bone strength, muscle density, and strength, but no difference in dual-energy x-ray absorptiometry (DXA) measures, suggesting DXA alone may not be a sufficient predictor for DRF risk. The objective of this study was to investigate differences in bone and muscle properties between women with and without a recent DRF. One hundred sixty-six postmenopausal women (50-78 years) were recruited. Participants were excluded if they had taken bone-altering medications in the past 6 months or had medical conditions that severely affected daily living or the upper extremity. Seventy-seven age-matched women with a fracture in the past 6-24 months (Fx, n = 32) and without fracture (NFx, n = 45) were measured for bone and muscle properties using the nondominant (NFx) or non-fractured limb (Fx). Peripheral quantitative computed tomography (pQCT) was used to estimate bone strength in compression (BSIc) at the distal radius and tibia, bone strength in torsion (SSIp) at the shaft sites, muscle density, and area at the forearm and lower leg. Areal bone mineral density at the ultradistal forearm, spine, and femoral neck was measured by DXA. Grip strength and the 30-s chair stand test were used as estimates of upper and lower extremity muscle strength. Limb-specific between-group differences were compared using multivariate analysis of variance (MANOVA). There was a significant group difference (p < 0.05) for the forearm and lower leg, with the Fx group demonstrating 16 and 19% lower BSIc, 3 and 6% lower muscle density, and 20 and 21% lower muscle strength at the upper and lower extremities, respectively. There were no differences between groups for DXA measures. 
Women with recent DRF had lower pQCT-derived estimated bone strength at the distal radius and tibia and lower muscle density and strength at both extremities.

  19. Optimum Selection Age for Wood Density in Loblolly Pine

    Treesearch

    D.P. Gwaze; K.J. Harding; R.C. Purnell; Floyd E. Brigwater

    2002-01-01

    Genetic and phenotypic parameters for core wood density of Pinus taeda L. were estimated for ages ranging from 5 to 25 years at two sites in southern United States. Heritability estimates on an individual-tree basis for core density were lower than expected (0.20-0.31). Age-age genetic correlations were higher than phenotypic correlations,...

  20. Mapping tree density in forests of the southwestern USA using Landsat 8 data

    USGS Publications Warehouse

    Humagain, Kamal; Portillo-Quintero, Carlos; Cox, Robert D.; Cain, James W.

    2017-01-01

    The increase of tree density in forests of the American Southwest promotes extreme fire events, understory biodiversity losses, and degraded habitat conditions for many wildlife species. To ameliorate these changes, managers and scientists have begun planning treatments aimed at reducing fuels and increasing understory biodiversity. However, spatial variability in tree density across the landscape is not well-characterized, and if better known, could greatly influence planning efforts. We used reflectance values from individual Landsat 8 bands (bands 2, 3, 4, 5, 6, and 7) and calculated vegetation indices (difference vegetation index, simple ratios, and normalized vegetation indices) to estimate tree density in an area planned for treatment in the Jemez Mountains, New Mexico, characterized by multiple vegetation types and a complex topography. Because different vegetation types have different spectral signatures, we derived models with multiple predictor variables for each vegetation type, rather than using a single model for the entire project area, and compared the model-derived values to values collected from on-the-ground transects. Among conifer-dominated areas (73% of the project area), the best models (as determined by corrected Akaike Information Criteria (AICc)) included Landsat bands 2, 3, 4, and 7 along with simple ratios, normalized vegetation indices, and the difference vegetation index (R2 values for ponderosa: 0.47, piñon-juniper: 0.52, and spruce-fir: 0.66). On the other hand, in aspen-dominated areas (9% of the project area), the best model included individual bands 4 and 2, simple ratio, and normalized vegetation index (R2 value: 0.97). Most areas dominated by ponderosa, pinyon-juniper, or spruce-fir had more than 100 trees per hectare. About 54% of the study area has medium to high density of trees (100–1000 trees/hectare), and a small fraction (4.5%) of the area has very high density (>1000 trees/hectare). 
Our results provide a better understanding of tree density for identifying areas in need of treatment and planning for more effective treatment. Our analysis also provides an integrated method of estimating tree density across complex landscapes that could be useful for further restoration planning.
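
    The vegetation indices used as predictors are simple band combinations; for Landsat 8, the near-infrared band is band 5 and the red band is band 4, so the simple ratio and normalized difference vegetation index can be sketched as follows (the reflectance values in the example are illustrative):

```python
def simple_ratio(nir, red):
    """Simple ratio: NIR / red (Landsat 8: band 5 / band 4)."""
    return nir / red

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red),
    bounded in [-1, 1], with higher values for denser green vegetation."""
    return (nir - red) / (nir + red)

# Illustrative surface-reflectance values for a conifer-dominated pixel:
sr = simple_ratio(nir=0.30, red=0.05)
nd = ndvi(nir=0.30, red=0.05)
```

Fitting separate models per vegetation type, as the study does, accounts for the fact that the same index value maps to different tree densities in, say, aspen versus spruce-fir stands.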

  1. Effect of water temperature and population density on the population dynamics of Schistosoma mansoni intermediate host snails.

    PubMed

    McCreesh, Nicky; Arinaitwe, Moses; Arineitwe, Wilber; Tukahebwa, Edridah M; Booth, Mark

    2014-11-12

    Mathematical models can be used to identify areas at risk of increased or new schistosomiasis transmission as a result of climate change. The results of these models can be very different when parameterised to different species of host snail, which have varying temperature preferences. Currently, the experimental data needed by these models are available for only a few species of snail. The choice of density-dependent functions can also affect model results, but the effects of increasing densities on Biomphalaria populations have only previously been investigated in artificial aquariums. Laboratory experiments were conducted to estimate Biomphalaria sudanica mortality, fecundity and growth rates at ten different constant water temperatures, ranging from 13-32°C. Snail cages were used to determine the effects of snail densities on B. sudanica and B. stanleyi mortality and fecundity rates in semi-natural conditions in Lake Albert. B. sudanica survival and fecundity were highest at 20°C and 22°C respectively. Growth in shell diameter was estimated to be highest at 23°C in small and medium sized snails, but the relationship between temperature and growth was not clear. The fecundity of both B. sudanica and B. stanleyi decreased by 72-75% with a four-fold increase in population density. Increasing densities four-fold also doubled B. stanleyi mortality rates, but had no effect on the survival of B. sudanica. The optimum temperature for fecundity was lower for B. sudanica than for previously studied species of Biomphalaria. In contrast to other Biomphalaria species, B. sudanica have a distinct peak temperature for survival, as opposed to a plateau of highly suitable temperatures. For both B. stanleyi and B. sudanica, fecundity decreased with increasing population densities. This means that snail populations may experience large fluctuations in numbers, even in the absence of any external factors such as seasonal temperature changes. 
Survival also decreased with increasing density for B. stanleyi, in contrast to B. sudanica and other studied Biomphalaria species where only fecundity has been shown to decrease.
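
    As a back-of-envelope reading of the reported result (not the authors' model), a 72-75% fecundity drop under a four-fold density increase corresponds to a power-law density dependence f(d) ∝ d^(-b), where b solves 4^(-b) = fraction remaining:

```python
import math

def density_exponent(fold_increase, fraction_remaining):
    """Solve fold_increase**(-b) = fraction_remaining for the
    density-dependence exponent b."""
    return -math.log(fraction_remaining) / math.log(fold_increase)

b_low = density_exponent(4.0, 1.0 - 0.72)   # 72% decrease in fecundity
b_high = density_exponent(4.0, 1.0 - 0.75)  # 75% decrease in fecundity
```

An exponent near 1 means fecundity falls roughly in inverse proportion to density, i.e., total egg output per cage stays approximately constant as snail numbers rise.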

  2. Timescales of isotropic and anisotropic cluster collapse

    NASA Astrophysics Data System (ADS)

    Bartelmann, M.; Ehlers, J.; Schneider, P.

    1993-12-01

    From a simple estimate for the formation time of galaxy clusters, Richstone et al. have recently concluded that the evidence for non-virialized structures in a large fraction of observed clusters points towards a high value for the cosmological density parameter Omega0. This conclusion was based on a study of the spherical collapse of density perturbations, assumed to follow a Gaussian probability distribution. In this paper, we extend their treatment in several respects: first, we argue that the collapse does not start from a comoving motion of the perturbation, but that the continuity equation requires an initial velocity perturbation directly related to the density perturbation. This requirement modifies the initial condition for the evolution equation and has the effect that the collapse proceeds faster than in the case where the initial velocity perturbation is set to zero; the timescale is reduced by a factor of up to approximately 0.5. Our results thus strengthen the conclusion of Richstone et al. for a high Omega0. In addition, we study the collapse of density fluctuations in the frame of the Zel'dovich approximation, using as starting condition the analytically known probability distribution of the eigenvalues of the deformation tensor, which depends only on the (Gaussian) width of the perturbation spectrum. Finally, we consider the anisotropic collapse of density perturbations dynamically, again with initial conditions drawn from the probability distribution of the deformation tensor. We find that in both cases of anisotropic collapse, in the Zel'dovich approximation and in the dynamical calculations, the resulting distribution of collapse times agrees remarkably well with the results from spherical collapse. We discuss this agreement and conclude that it is mainly due to the properties of the probability distribution for the eigenvalues of the Zel'dovich deformation tensor. Hence, the conclusions of Richstone et al. 
on the value of Omega0 can be verified and strengthened, even if a more general approach to the collapse of density perturbations is employed. A simple analytic formula for the cluster redshift distribution in an Einstein-de Sitter universe is derived.

  3. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  4. Simulating nailfold capillaroscopy sequences to evaluate algorithms for blood flow estimation.

    PubMed

    Tresadern, P A; Berks, M; Murray, A K; Dinsdale, G; Taylor, C J; Herrick, A L

    2013-01-01

    The effects of systemic sclerosis (SSc), a disease of the connective tissue that causes blood flow problems and can require amputation of the fingers, can be observed indirectly by imaging the capillaries at the nailfold, though taking quantitative measures such as blood flow to diagnose the disease and monitor its progression is not easy. Optical flow algorithms may be applied, though without ground truth (i.e. known blood flow) it is hard to evaluate their accuracy. We propose an image model that generates realistic capillaroscopy videos with known flow, and use this model to quantify the effect of flow rate, cell density and contrast (among others) on estimated flow. This resource will help researchers to design systems that are robust under real-world conditions.
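The evaluation logic this record describes, synthesizing frames with known flow and then scoring an estimator against that ground truth, can be sketched in miniature. This toy (all names and parameters hypothetical, and 1-D rather than full video) builds a periodic intensity profile of blurred "cells", advects it by a known per-frame displacement, and recovers the displacement with a cross-correlation matcher:

```python
import math
import random

def make_profile(n, n_cells, seed=1):
    """1-D intensity profile: Gaussian-blurred 'cells' on a periodic capillary."""
    rng = random.Random(seed)
    prof = [0.0] * n
    for _ in range(n_cells):
        c = rng.randrange(n)
        for d in range(-3, 4):
            prof[(c + d) % n] += math.exp(-d * d / 4.0)
    return prof

def shift(prof, s):
    """Circularly advect the profile by s pixels (the known ground-truth flow)."""
    n = len(prof)
    return [prof[(i - s) % n] for i in range(n)]

def estimate_shift(frame_a, frame_b, max_s=10):
    """Estimate displacement as the argmax of circular cross-correlation."""
    n = len(frame_a)
    def corr(s):
        return sum(frame_a[(i - s) % n] * frame_b[i] for i in range(n))
    return max(range(-max_s, max_s + 1), key=corr)

frame0 = make_profile(200, 12)
frame1 = shift(frame0, 4)          # ground-truth flow: 4 px/frame
estimated = estimate_shift(frame0, frame1)
```

In the noise-free case the estimator is exact; the paper's point is precisely that adding controlled noise, contrast loss, and cell-density changes to such synthetic data quantifies when estimators like this break down.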

  5. Estimation of the Operating Characteristics When the Test Information of the Old Test Is Not Constant. II. Simple Sum Procedure of the Conditional P.D.F. Approach/Normal Approach Method Using Three Subtests of the Old Test, Number 2

    DTIC Science & Technology

    1981-07-01

    Samejima (RR-79-1) suggests that it will be more fruitful to observe the square root of an information function, rather than the information...the estimated density functions, g*(r*), will affect the...

  6. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
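The core scoring idea can be sketched as follows. If a trial CDF F matches the data-generating distribution, then the transformed sorted sample u_(k) = F(x_(k)) behaves like uniform order statistics, and u_(k) ~ Beta(k, n+1-k). Averaging the log-density of each u_(k) under its Beta law gives a score that is higher for trial CDFs closer to the truth. This is only a minimal stand-in for the paper's sample-size-invariant scoring function, with all parameter choices below hypothetical:

```python
import math
import random

def order_statistic_score(sample, trial_cdf):
    """Average log-density of u_(k) = F(x_(k)) under Beta(k, n+1-k)."""
    u = sorted(trial_cdf(x) for x in sample)
    n = len(u)
    score = 0.0
    for k, uk in enumerate(u, start=1):
        uk = min(max(uk, 1e-12), 1.0 - 1e-12)   # guard the logs at 0 and 1
        score += (math.lgamma(n + 1) - math.lgamma(k) - math.lgamma(n + 1 - k)
                  + (k - 1) * math.log(uk) + (n - k) * math.log(1.0 - uk))
    return score / n

def normal_cdf(mu, sigma):
    return lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

rng = random.Random(7)
data = [rng.gauss(0.0, 1.0) for _ in range(500)]
s_true = order_statistic_score(data, normal_cdf(0.0, 1.0))    # correct model
s_wrong = order_statistic_score(data, normal_cdf(1.0, 1.0))   # misspecified mean
```

A density estimator in the spirit of the abstract would then iteratively adjust the trial CDF to maximize such a score rather than a raw likelihood.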

  7. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  8. Item Response Theory with Estimation of the Latent Density Using Davidian Curves

    ERIC Educational Resources Information Center

    Woods, Carol M.; Lin, Nan

    2009-01-01

    Davidian-curve item response theory (DC-IRT) is introduced, evaluated with simulations, and illustrated using data from the Schedule for Nonadaptive and Adaptive Personality Entitlement scale. DC-IRT is a method for fitting unidimensional IRT models with maximum marginal likelihood estimation, in which the latent density is estimated,…

  9. Population density estimated from locations of individuals on a passive detector array

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

    2009-01-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
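The signal-strength detection model described here, detection when received strength exceeds a threshold, implies a probit-shaped detection function. A sketch with hypothetical acoustic parameters (expected strength declining linearly with distance at b1 dB per metre, Gaussian measurement noise), plus a Monte Carlo cross-check:

```python
import math
import random

def p_detect(d, b0=90.0, b1=-0.15, c=45.0, noise_sd=5.0):
    """P(received strength > threshold c) at distance d.

    Strength is modeled as b0 + b1*d + Gaussian noise, so detection
    probability is a probit function of distance.
    """
    mu = b0 + b1 * d
    return 0.5 * math.erfc((c - mu) / (noise_sd * math.sqrt(2.0)))

def p_detect_mc(d, b0=90.0, b1=-0.15, c=45.0, noise_sd=5.0, n=20000, seed=3):
    """Monte Carlo check: simulate noisy strengths, count threshold crossings."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if b0 + b1 * d + rng.gauss(0.0, noise_sd) > c)
    return hits / n
```

In the binary-data case of the abstract, only the exceedance events enter the likelihood; modeling the continuous strengths above threshold adds the information that sharpens the precision gain the authors report.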

  10. OEDGE modeling for the planned tungsten ring experiment on DIII-D

    DOE PAGES

    Elder, J. David; Stangeby, Peter C.; Abrams, Tyler W.; ...

    2017-04-19

    The OEDGE code is used to model tungsten erosion and transport for DIII-D experiments with toroidal rings of high-Z metal tiles. Such modeling is needed for both experimental and diagnostic design to have estimates of the expected core and edge tungsten density and to understand the various factors contributing to the uncertainties in these calculations. OEDGE simulations are performed using the planned experimental magnetic geometries and plasma conditions typical of both L-mode and inter-ELM H-mode discharges in DIII-D. OEDGE plasma reconstruction based on specific representative discharges for similar geometries is used to determine the plasma conditions applied to tungsten plasma impurity simulations. We developed a new model for tungsten erosion in OEDGE which imports charge-state-resolved carbon impurity fluxes and impact energies from a separate OEDGE run which models the carbon production, transport and deposition for the same plasma conditions as the tungsten simulations. Furthermore, these values are then used to calculate the gross tungsten physical sputtering due to carbon plasma impurities, which is then added to any sputtering by deuterium ions; tungsten self-sputtering is also included. The code results are found to be dependent on the following factors: divertor geometry and closure, the choice of cross-field anomalous transport coefficients, divertor plasma conditions (affecting both tungsten source strength and transport), the choice of tungsten atomic physics data used in the model (in particular sviz(Te) for W atoms), and the model of the carbon flux and energy used for calculating the tungsten source due to sputtering. The core tungsten density is found to be of order 10¹⁵ m⁻³ (excluding effects of any core transport barrier and with significant variability depending on the other factors mentioned), with density decaying into the scrape-off layer.

  11. Relating Demographic Characteristics of a Small Mammal to Remotely Sensed Forest-Stand Condition

    PubMed Central

    Lada, Hania; Thomson, James R.; Cunningham, Shaun C.; Mac Nally, Ralph

    2014-01-01

    Many ecological systems around the world are changing rapidly in response to direct (land-use change) and indirect (climate change) human actions. We need tools to assess dynamically, and over appropriate management scales, condition of ecosystems and their responses to potential mitigation of pressures. Using a validated model, we determined whether stand condition of floodplain forests is related to densities of a small mammal (a carnivorous marsupial, Antechinus flavipes) in 60 000 ha of extant river red gum (Eucalyptus camaldulensis) forests in south-eastern Australia in 2004, 2005 and 2011. Stand condition was assessed remotely using models built from ground assessments of stand condition and satellite-derived reflectance. Other covariates, such as volumes of fallen timber, distances to floods, rainfall and life stages were included in the model. Trapping of animals was conducted at 272 plots (0.25 ha) across the region. Densities of second-year females (i.e. females that had survived to a second breeding year) and of second-year females with suckled teats (i.e. inferred to have been successful mothers) were higher in stands with the highest condition. There was no evidence of a relationship with stand condition for males or all females. These outcomes show that remotely-sensed estimates of stand condition (here floodplain forests) are relatable to some demographic characteristics of a small mammal species, and may provide useful information about the capacity of ecosystems to support animal populations. Over-regulation of large, lowland rivers has led to declines in many facets of floodplain function. If management of water resources continues as it has in recent decades, then our results suggest that there will be further deterioration in stand condition and a decreased capacity for female yellow-footed antechinuses to breed multiple times. PMID:24621967

  12. Estimating population density and connectivity of American mink using spatial capture-recapture

    USGS Publications Warehouse

    Fuller, Angela K.; Sutherland, Christopher S.; Royle, Andy; Hare, Matthew P.

    2016-01-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture–recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture–recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km2 area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture–recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.
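The non-Euclidean encounter model in this record can be sketched as two pieces: a least-cost ("ecological") distance computed over a resistance surface with Dijkstra's algorithm, plugged into a half-normal encounter-probability function. The grid, resistance values, and parameters below are hypothetical, not the authors' fitted values:

```python
import heapq
import math

def least_cost_distance(cost, src, dst):
    """Dijkstra on a 4-connected grid; an edge costs the mean of its two cells."""
    rows, cols = len(cost), len(cost[0])
    best = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == dst:
            return d
        if d > best.get((r, c), math.inf):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (cost[r][c] + cost[nr][nc])
                if nd < best.get((nr, nc), math.inf):
                    best[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return math.inf

def encounter_prob(dist, g0=0.3, sigma=4.0):
    """Half-normal encounter probability on ecological distance."""
    return g0 * math.exp(-dist * dist / (2.0 * sigma * sigma))

# Uniform landscape vs. one with a high-resistance band (e.g. land away from water).
uniform = [[1.0] * 10 for _ in range(10)]
barrier = [row[:] for row in uniform]
for r in range(10):
    barrier[r][5] = 50.0
d_uniform = least_cost_distance(uniform, (5, 0), (5, 9))
d_barrier = least_cost_distance(barrier, (5, 0), (5, 9))
```

Two detectors equally far apart in Euclidean terms can thus have very different encounter probabilities once resistance is accounted for, which is what lets the fitted model speak to connectivity as well as density.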

  13. Scattered image artifacts from cone beam computed tomography and its clinical potential in bone mineral density estimation.

    PubMed

    Ko, Hoon; Jeong, Kwanmoon; Lee, Chang-Hoon; Jun, Hong Young; Jeong, Changwon; Lee, Myeung Su; Nam, Yunyoung; Yoon, Kwon-Ha; Lee, Jinseok

    2016-01-01

    Image artifacts affect the quality of medical images and may obscure anatomic structure and pathology. Numerous methods for suppression and correction of scattered image artifacts have been suggested in the past three decades. In this paper, we assessed the feasibility of use of information on scattered artifacts for estimation of bone mineral density (BMD) without dual-energy X-ray absorptiometry (DXA) or quantitative computed tomographic imaging (QCT). To investigate the relationship between scattered image artifacts and BMD, we first used a forearm phantom and cone-beam computed tomography. In the phantom, we considered two regions of interest, bone-equivalent solid material containing 50 mg HA per cm³ and water, to represent low- and high-density trabecular bone, respectively. We compared the scattered image artifacts in the high-density material with those in the low-density material. The technique was then applied to osteoporosis patients and healthy subjects to assess its feasibility for BMD estimation. The high-density material produced a greater number of scattered image artifacts than the low-density material. Moreover, the radius and ulna of healthy subjects produced a greater number of scattered image artifacts than those from osteoporosis patients. Although other parameters, such as bone thickness and X-ray incidence, should be considered, our technique facilitated BMD estimation directly without DXA or QCT. We believe that BMD estimation based on assessment of scattered image artifacts may benefit the prevention, early treatment and management of osteoporosis.

  14. Estimating population density and connectivity of American mink using spatial capture-recapture.

    PubMed

    Fuller, Angela K; Sutherland, Chris S; Royle, J Andrew; Hare, Matthew P

    2016-06-01

    Estimating the abundance or density of populations is fundamental to the conservation and management of species, and as landscapes become more fragmented, maintaining landscape connectivity has become one of the most important challenges for biodiversity conservation. Yet these two issues have never been formally integrated together in a model that simultaneously models abundance while accounting for connectivity of a landscape. We demonstrate an application of using capture-recapture to develop a model of animal density using a least-cost path model for individual encounter probability that accounts for non-Euclidean connectivity in a highly structured network. We utilized scat detection dogs (Canis lupus familiaris) as a means of collecting non-invasive genetic samples of American mink (Neovison vison) individuals and used spatial capture-recapture models (SCR) to gain inferences about mink population density and connectivity. Density of mink was not constant across the landscape, but rather increased with increasing distance from city, town, or village centers, and mink activity was associated with water. The SCR model allowed us to estimate the density and spatial distribution of individuals across a 388 km² area. The model was used to investigate patterns of space usage and to evaluate covariate effects on encounter probabilities, including differences between sexes. This study provides an application of capture-recapture models based on ecological distance, allowing us to directly estimate landscape connectivity. This approach should be widely applicable to provide simultaneous direct estimates of density, space usage, and landscape connectivity for many species.

  15. Response of bird species densities to habitat structure and fire history along a Midwestern open-forest gradient

    USGS Publications Warehouse

    Grundel, R.; Pavlovic, N.B.

    2007-01-01

    Oak savannas were historically common but are currently rare in the Midwestern United States. We assessed possible associations of bird species with savannas and other threatened habitats in the region by relating fire frequency and vegetation characteristics to seasonal densities of 72 bird species distributed across an open-forest gradient in northwestern Indiana. About one-third of the species did not exhibit statistically significant relationships with any combination of seven vegetation characteristics that included vegetation cover in five vertical strata, dead tree density, and tree height. For 40% of the remaining species, models best predicting species density incorporated tree density. Therefore, management based solely on manipulating tree density may not be an adequate strategy for managing bird populations along this open-forest gradient. Few species exhibited sharp peaks in predicted density under habitat conditions expected in restored savannas, suggesting that few savanna specialists occur among Midwestern bird species. When fire frequency, measured over fifteen years, was added to vegetation characteristics as a predictor of species density, it was incorporated into models for about one-quarter of species, suggesting that fire may modify habitat characteristics in ways that are important for birds but not captured by the structural habitat variables measured. Among those species, similar numbers had peaks in predicted density at low, intermediate, or high fire frequency. For species suggested by previous studies to have a preference for oak savannas along the open-forest gradient, estimated density was maximized at an average fire return interval of about one fire every three years. © The Cooper Ornithological Society 2007.

  16. Benchmarking density functionals for hydrogen-helium mixtures with quantum Monte Carlo: Energetics, pressures, and forces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.

    An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.

  17. Benchmarking density functionals for hydrogen-helium mixtures with quantum Monte Carlo: Energetics, pressures, and forces

    DOE PAGES

    Clay, Raymond C.; Holzmann, Markus; Ceperley, David M.; ...

    2016-01-19

    An accurate understanding of the phase diagram of dense hydrogen and helium mixtures is a crucial component in the construction of accurate models of Jupiter, Saturn, and Jovian extrasolar planets. Though DFT-based first-principles methods have the potential to provide the accuracy and computational efficiency required for this task, recent benchmarking in hydrogen has shown that achieving this accuracy requires a judicious choice of functional, and a quantification of the errors introduced. In this work, we present a quantum Monte Carlo based benchmarking study of a wide range of density functionals for use in hydrogen-helium mixtures at thermodynamic conditions relevant for Jovian planets. Not only do we continue our program of benchmarking energetics and pressures, but we deploy QMC-based force estimators and use them to gain insights into how well the local liquid structure is captured by different density functionals. We find that TPSS, BLYP and vdW-DF are the most accurate functionals by most metrics, and that the enthalpy, energy, and pressure errors are very well behaved as a function of helium concentration. Beyond this, we highlight and analyze the major error trends and relative differences exhibited by the major classes of functionals, and estimate the magnitudes of these effects when possible.

  18. Estimating Density and Temperature Dependence of Juvenile Vital Rates Using a Hidden Markov Model

    PubMed Central

    McElderry, Robert M.

    2017-01-01

    Organisms in the wild have cryptic life stages that are sensitive to changing environmental conditions and can be difficult to survey. In this study, I used mark-recapture methods to repeatedly survey Anaea aidea (Nymphalidae) caterpillars in nature, then modeled caterpillar demography as a hidden Markov process to assess if temporal variability in temperature and density influence the survival and growth of A. aidea over time. Individual encounter histories result from the joint likelihood of being alive and observed in a particular stage, and I have included hidden states by separating demography and observations into parallel and independent processes. I constructed a demographic matrix containing the probabilities of all possible fates for each stage, including hidden states, e.g., eggs and pupae. I observed both dead and live caterpillars with high probability. Peak caterpillar abundance attracted multiple predators, and survival of fifth instars declined as per capita predation rate increased through spring. A time lag between predator and prey abundance was likely the cause of improved fifth instar survival estimated at high density. Growth rates showed an increase with temperature, but the preferred model did not include temperature. This work illustrates how state-space models can include unobservable stages and hidden state processes to evaluate how environmental factors influence vital rates of cryptic life stages in the wild. PMID:28505138
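The hidden-state likelihood machinery this record describes can be illustrated with a toy stage-structured HMM (all rates hypothetical, not the fitted A. aidea values): hidden stages egg, caterpillar, pupa, and dead, of which only caterpillars are detectable, with an encounter history scored by the forward algorithm:

```python
import math
from itertools import product

STAGES = ("egg", "caterpillar", "pupa", "dead")
# Per-survey transition probabilities (each row sums to 1).
T = [
    [0.3, 0.5, 0.0, 0.2],   # egg: persist / hatch / - / die
    [0.0, 0.4, 0.4, 0.2],   # caterpillar: persist / pupate / die
    [0.0, 0.0, 0.9, 0.1],   # pupa: persist / die
    [0.0, 0.0, 0.0, 1.0],   # dead is absorbing
]
# Emission P(obs | stage); obs 0 = not seen, 1 = seen.  Only caterpillars
# are detectable (eggs and pupae are the unobservable hidden stages).
E = [
    [1.0, 0.0],
    [0.4, 0.6],
    [1.0, 0.0],
    [1.0, 0.0],
]
INIT = [0.8, 0.2, 0.0, 0.0]   # initial stage distribution

def forward_loglik(obs):
    """Log-likelihood of an encounter history, marginalizing hidden stages."""
    alpha = [INIT[s] * E[s][obs[0]] for s in range(4)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * T[s][s2] for s in range(4)) * E[s2][o]
                 for s2 in range(4)]
    return math.log(sum(alpha))

ll = forward_loglik((0, 1, 1, 0))   # not seen, seen twice, then not seen
```

Separating demography (T) from observation (E) in this way is exactly what lets the model estimate survival and growth of stages that are never directly observed; covariates such as temperature or density would enter by making the entries of T functions of the environment.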

  19. Concerning neutral flux shielding in the U-3M torsatron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreval, N. B., E-mail: mdreval@kipt.kharkov.ua

    2015-03-15

    The volume of the torsatron U-3M vacuum chamber is about 70 m³, whereas the plasma volume is about 0.3 m³. The large buffer volume of the chamber serves as a source of a substantial neutral flux into the U-3M plasma. A fraction of this flux falls onto the torsatron helical coils located in front of the plasma, which modifies the dynamics of the neutral influx into the plasma. The shielding of the molecular flux from the buffer volume into the plasma is estimated using numerical calculations. Only about 10% of the incident flux reaches the plasma volume. Estimates show that about 20% of atoms escape beyond the helical coils without colliding with them. Under these conditions, the helical coils substantially affect the neutral flux. A discharge regime with a hot low-density plasma produced by a frame antenna is considered. The spatial distribution of the molecular density produced in this regime by the molecular flux from the chamber buffer volume after it has passed between the helical coils is calculated. The contributions of the fluxes emerging from the side and inner surfaces of the helical coils are considered. The calculations show that the shape of the spatial distribution of the molecular density differs substantially from the shape of the magnetic surfaces.

  20. Relativistic high-current electron-beam stopping-power characterization in solids and plasmas: collisional versus resistive effects.

    PubMed

    Vauzour, B; Santos, J J; Debayle, A; Hulin, S; Schlenvoigt, H-P; Vaisseau, X; Batani, D; Baton, S D; Honrubia, J J; Nicolaï, Ph; Beg, F N; Benocci, R; Chawla, S; Coury, M; Dorchies, F; Fourment, C; d'Humières, E; Jarrot, L C; McKenna, P; Rhee, Y J; Tikhonchuk, V T; Volpe, L; Yahia, V

    2012-12-21

    We present experimental and numerical results on the transport of intense-laser-pulse-produced fast electron beams through aluminum samples, either solid or compressed and heated by laser-induced planar shock propagation. Thanks to absolute Kα yield measurements and their very good agreement with results from numerical simulations, we quantify the collisional and resistive fast electron stopping powers: for electron current densities of ≈8 × 10¹⁰ A/cm² they reach 1.5 keV/μm and 0.8 keV/μm, respectively. For higher current densities up to 10¹² A/cm², numerical simulations show resistive and collisional energy losses at comparable levels. Analytical estimates predict that the resistive stopping power will remain at the level of 1 keV/μm for electron current densities of 10¹⁴ A/cm², representative of the full-scale conditions in the fast ignition of inertially confined fusion targets.
